Re: [openstack-dev] [nova] NUMA-aware live migration: easy but incomplete vs complete but hard

2018-06-21 Thread Mooney, Sean K


> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Thursday, June 21, 2018 2:37 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] NUMA-aware live migration: easy but
> incomplete vs complete but hard
> 
> On 06/18/2018 10:16 AM, Artom Lifshitz wrote:
> > Hey all,
> >
> > For Rocky I'm trying to get live migration to work properly for
> > instances that have a NUMA topology [1].
> >
> > A question that came up on one of patches [2] is how to handle
> > resources claims on the destination, or indeed whether to handle that
> > at all.
> >
> > The previous attempt's approach [3] (call it A) was to use the
> > resource tracker. This is race-free and the "correct" way to do it,
> > but the code is pretty opaque and not easily reviewable, as evidenced
> > by [3] sitting in review purgatory for literally years.
> >
> > A simpler approach (call it B) is to ignore resource claims entirely
> > for now and wait for NUMA in placement to land in order to handle it
> > that way. This is obviously race-prone and not the "correct" way of
> > doing it, but the code would be relatively easy to review.
> >
> > For the longest time, live migration did not keep track of resources
> > (until it started updating placement allocations). The message to
> > operators was essentially "we're giving you this massive hammer,
> don't
> > break your fingers." Continuing to ignore resource claims for now is
> > just maintaining the status quo. In addition, there is value in
> > improving NUMA live migration *now*, even if the improvement is
> > incomplete because it's missing resource claims. "Best is the enemy
> of
> > good" and all that. Finally, making use of the resource tracker is
> > just work that we know will get thrown out once we start using
> > placement for NUMA resources.
> >
> > For all those reasons, I would favor approach B, but I wanted to ask
> > the community for their thoughts.
> 
> Side question... does either approach touch PCI device management
> during live migration?
> 
> I ask because the only workloads I've ever seen that pin guest vCPU
> threads to specific host processors -- or make use of huge pages
> consumed from a specific host NUMA node -- have also made use of SR-IOV
> and/or PCI passthrough. [1]
> 
> If workloads that use PCI passthrough or SR-IOV VFs cannot be live
> migrated (due to existing complications in the lower-level virt layers)
> I don't see much of a point spending lots of developer resources trying
> to "fix" this situation when in the real world, only a mythical
> workload that uses CPU pinning or huge pages but *doesn't* use PCI
> passthrough or SR-IOV VFs would be helped by it.
> 
> Best,
> -jay
> 
> [1] I know I'm only one person, but every workload I've seen that
> requires pinned CPUs and/or huge pages is a VNF that has been
> essentially an ASIC that a telco OEM/vendor has converted into software
> and requires the same guarantees that the ASIC and custom hardware gave
> the original hardware-based workload. These VNFs, every single one of
> them, used either PCI passthrough or SR-IOV VFs to handle latency-
> sensitive network I/O.
[Mooney, Sean K] I would generally agree, but with the extension of including
DPDK-based vswitches like OVS-DPDK or VPP.
CPU-pinned or hugepage-backed guests generally also have some kind of
high-performance networking solution, or use a hardware accelerator like a
GPU, to justify the performance assertion that pinning of cores or RAM is
required.
A DPDK networking stack would, however, not require the PCI remapping to be
addressed, though I believe that is planned to be added in Stein.
> 


Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources

2018-04-18 Thread Mooney, Sean K


> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Wednesday, April 18, 2018 3:39 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [placement][nova] Decision time on
> granular request groups for like resources
> 
> On 04/18/2018 10:30 AM, Eric Fried wrote:
> > Thanks for describing the proposals clearly and concisely, Jay.
> >
> > My preamble would have been that we need to support two use cases:
> >
> > - "explicit anti-affinity": make sure certain parts of my request
> land
> > on *different* providers;
> > - "any fit": make sure my instance lands *somewhere*.
> >
[Mooney, Sean K] For completeness, we must also support explicit affinity.
So the three cases are:
"explicit anti-affinity": make sure certain parts of my request land
  on *different* providers in the same tree (think VFs for bonded ports).
"explicit affinity": make sure certain parts of my request land
  on the *same* provider in the same tree (this is the NUMA affinity case
  for RAM and CPUs).
"any fit": make sure my instance lands *somewhere* within the same tree.

We also have to be aware of the implications for sharing resource providers
here: with Jay's approach you cannot mix shared and non-shared resources in a
numbered request group. With Eric's proposal I believe allocations within a
numbered request group can come from both sharing providers and local
providers, assuming you do not use traits to confine that behavior.
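For illustration, the three cases might look like the following placement
queries. This is a hedged sketch: the numbered resources1/resources2 syntax
is from the granular spec, but the group_policy parameter is an assumption
drawn from the two competing proposals, not a settled API.

  # explicit anti-affinity: each VF from a different provider in the tree
  GET /allocation_candidates?resources1=SRIOV_NET_VF:1&resources2=SRIOV_NET_VF:1&group_policy=isolate

  # explicit affinity: both resources from the same provider (one group)
  GET /allocation_candidates?resources1=VCPU:2,MEMORY_MB:2048

  # any fit: the unnumbered group, satisfied anywhere in the tree
  GET /allocation_candidates?resources=VCPU:2,MEMORY_MB:2048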
 
> > Both proposals address both use cases, but in different ways.
> 
> Right.
> 
> It's important to point out when we say "different providers" in this
> ML post, we are specifically referring to different providers *within a
> tree of providers*. We are not referring to completely separate compute
> hosts. We are referring to things like multiple NUMA cells that expose
> CPU resources on a single compute host or multiple SR-IOV-enabled
> physical functions that expose SR-IOV VFs for use by guests.
> 
> Best.
> -jay
> 
> >> "By default, should resources/traits submitted in different numbered
> >> request groups be supplied by separate resource providers?"
> >
> > I agree this question needs to be answered, but that won't
> necessarily
> > inform which path we choose.  Viewpoint B [3] is set up to go either
> > way: either we're unrestricted by default and use a queryparam to
> > force separation; or we're split by default and use a queryparam to
> > allow the unrestricted behavior.
> >
> > Otherwise I agree with everything Jay said.
> >
> > -efried
> >
> > On 04/18/2018 09:06 AM, Jay Pipes wrote:
> >> Stackers,
> >>
> >> Eric Fried and I are currently at an impasse regarding a decision
> >> that will have far-reaching (and end-user facing) impacts to the
> >> placement API and how nova interacts with the placement service from
> >> the nova scheduler.
> >>
> >> We need to make a decision regarding the following question:
> >>
> >>
> >> There are two competing proposals right now (both being amendments
> to
> >> the original granular request groups spec [1]) which outline two
> >> different viewpoints.
> >>
> >> Viewpoint A [2], from me, is that like resources listed in different
> >> granular request groups should mean that those resources will be
> >> sourced from *different* resource providers.
> >>
> >> In other words, if I issue the following request:
> >>
> >> GET /allocation_candidates?resources1=VCPU:1&resources2=VCPU:1
> >>
> >> Then I am assured of getting allocation candidates that contain 2
> >> distinct resource providers consuming 1 VCPU from each provider.
> >>
> >> Viewpoint B [3], from Eric, is that like resources listed in
> >> different granular request groups should not necessarily mean that
> >> those resources will be sourced from different resource providers.
> >> They *could* be sourced from different providers, or they could be
> >> sourced from the same provider.
> >>
> >> Both proposals include ways to specify whether certain resources or
> >> whole request groups can be forced to be sources from either a
> single
> >> provider or from different providers.
> >>
> >> In Viewpoint A, the proposal is to have a
> >> can_split=RESOURCE1,RESOURCE2 query parameter that would indicate

Re: [openstack-dev] [Nova] [Cyborg] Tracking multiple functions

2018-03-06 Thread Mooney, Sean K


From: Matthew Booth [mailto:mbo...@redhat.com]
Sent: Saturday, March 3, 2018 4:15 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [Nova] [Cyborg] Tracking multiple functions

On 2 March 2018 at 14:31, Jay Pipes <jaypi...@gmail.com> wrote:
On 03/02/2018 02:00 PM, Nadathur, Sundar wrote:
Hello Nova team,

 During the Cyborg discussion at Rocky PTG, we proposed a flow for FPGAs 
wherein the request spec asks for a device type as a resource class, and 
optionally a function (such as encryption) in the extra specs. This does not 
seem to work well for the usage model that I’ll describe below.

An FPGA device may implement more than one function. For example, it may 
implement both compression and encryption. Say a cluster has 10 devices of 
device type X, and each of them is programmed to offer 2 instances of function 
A and 4 instances of function B. More specifically, the device may implement 6 
PCI functions, with 2 of them tied to function A, and the other 4 tied to 
function B. So, we could have 6 separate instances accessing functions on the 
same device.

Does this imply that Cyborg can't reprogram the FPGA at all?
[Mooney, Sean K] Cyborg is intended to support fixed-function accelerators
too, so it will not always be able to program the accelerator. In the case
where an FPGA is preprogrammed with a multi-function bitstream that is
statically provisioned, Cyborg will not be able to reprogram the slot if any
of the functions from that slot are already allocated to an instance. In that
case it will have to treat it like a fixed-function device and simply
allocate an unused VF of the correct type, if available.



In the current flow, the device type X is modeled as a resource class, so 
Placement will count how many of them are in use. A flavor for ‘RC 
device-type-X + function A’ will consume one instance of the RC device-type-X.  
But this is not right because this precludes other functions on the same device 
instance from getting used.

One way to solve this is to declare functions A and B as resource classes 
themselves and have the flavor request the function RC. Placement will then 
correctly count the function instances. However, there is still a problem: if 
the requested function A is not available, Placement will return an empty list 
of RPs, but we need some way to reprogram some device to create an instance of 
function A.
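As a hedged sketch of that modelling (the resource class names are
hypothetical, and the osc-placement commands assume the device already has
its own resource provider):

  # one custom resource class per function
  openstack resource class create CUSTOM_FPGA_X_FUNCTION_A
  openstack resource class create CUSTOM_FPGA_X_FUNCTION_B

  # inventory on the device's resource provider: 2 x A, 4 x B
  openstack resource provider inventory set $DEVICE_RP_UUID \
      --resource CUSTOM_FPGA_X_FUNCTION_A=2 \
      --resource CUSTOM_FPGA_X_FUNCTION_B=4

A flavor would then request resources:CUSTOM_FPGA_X_FUNCTION_A=1 and
placement would count function instances rather than whole devices.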

Clearly, nova is not going to be reprogramming devices with an instance of a 
particular function.

Cyborg might need to have a separate agent that listens to the nova 
notifications queue and upon seeing an event that indicates a failed build due 
to lack of resources, then Cyborg can try and reprogram a device and then try 
rebuilding the original request.

It was my understanding from that discussion that we intend to insert Cyborg 
into the spawn workflow for device configuration in the same way that we 
currently insert resources provided by Cinder and Neutron. So while Nova won't 
be reprogramming a device, it will be calling out to Cyborg to reprogram a 
device, and waiting while that happens.
My understanding is (and I concede some areas are a little hazy):
* The flavors says device type X with function Y
* Placement tells us everywhere with device type X
* A weigher orders these by devices which already have an available function Y 
(where is this metadata stored?)
* Nova schedules to host Z
* Nova host Z asks cyborg for a local function Y and blocks
  * Cyborg hopefully returns function Y which is already available
  * If not, Cyborg reprograms a function Y, then returns it
Can anybody correct me/fill in the gaps?
[Mooney, Sean K] That correlates closely with my recollection. As for the
metadata, I think the weigher may need to call out to Cyborg to retrieve it,
as it will not be available in the host state object.
Matt


--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG

Phone: +442070094448 (UK)



Re: [openstack-dev] [openstack-ansible] Limiting pip wheel builds for OpenStack clients

2018-01-24 Thread Mooney, Sean K


> -Original Message-
> From: Major Hayden [mailto:ma...@mhtx.net]
> Sent: Wednesday, January 24, 2018 8:03 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [openstack-ansible] Limiting pip wheel builds
> for OpenStack clients
> 
> Hey there,
> 
> I was spelunking into the slow wheel build problems we've been seeing
> in CentOS and I found that our wheel build process was spending 4-6
> minutes building cassandra-driver. The wheel build process usually
> takes 8-12 minutes, so half the time is being spent there.
> 
> More digging revealed that cassandra-driver is a dependency of python-
> monascaclient, which is a dependency of heat. The requirements.txt for
> heat drags in all of the clients:
> 
>   https://github.com/openstack/heat/blob/master/requirements.txt
[Mooney, Sean K] The python-monascaclient package is presumably an optional
dependency of heat, as are the other clients.
E.g. I would hope that if you are using heat with a cloud that does not have
Monasca, it could still run without having python-monascaclient installed.
So all of the clients should probably be moved from requirements.txt to
test-requirements.txt, and only the minimal packages required for heat to
work should be in requirements.txt.
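A hedged sketch of what that split could look like (package names from this
thread; version pins omitted deliberately):

  # requirements.txt: only what heat needs on every deployment
  pbr
  oslo.config
  keystoneauth1

  # setup.cfg: optional clients exposed as extras instead
  [extras]
  monasca =
      python-monascaclient

Deployment tools could then install heat[monasca] only when Monasca is
actually part of the cloud.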
> 
> We're already doing selective wheel builds and building only the wheels
> and venvs we need for the OpenStack services which are selected for
> deployment. Would it make sense to reduce the OpenStack client list for
> heat during the wheel/venv build? For example, if we're not deploying
> monasca, should we build/venv the python-monascaclient package (and its
> dependencies)?
> 
> I've opened a bug:
> 
>   https://bugs.launchpad.net/openstack-ansible/+bug/1745215
> 
> --
> Major Hayden
> 


Re: [openstack-dev] [kuryr][os-vif][nova] os-vif 1.8.0 breaks kuryr-kubernetes

2018-01-15 Thread Mooney, Sean K


> -Original Message-
> From: mdu...@redhat.com [mailto:mdu...@redhat.com]
> Sent: Monday, January 15, 2018 4:46 PM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [kuryr][os-vif][nova] os-vif 1.8.0 breaks
> kuryr-kubernetes
> 
> Hi,
> 
> os-vif commit [1] introduced a non-backward compatible change to the
> Subnet object - removal of ips field. Turns out kuryr-kubernetes were
> depending on that e.g. here [1] and we're now broken with os-vif 1.8.0.
> 
> kuryr-kubernetes is saving the VIF objects into the K8s resources
> annotations, so to keep backwards compatibility we need
> VIFBase.obj_make_compatible able to backport the data back into the
> Subnet object. Or be able to load the older data to the newer object.
> Anyone have an advice how we should proceed with that issue?
[Mooney, Sean K] I believe obj_make_compatible methods were in the original
patch, but they were removed as we did not know of any user of this field.
The 'ips' field in the Subnet object was a legacy holdover from when the
object was ported from nova-network; it is never used by nova when calling
os-vif today, hence the change to align the data structure more closely with
neutron's, where the fixed IPs are an attribute of the port.
The change was made to ensure no future users of os-vif consumed the fixed
IPs from the subnet object, but I guess kuryr-kubernetes had already done so.

Ideally we would migrate kuryr-kubernetes to consume fixed_ips from the VIF
object instead of the subnet, but if we can introduce a patch to os-vif to
provide backwards compatibility before the non-client library freeze on
Thursday, we can include that in Queens.
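For context, a minimal sketch of the oslo.versionedobjects mechanism
involved. This is the generic backporting pattern with a hypothetical field
history (suppose 'ips' had been *added* in 1.1), not the actual os-vif
objects or the eventual fix:

    from oslo_utils import versionutils
    from oslo_versionedobjects import base
    from oslo_versionedobjects import fields


    @base.VersionedObjectRegistry.register
    class Subnet(base.VersionedObject):
        # hypothetical history: 1.0 = cidr only, 1.1 = added 'ips'
        VERSION = '1.1'

        fields = {
            'cidr': fields.StringField(nullable=True),
            'ips': fields.ListOfStringsField(default=[]),
        }

        def obj_make_compatible(self, primitive, target_version):
            super(Subnet, self).obj_make_compatible(primitive,
                                                    target_version)
            target = versionutils.convert_version_to_tuple(target_version)
            if target < (1, 1):
                # a 1.0 consumer does not know the 'ips' field, so drop
                # it from the serialized primitive when backporting
                primitive.pop('ips', None)

The kuryr-kubernetes case is the mirror image: a field was removed, so old
serialized data (the k8s annotations) carries a key the new object no longer
defines, which is why a compatibility shim in os-vif is needed.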


> 
> It would also be nice to setup a kuryr-kubernetes gate on the os-vif
> repo. If there are no objections to that I'd volunteer to submit a
> commit that adds it.
[Mooney, Sean K] I would be happy to see gates from all consumers of os-vif,
so go for it.
Related to this, https://review.openstack.org/#/c/509107/4 is currently
abandoned, but I would also like to revive this change in Rocky. Neutron has
supported multiple DHCP servers for some time; nova-network only supported
one, hence why the dhcp_server field is currently singular.
Will this affect kuryr-kubernetes?
Are ye currently working around this issue in some other way?

> 
> Thanks,
> Michal
> 
> [1] https://review.openstack.org/#/c/508498
> [2] https://github.com/openstack/kuryr-kubernetes/blob/18db6499432e6cab61059eb5abeeaad3ea40b6e4/kuryr_kubernetes/cni/binding/base.py#L64-L66
> 


Re: [openstack-dev] [neutron][neutron-lib]Service function defintion files

2018-01-05 Thread Mooney, Sean K


From: CARVER, PAUL [mailto:pc2...@att.com]
Sent: Thursday, December 28, 2017 2:57 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][neutron-lib]Service function defintion 
files

It was a gating criterion for stadium status. The idea was that for a
stadium project the neutron team would have review authority over the API but
wouldn't necessarily review or be overly familiar with the implementation.

A project that didn't have its API definition in neutron-lib could do anything
it wanted with its API and wouldn't be a neutron subproject because the neutron 
team wouldn't necessarily know anything at all about it.

For a neutron subproject there would at least theoretically be members of the 
neutron team who are familiar with the API and who ensure some sort of 
consistency across APIs of all neutron subprojects.

This is also a gating criterion for publishing API documentation on
api.openstack.org vs publishing somewhere else. Again, the idea being that the 
neutron team would be able, at least in some sense, to "vouch for" the 
OpenStack networking APIs, but only for "official" neutron stadium subprojects.

Projects that don't meet the stadium criteria, including having api-def in 
neutron-lib, are "anything goes" and not part of neutron because no one from 
the neutron team is assumed to know anything about them. They may work just 
fine, it's just that you can't assume that anyone from neutron has anything to 
do with them or even knows what they do.
[Mooney, Sean K] As Paul said above, this has been a requirement for stadium
membership for some time. Ocata was effectively the first release where this
came into effect:
https://github.com/openstack/neutron-specs/blob/master/specs/stadium/ocata.rst#how-reconcile-api-and-client-bindings
but it was started in Newton:
https://github.com/openstack/neutron-specs/blob/master/specs/newton/neutron-stadium.rst
with the concept of a neutron-api project, which was folded into neutron-lib
when implemented instead of being an additional pure API project.




--
Paul Carver
V: 732.545.7377
C: 908.803.1656



 Original message 
From: Ian Wells <ijw.ubu...@cack.org.uk>
Date: 12/27/17 21:57 (GMT-05:00)
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [neutron][neutron-lib]Service function defintion files

Hey,

Can someone explain how the API definition files for several service plugins 
ended up in neutron-lib?  I can see that they've been moved there from the 
plugins themselves (e.g. networking-bgpvpn has 
https://github.com/openstack/neutron-lib/commit/3d3ab8009cf435d946e206849e85d4bc9d149474#diff-11482323575c6bd25b742c3b6ba2bf17)
 and that there's a stadium element to it judging by some earlier commits on 
the same directory, but I don't understand the reasoning why such service 
plugins wouldn't be self-contained - perhaps someone knows the history?

Thanks,
--
Ian.


Re: [openstack-dev] [etsinfv][gap-04][blazar]: Clarification on the scope of the capacity query

2017-12-12 Thread Mooney, Sean K


> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Tuesday, December 12, 2017 3:02 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [etsinfv][gap-04][blazar]: Clarification
> on the scope of the capacity query
> 
> On 12/11/2017 12:41 PM, Csatari, Gergely (Nokia - HU/Budapest) wrote:
> > Hi Jay,
> >
> > Okay. Thanks for the clarification. Makes sense.
> >
> > Random-thinking:
> > Maybe the best would be to have a privilege level what covers the
> needs of MANO/NFVO, but still not full admin privileges. Do you think
> is this possible?
> 
> I think that the differences between the super-privileged user needs
> that a MANO system has and an administrative user are pretty small. The
> MANO system needs to be able to query and dynamically adjust resource
> inventories, move and grow/shrink workloads as needed and essentially
> act like the underlying hardware is wholly owned and operated by
> itself.
> 
> Really, the only privilege that the MANO system user *doesn't* need is
> the ability to create new users/projects in Keystone. Everything else
> is something that the MANO system user needs to be able to do. This is
> why I've called NFV (and particularly MANO/NFVO) a "purpose-built telco
> application" in the past. And I don't say that as some sort of put-down
> of NFV. I'm just pointing out the reality of things, that's all.
[Mooney, Sean K] Not all MANO systems require admin privileges. ONAP/Open-O/
ECOMP do; as far as I am aware, OSM does not strictly require admin
privileges in all cases.
E.g. it is intended to be able to query a VIM or an IaaS system such as
OpenStack for preexisting flavors and images, and use them if they exist,
instead of always needing the permissions to create them. If the cloud it is
managing does not have the features it requires and it does not have admin
credentials to create them, it will be unable to fulfill the requested VNF
instantiation. Similarly, on the networking side, not all VNFs will require a
provider network such as a VLAN network to function. Since the networking-sfc
API is unprivileged, a WAN optimizer or DPI engine can still be injected into
a neutron tenant network without admin rights. So in principle a MANO system
can be a standard unprivileged tenant; however, ONAP/Open-O and ECOMP do not
support that use case in their architecture.
> 
> The ramification of this reality is that people deploying NFV using
> cloud infrastructure software like OpenStack really need to fully
> isolate the infrastructure environments that are used for VNFs (the
> things managed by the MANO/NFVO) from the infrastructure environments
> that are used for more "traditional" virtual private server or IT
> applications.
> 
> Best,
> -jay
> 


Re: [openstack-dev] Removing internet access from unit test gates

2017-11-21 Thread Mooney, Sean K


> -Original Message-
> From: Jeremy Stanley [mailto:fu...@yuggoth.org]
> Sent: Tuesday, November 21, 2017 3:05 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] Removing internet access from unit test
> gates
> 
> On 2017-11-21 09:28:20 +0100 (+0100), Thomas Goirand wrote:
> [...]
> > The only way that I see going forward, is having internet access
> > removed from unit tests in the gate, or probably just the above
> > variables set.
> [...]
> 
> Historically, our projects hadn't done a great job of relegating their
> "unit test" jobs to only run unit tests, and had a number of what would
> be commonly considered functional tests mixed in. This has improved in
> recent years as many of those projects have created separate functional
> test jobs and are able to simplify their unit test jobs accordingly, so
> this may be more feasible now than it was in the past.
> 
> Removing network access from the machines running these jobs won't
> work, of course, because our job scheduling and execution service needs
> to reach them over the Internet to start jobs, monitor progress and
> collect results. As you noted, faking Python out with envvars pointing
> it at nonexistent HTTP proxies might help at least where tests attempt
> to make HTTP(S) connections to remote systems.
> The Web is not all there is to the Internet however, so this wouldn't
> do much to prevent use of remote DNS, NTP, SMTP or other
> non-HTTP(S) protocols.
> 
> The biggest wrinkle I see in your "proxy" idea is that most of our
> Python-based projects run their unit tests with tox, and it will use
> pip to install project and job dependencies via HTTPS prior to starting
> the test runner. As such, any proxy envvar setting would need to happen
> within the scope of tox itself so that it will be able to set up the
> virtualenv prior to configuring the proxy vars for the ensuing tests.
> It might be easiest for you to work out the tox.ini modification on one
> project (it'll be self-testing at least) and then once the pilot can be
> shown working the conversation with the community becomes a little
> easier.
[Mooney, Sean K] I may be oversimplifying here, but our unit tests are still
executed by Zuul in VMs provided by nodepool. Could we simply take advantage
of OpenStack and use security groups to block egress traffic from the VM,
except that required to upload the logs?
E.g. don't mess with tox or proxies within the VMs and instead do this
externally via neutron.
This would require the cloud provider to expose neutron, however, which may
be an issue for Rackspace; but since it is only for unit tests, which are
relatively short-lived compared to tempest jobs, perhaps the other providers
would still have enough capacity?
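A hedged sketch of that idea using the standard client (the address and rule
set are purely illustrative; the real log-upload endpoint differs per
provider):

  # group whose only egress is HTTPS to the log server
  openstack security group create unit-test-egress-lock
  openstack security group rule create --egress --protocol tcp \
      --dst-port 443 --remote-ip 203.0.113.10/32 unit-test-egress-lock

  # the default allow-all egress rules (IPv4 and IPv6) would also have to
  # be deleted from the group:
  #   openstack security group rule delete <default-rule-uuid>

Security groups being stateful, Zuul's inbound job-control connection would
still work with only the usual ingress rules.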
> --
> Jeremy Stanley



Re: [openstack-dev] [os-vif] [nova] Changes to os-vif cores

2017-11-01 Thread Mooney, Sean K
+1

> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Wednesday, October 25, 2017 3:45 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [os-vif] [nova] Changes to os-vif cores
> 
> +1
> 
> On 10/24/2017 10:32 AM, Stephen Finucane wrote:
> > Hey,
> >
> > I'm not actually sure what the protocol is for adding/removing cores
> > to a library project without a PTL, so I'm just going to put this out
> > there: I'd like to propose the following changes to the os-vif core
> team.
> >
> > - Add 'nova-core'
> >
> >    os-vif makes extensive use of objects and we've had a few hiccups
> >    around versionings and the likes recently [1][2]. I'd like the
> >    expertise of some of the other nova cores here as we roll this out
> >    to projects other than nova, and I trust those not
> >    interested/knowledgeable in this area to stay away :)
[Mooney, Sean K] In the future, as we start integrating with neutron, we may
want to also extend this to neutron-cores, with the same understanding that
those not interested/knowledgeable in this area continue to focus on neutron.

I also think it continues to be the current os-vif team's role to ensure we
do not break our customers, and to understand the interaction of the changes
we are making and/or reviewing with both nova and neutron. That is to say, I
don't want the fact that nova-cores or neutron-cores is added to imply that
only they should make sure os-vif works with nova/neutron.
More succinctly, this change should not be a burden on the nova and neutron
teams.


> >
> > - Remove Russell Bryant, Maxime Leroy
> >
> >These folks haven't been active on os-vif  [3][4] for a long time
> and I think
> >they can be safely removed.
> >
> > To the existing core team members, please respond with a yay/nay and
> > we'll wait a week before doing anything.
> >
> > Cheers,
> > Stephen
> >
> > [1] https://review.openstack.org/#/c/508498/
> > [2] https://review.openstack.org/#/c/509107/
> > [3]
> >
> > https://review.openstack.org/#/q/reviewedby:%22Russell+Bryant+%253Crbryant%2540redhat.com%253E%22+project:openstack/os-vif
> > [4]
> > https://review.openstack.org/#/q/reviewedby:%22Maxime+Leroy+%253Cmaxime.leroy%25406wind.com%253E%22+project:openstack/os-vif
> >
> >


Re: [openstack-dev] [ironic] ironic and traits

2017-10-23 Thread Mooney, Sean K


From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Monday, October 23, 2017 12:20 PM
To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [ironic] ironic and traits

Writing from my phone... May I ask that before you proceed with any plan that 
uses traits for state information that we have a hangout or videoconference to 
discuss this? Unfortunately today and tomorrow I'm not able to do a hangout but 
I can do one on Wednesday any time of the day.

[Mooney, Sean K] On the UEFI boot topic, I did bring up at the PTG that we
wanted to standardize traits for "verified boot". That included a trait for
UEFI secure boot enabled, and one to indicate a hardware root of trust, e.g.
Intel Boot Guard or similar. We distinctly wanted to be able to tag nova
compute hosts with those new traits so we could require that VMs that request
a host with UEFI secure boot enabled and a hardware root of trust are
scheduled only to those nodes.

There are many other examples that affect both VMs and bare metal, such as
ECC/interleaved memory, cluster-on-die, L3 cache code and data
prioritization, VT-d/VT-c, HPET, hyper-threading, power states... all of
these features may be present on the platform, but I also need to know if
they are turned on. Ruling out state in traits means all of this logic will
eventually get pushed to scheduler filters, which will be suboptimal
long-term as more state is tracked. Software-defined infrastructure may be
the future, but hardware-defined software is sadly the present...

I do however think there should be a separation between asking, via a trait,
for a host that provides x, and asking for x to be configured via a trait.
The trait secure_boot_enabled should never result in the feature being
enabled; it should just find a host with it on. If you want to request it to
be turned on, you would request a host with secure_boot_capable as a trait,
and have a flavor extra spec or image property to request ironic to enable
it. These are two very different requests and should not be treated the same.
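To make the distinction concrete, a hedged sketch using flavor extra specs.
The trait names are assumptions (not existing os-traits entries), and the
ironic capabilities syntax is only illustrative:

  # 1) land only on hosts where secure boot is already enabled
  openstack flavor set vnf.secure \
      --property trait:CUSTOM_UEFI_SECURE_BOOT_ENABLED=required

  # 2) land on a host *capable* of it, and separately request that it
  #    be turned on
  openstack flavor set vnf.secure \
      --property trait:CUSTOM_UEFI_SECURE_BOOT_CAPABLE=required \
      --property capabilities:secure_boot=true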


Lemme know!
-jay

On Oct 23, 2017 5:01 AM, "Dmitry Tantsur" <dtant...@redhat.com> wrote:
Hi Jay!
I appreciate your comments, but I think you're approaching the problem from 
purely VM point of view. Things simply don't work the same way in bare metal, 
at least not if we want to provide the same user experience.

On Sun, Oct 22, 2017 at 2:25 PM, Jay Pipes <jaypi...@gmail.com> wrote:
Sorry for delay, took a week off before starting a new job. Comments inline.

On 10/16/2017 12:24 PM, Dmitry Tantsur wrote:
Hi all,

I promised John to dump my thoughts on traits to the ML, so here we go :)

I see two roles of traits (or kinds of traits) for bare metal:
1. traits that say what the node can do already (e.g. "the node is
doing UEFI boot")
2. traits that say what the node can be *configured* to do (e.g. "the node can
boot in UEFI mode")

There's only one role for traits. #2 above. #1 is state information. Traits are 
not for state information. Traits are only for communicating capabilities of a 
resource provider (baremetal node).

These are not different, that's what I'm talking about here. No users care 
about the difference between "this node was put in UEFI mode by an operator in 
advance", "this node was put in UEFI mode by an ironic driver on demand" and 
"this node is always in UEFI mode, because it's AARCH64 and it does not have 
BIOS". These situation produce the same result (the node is booted in UEFI 
mode), and thus it's up to ironic to hide this difference.

My suggestion with traits is one way to do it, I'm not sure what you suggest 
though.


For example, let's say we add the following to the os-traits library [1]

* STORAGE_RAID_0
* STORAGE_RAID_1
* STORAGE_RAID_5
* STORAGE_RAID_6
* STORAGE_RAID_10

The Ironic administrator would add all RAID-related traits to the baremetal 
nodes that had the *capability* of supporting that particular RAID setup [2]

When provisioned, the baremetal node would either have RAID configured in a 
certain level or not configured at all.
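As a hedged illustration (assuming the traits-in-flavor syntax nova was
growing at the time), a deployer wanting RAID 10 capable hardware would
request:

  openstack flavor set baremetal-raid10 \
      --property trait:STORAGE_RAID_10=required

and the instance would land only on nodes tagged as *capable* of RAID 10,
regardless of whether RAID is currently configured on them.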

A very important note: the Placement API and Nova scheduler (or future Ironic 
scheduler) doesn't care about this. At all. I know it sounds like I'm being 
callous, but I'm not. Placement and scheduling doesn't care about the state of 
things. It only cares about the capabilities of target destinations. That's it.

Yes, because VMs always start with a clean state, and the hypervisor is there
to ensure that. We don't have this luxury in ironic :) E.g. our SNMP driver is not
even aware of boot modes (or RAID, or BIOS configuration), which does not mean 
that a node using it cannot be in UEFI mode (have a RAID or BIOS 
pre-configured, etc, etc).


This seems confusing, but it's actually very useful. Say, I have a flavor that
requests UEFI boot

[openstack-dev] [nova][neutron] Use neutron's new port binding API

2017-10-19 Thread Mooney, Sean K
Hi Matt,
You are not online currently, so I thought I would respond to your question
regarding the workflow via email.
http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2017-10-18.log.html#t2017-10-18T20:29:02

20:29 <mriedem> 1. conductor asks scheduler for a host
20:29 <mriedem> 2. scheduler filter looks for ports with a qos policy and if found, gets allocation candidates for hosts that have a nested bw provider
20:29 <mriedem> 3. scheduler returns host to conductor
20:30 <mriedem> 4. conductor binds the port to the host
20:30 <mriedem> 5. the bound port profile has some allocation juju that nova proxies to placement as an allocation request for the port on the bw provider
20:30 <mriedem> 6. conductor sends to compute to build the instance
20:30 <mriedem> 7. compute activates the bound port
20:30 <mriedem> 8. compute plugs vifs
20:30 <mriedem> 9. profit?!


So my ideal workflow would be:

1. conductor calls allocate_for_instance
   (https://github.com/openstack/nova/blob/1b45b530448c45598b62e783bdd567480a8eb433/nova/network/neutronv2/api.py#L814)
   in schedule_and_build_instances
   (https://github.com/openstack/nova/blob/fce56ce8c04b20174cd89dfbc2c06f0068324b55/nova/conductor/manager.py#L1002)
   before calling self._schedule_instances. This gets or creates all neutron
   ports for the instance before we call the scheduler.

2. conductor asks the scheduler for a host by calling
   self._schedule_instances, passing in the network_info object.

3. scheduler extracts placement requests from the network_info object and
   adds them to the list it sends to placement.

4. scheduler applies the standard filters to the placement candidates.

5. scheduler returns a host, after weighing, to the conductor.

6. conductor binds the port to the host.
   a. If binding fails early, retry on the next host in the candidate set.
   b. Continue until port binding succeeds, the retry limit is reached, or
      the candidates are exhausted.

7. conductor creates allocations for the host against all resource providers.
   a. When the port is bound, neutron will populate the resource request for
      bandwidth with the neutron agent UUID, which will be the resource
      provider UUID to allocate from.

8. conductor sends to compute to build the instance, passing the allocations.

9. compute plugs VIFs.

10. compute activates the bound port, setting the allocation UUID on the port
    for all resource classes requested by neutron.

11. excess of income over expenditure? :)

The important thing to note is that nova receives all requests for network
resources from neutron in the port objects created at step 1. Nova learns the
backend resource provider for neutron at step 6, before it makes allocations.
Nova then passes the allocations that were made back to neutron when it
activates the port.

We have nova make the allocations for all resources to prevent any races
between the conductor and neutron when updating the same nested resource
provider tree (this was Jay's concern). Neutron will create the inventories
for bandwidth, but nova will allocate from them. The intent is for nova not
to need to know what the resources it is claiming are, but instead to be able
to accept a set of additional resources to claim from neutron in a generic
workflow, which we can hopefully reuse for other projects like cinder or
cyborg in the future.
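A compressed pseudocode sketch of steps 1-10 (all names are hypothetical;
this illustrates the ordering, not real nova code):

    def schedule_and_build_instances(ctx, instance, request_spec):
        # step 1: get/create all neutron ports before scheduling
        network_info = neutron.allocate_for_instance(ctx, instance)
        # steps 2-5: schedule using resource requests extracted from ports
        hosts = scheduler.select_destinations(ctx, request_spec, network_info)
        for host in hosts:
            try:
                neutron.bind_port(network_info, host)       # step 6
                break
            except PortBindingFailed:
                continue                                    # 6a/6b: next host
        # step 7: one allocation covering compute and network providers
        placement.put_allocations(ctx, instance.uuid, host, network_info)
        # steps 8-10: build, plug vifs, activate port with allocation uuids
        compute_rpc.build_and_run_instance(ctx, instance, host, network_info)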

Regards sean.



Re: [openstack-dev] vGPUs support for Nova - Implementation

2017-10-02 Thread Mooney, Sean K


> -Original Message-
> From: Dan Smith [mailto:d...@danplanet.com]
> Sent: Monday, October 2, 2017 3:53 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] vGPUs support for Nova - Implementation
> 
> >> I also think there is value in exposing vGPU in a generic way,
> irrespective of the underlying implementation (whether it is DEMU,
> mdev, SR-IOV or whatever approach Hyper-V/VMWare use).
> >
> > That is a big ask. To start with, all GPUs are not created equal, and
> > various vGPU functionality as designed by the GPU vendors is not
> > consistent, never mind the quirks added between different hypervisor
> > implementations. So I feel like trying to expose this in a generic
> > manner is, at least asking for problems, and more likely bound for
> > failure.
> 
> I feel the opposite. IMHO, Nova’s role in life is not to expose all the
> quirks of the underlying platform, but rather to provide a useful
> abstraction on top of those things. In spite of them.
[Mooney, Sean K] I have to agree with Dan here.
vGPUs are a great example of where nova can add value by abstracting the
hypervisor specifics and providing an abstract API that allows requesting
vGPUs without encoding the semantics of the API provided by the hypervisor or
hardware vendor in what we expose to the tenant.
> 
> > Nova already exposes plenty of hypervisor-specific functionality (or
> > functionality only implemented for one hypervisor), and that's fine.
> 
> And those bits of functionality are some of the most problematic we
> have. Among other reasons, they make it difficult for us to expose
> Thing 2.0, when we’ve encoded Thing 1.0 into our API so rigidly. This
> happens even within one virt driver where Thing 2.0 is significantly
> different than Thing 1.0.
> 
> The vGPU stuff seems well-suited for the generic modeling work that
> we’ve spent the last few years working on, and is a perfect example of
> an area where we can avoid piling on more debt to a not-abstract-enough
> “model” and move forward with the new one. That’s certainly my
> preference, and I think it’s actually less work than the debt-ridden
> way.
> 
> -—Dan
[Mooney, Sean K] I also agree that it is likely less work to start fresh with
the correct generic solution now than to try to adapt the PCI passthrough
code we have today to support vGPUs without breaking the current SR-IOV and
passthrough support. How vGPUs are virtualized is GPU-vendor-specific, so
even within a single host you may need to support multiple methods
(SR-IOV/mdev...) in a single virt driver. For example, a cloud/host with both
AMD and NVIDIA GPUs which uses libvirt would have to support generating the
correct XML for both solutions.
> 
> 
> 


Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-10-02 Thread Mooney, Sean K
This also broke the legacy-tempest-dsvm-nova-os-vif gate job
http://logs.openstack.org/98/508498/1/check/legacy-tempest-dsvm-nova-os-vif/8fdf055/logs/devstacklog.txt.gz#_2017-09-29_14_15_41_961

> -Original Message-
> From: Mehdi Abaakouk [mailto:sil...@sileht.net]
> Sent: Monday, October 2, 2017 2:52 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [devstack] zuulv3 gate status;
> LIBS_FROM_GIT failures
> 
> Looks like the LIBS_FROM_GIT workarounds have landed, but I still have
> some issue on telemetry integration jobs:
> 
>   http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e3bd35d/logs/devstacklog.txt.gz
> 
> On Fri, Sep 29, 2017 at 10:57:34AM +0200, Mehdi Abaakouk wrote:
> >On Fri, Sep 29, 2017 at 08:16:38AM +, Jens Harbott wrote:
> >>2017-09-29 7:44 GMT+00:00 Mehdi Abaakouk :
> >>>We also have our legacy-telemetry-dsvm-integration-ceilometer
> broken:
> >>>
>>> http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e185ae1/logs/devstack-gate-setup-workspace-new.txt
> >>
> >>That looks similar to what Ian fixed in [1], seems like your job
> needs
> >>a corresponding patch.
> >
> >Thanks, I have proposed the same kind of patch for telemetry [1]
> >
> >[1] https://review.openstack.org/508448
> >
> >--
> >Mehdi Abaakouk
> >mail: sil...@sileht.net
> >irc: sileht
> 
> --
> Mehdi Abaakouk
> mail: sil...@sileht.net
> irc: sileht
> 


Re: [openstack-dev] vGPUs support for Nova

2017-09-26 Thread Mooney, Sean K


> -Original Message-
> From: Sahid Orentino Ferdjaoui [mailto:sferd...@redhat.com]
> Sent: Tuesday, September 26, 2017 1:46 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] vGPUs support for Nova
> 
> On Mon, Sep 25, 2017 at 04:59:04PM +, Jianghua Wang wrote:
> > Sahid,
> >
> > Just share some background. XenServer doesn't expose vGPUs as mdev or
> > pci devices.
> 
> That does not make any sense. There is physical device (PCI) which
> provides functions (vGPUs). These functions are exposed through mdev
> framework. What you need is the mdev UUID related to a specific vGPU
> and I'm sure that XenServer is going to expose it. Something which
> XenServer may not expose is the NUMA node where the physical device is
> plugged on but in such situation you could still use sysfs.
[Mooney, Sean K] This is implementation-specific. AMD supports virtualizing
their GPUs using SR-IOV
(http://www.amd.com/Documents/Multiuser-GPU-White-Paper.pdf); in that case
you can use the existing PCI passthrough support without any modification.
For Intel and NVIDIA GPUs we need specific hypervisor support, as the device
partitioning is done in the host GPU driver rather than via SR-IOV. There are
two levels of abstraction that we must keep separate: 1) how the hardware
supports configuration and enumeration of the virtualized resources (AMD in
hardware via SR-IOV, Intel/NVIDIA via a driver/software manager);
2) how the hypervisor reports the vGPUs to OpenStack and other clients.

In the AMD case I would not expect any hypervisor to have mdevs associated
with the SR-IOV VF, as that is not the virtualization model they have
implemented. In the Intel GVT case, yes, you will have mdevs, but the virtual
GPUs are not represented on the PCI bus, so we should not model them as PCI
devices.

Some more comments below.
> 
> > I proposed a spec about one year ago to make fake pci devices so that
> > we can use the existing PCI mechanism to cover vGPUs. But that's not
> a
> > good design and got strongly objection. After that, we switched to
> use
> > the resource providers by following the advice from the core team.
> >
> > Regards,
> > Jianghua
> >
> > -Original Message-
> > From: Sahid Orentino Ferdjaoui [mailto:sferd...@redhat.com]
> > Sent: Monday, September 25, 2017 11:01 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > <openstack-dev@lists.openstack.org>
> > Subject: Re: [openstack-dev] vGPUs support for Nova
> >
> > On Mon, Sep 25, 2017 at 09:29:25AM -0500, Matt Riedemann wrote:
> > > On 9/25/2017 5:40 AM, Jay Pipes wrote:
> > > > On 09/25/2017 05:39 AM, Sahid Orentino Ferdjaoui wrote:
> > > > > There is a desire to expose the vGPUs resources on top of
> > > > > Resource Provider which is probably the path we should be going
> > > > > in the long term. I was not there for the last PTG and you
> > > > > probably already made a decision about moving in that direction
> > > > > anyway. My personal feeling is that it is premature.
> > > > >
> > > > > The nested Resource Provider work is not yet feature-complete
> > > > > and requires more reviewer attention. If we continue in the
> > > > > direction of Resource Provider, it will need at least 2 more
> > > > > releases to expose the vGPUs feature and that without the
> > > > > support of NUMA, and with the feeling of pushing something
> which is not stable/production-ready.
[Mooney, Sean K] Not all GPUs have NUMA affinity. Intel integrated GPUs do
not: they have dedicated eDRAM on the processor die, so their memory accesses
never leave the processor package and they do not have NUMA affinity. I would
assume the same is true for AMD integrated GPUs, so only discrete GPUs will
have NUMA affinity.
> > > > >
> > > > > It's seems safer to first have the Resource Provider work well
> > > > > finalized/stabilized to be production-ready. Then on top of
> > > > > something stable we could start to migrate our current virt
> > > > > specific features like NUMA, CPU Pinning, Huge Pages and
> finally PCI devices.
> > > > >
> > > > > I'm talking about PCI devices in general because I think we
> > > > > should implement the vGPU on top of our /pci framework which is
> > > > > production ready and provides the support of NUMA.
> > > > >
> > > > > The hardware vendors building their drivers using mdev and the
This is vendor specific

Re: [openstack-dev] [os-vif] [passthrough] [VifHostDevice]

2017-09-21 Thread Mooney, Sean K


> -Original Message-
> From: pranab boruah [mailto:pranabjyotibor...@gmail.com]
> Sent: Thursday, September 21, 2017 5:12 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [os-vif] [passthrough] [VifHostDevice]
> 
> Hi,
> We have a SRIOV capable NIC that supports OVS offload and Switchdev.
> We are trying to test the VifHostDevice model on a Pike cluster. We are
> running into issues. Here are the config options that we have
> used:
> 
> 1. In Neutron conf file: mechanism driver = ovn 
[Mooney, Sean K] Looking at the networking-ovn ML2 driver, vnic type direct
is not supported:
https://github.com/openstack/networking-ovn/blob/f5fe5e3c623a2a65ee78ec28b053d8e72060c13d/networking_ovn/ml2/mech_driver.py#L112
The hardware offload support for Mellanox NICs is only supported with the
openvswitch or ODL ML2 drivers. Netronome smartnics require the use of the
Agilio OVS ML2 driver, which supports direct and virtio-forwarder mode:
https://github.com/Netronome/agilio-ovs-openstack-plugin/blob/master/networking_netronome/plugins/ml2/drivers/agilio_ovs/mech_driver/mech_agilio_ovs.py#L46-L47

If you wish to use OVN, you will need to modify the OVN ML2 driver to add
vnic_type direct to the supported vnic types; a hedged sketch of the change
follows.
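Based on the mech_driver.py linked above (untested; the attribute name
matches the linked code at the time of writing):

    from neutron_lib.api.definitions import portbindings

    # in the OVN mechanism driver's initialization, extend the bindable
    # vnic types beyond 'normal'
    self.supported_vnic_types = [portbindings.VNIC_NORMAL,
                                 portbindings.VNIC_DIRECT]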

> 2. In Nova conf file:
>    passthrough_whitelist = {"address":":02:00.*"}
> 3. Created a port as vnic_type=direct and launched instances.
> It gives the following error - Nova error : "No net device was found
> for VF"
> Am I missing some other config options?
[Mooney, Sean K] No, but as I mentioned above, OVN is not currently
supported. I believe you should also have a log message in the neutron server
log, as when neutron calls the networking-ovn ML2 driver here:
https://github.com/openstack/neutron/blob/433d5a03534c4f30fdf3b864d11dea527e9b6f91/neutron/plugins/ml2/managers.py#L782
we simply return on line 502 here, after logging:
https://github.com/openstack/networking-ovn/blob/f5fe5e3c623a2a65ee78ec28b053d8e72060c13d/networking_ovn/ml2/mech_driver.py#L502
If you only have the OVN ML2 driver enabled, the port should be set with a
vif_type of binding_failed; however, if you have the sriovnicagent also
enabled, it may be masking the issue, as
https://github.com/openstack/neutron/blob/433d5a03534c4f30fdf3b864d11dea527e9b6f91/neutron/plugins/ml2/managers.py#L776
will continue to try the other drivers.

Assuming OVN is the only enabled mech driver, I believe this should result in
the vif_type being set to VIF_TYPE_BINDING_FAILED, as
https://github.com/openstack/neutron/blob/433d5a03534c4f30fdf3b864d11dea527e9b6f91/neutron/plugins/ml2/managers.py#L748
will not return anything, so we should execute
https://github.com/openstack/neutron/blob/433d5a03534c4f30fdf3b864d11dea527e9b6f91/neutron/plugins/ml2/managers.py#L750-L757
You should be able to confirm this by doing a port show and/or checking the
neutron server log.
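For example, with the standard client (output abbreviated; values
illustrative):

    openstack port show <port-id> -c binding_vif_type -c binding_vnic_type
    # binding_vif_type  | binding_failed
    # binding_vnic_type | direct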

> 
> Also, how can I check the logs that are related to the os-vif library?
[Mooney, Sean K] The logs are present in the n-cpu log, as os-vif executes
within the nova-compute agent.
> 
> Let me know if further details are required.
> 
> TIA,
> Pranab
> 


Re: [openstack-dev] [ptg][nova][neutron] modelling network capabilities and capacity in placement and nova neutron port binding negociation.

2017-09-12 Thread Mooney, Sean K
I have not tried to book a room yet, but I think if we can find a slot later
today it would be ideal, as I would like to do this before the nova-neutron
session on Thursday morning. Later also works, but I do not want to overlap
with the other nova sessions if it can be avoided.

I have created a blank etherpad to capture discussion points:
https://etherpad.openstack.org/p/nova-neuton-portbinding-placement-ptg-queens

I will try to document some of my thoughts on this area there later today.

Jay, glad to hear you will be able to make it to the PTG in the end.
It will be good to have your input on this.

> -Original Message-
> From: Eric Fried [mailto:openst...@fried.cc]
> Sent: Monday, September 11, 2017 6:36 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [ptg][nova][neutron] modelling network
> capabilities and capacity in placement and nova neutron port binding
> negociation.
> 
> Yup, I definitely want to be involved in this too.  Please keep me
> posted.
> 
>   efried
> 
> On 09/11/2017 11:12 AM, Jay Pipes wrote:
> > I'm interested in this. I get in to Denver this evening so if we can
> > do this session tomorrow or later, that would be super.
> >
> > Best,
> > -jay
> >
> > On 09/11/2017 01:11 PM, Mooney, Sean K wrote:
> >> Hi everyone,
> >>
> >> I'm interested in setting up a whiteboarding session at the PTG to
> >> discuss how to model network backends in placement and use that info
> >> as part of scheduling.
> >>
> >> This work would also intersect with the nova-neutron port binding
> >> negotiation work that is also in flight, so I think there is merit in
> >> combining both topics into one session.
> >>
> >> For several releases we have been discussing a negotiation protocol
> >> that would allow nova/compute services to tell neutron what virtual
> >> and physical interfaces a hypervisor can support, and then allow
> >> neutron to select from that set the most appropriate vif type based
> >> on the capabilities of the network backend deployed on the host.
> >>
> >> Extending that concept with the capabilities provided by placement
> >> and traits will enable us to model the network capabilities of a
> >> specific network backend in a scheduler-friendly way without nova
> >> needing to understand networking.
> >>
> >> To that end, if people are interested in having a whiteboarding
> >> session to dig into this, let me know.
> >>
> >> Regards
> >>
> >> Seán
> >>
> >> --
> >> Intel Shannon Limited
> >> Registered in Ireland
> >> Registered Office: Collinstown Industrial Park, Leixlip, County
> >> Kildare Registered Number: 308263 Business address: Dromore House,
> >> East Park, Shannon, Co. Clare
> >>
> >>
> >>
> >>


[openstack-dev] [ptg][nova][neutron] modelling network capabilities and capacity in placement and nova neutron port binding negotiation.

2017-09-11 Thread Mooney, Sean K
Hi everyone,

I'm interested in setting up a whiteboarding session at the PTG to discuss
how to model network backends in placement and use that info as part of
scheduling.

This work would also intersect with the nova neutron port binding
negotiation work that is also in flight, so I think there is merit in
combining both topics into one session.

For several releases we have been discussing a negotiation protocol that
would allow nova/compute services to tell neutron what virtual and physical
interfaces a hypervisor can support, and then allow neutron to select from
that set the most appropriate vif type based on the capabilities of the
network backend deployed by the host.

Extending that concept with the capabilities provided by placement and
traits will enable us to model the network capabilities of a specific
network backend in a scheduler-friendly way without nova needing to
understand networking.
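For illustration only, a minimal sketch, assuming a keystone token, the
placement endpoint shown, and a made-up CUSTOM_NET_BACKEND_OVS_DPDK trait
name, of how a compute node's resource provider could be tagged with a
backend trait via the placement REST API (trait support landed in placement
microversion 1.6):

    import requests

    TOKEN = '...'  # keystone token obtained elsewhere
    PLACEMENT = 'http://controller:8778/placement'  # assumed endpoint
    HEADERS = {'X-Auth-Token': TOKEN,
               'OpenStack-API-Version': 'placement 1.6'}

    rp_uuid = '...'  # the compute node's resource provider uuid
    rp = requests.get('%s/resource_providers/%s' % (PLACEMENT, rp_uuid),
                      headers=HEADERS).json()
    # replace the provider's traits; the generation guards against races
    requests.put('%s/resource_providers/%s/traits' % (PLACEMENT, rp_uuid),
                 headers=HEADERS,
                 json={'resource_provider_generation': rp['generation'],
                       'traits': ['CUSTOM_NET_BACKEND_OVS_DPDK']})

A scheduling request could then require that trait when a port needs a
capability only some backends provide.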

To that end, if people are interested in having a whiteboarding session to
dig into this, let me know.

Regards
Seán
--
Intel Shannon Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263
Business address: Dromore House, East Park, Shannon, Co. Clare



Re: [openstack-dev] [nova] [neutron] Adding neutron VIF NUMA locality support

2017-09-07 Thread Mooney, Sean K


> -Original Message-
> From: Stephen Finucane [mailto:sfinu...@redhat.com]
> Sent: Thursday, September 7, 2017 5:42 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Cc: Jakub Libosvar <jlibo...@redhat.com>; Karthik Sundaravel
> <ksund...@redhat.com>; Mooney, Sean K <sean.k.moo...@intel.com>
> Subject: [nova] [neutron] Adding neutron VIF NUMA locality support
> 
> Hey,
> 
> NUMA locality matters as much for NICs used e.g for Open vSwitch as for
> SR-IOV devices. At the moment, nova support NUMA affinity for PCI
> passthrough devices and SR-IOV devices, but it makes no attempt to do
> the same for other NICs. In the name of NFV enablement, we should
> probably close this gap.
[Mooney, Sean K] I like this idea in general. That said, in ovs-dpdk we
modified ovs to schedule each vhost-user port to be processed on a PMD that
is on the same NUMA node as the VM, and to reallocate the vhost-user port
memory where possible so that it has the same affinity.
> 
> I have some ideas around how this could work, but they're fuzzy enough
> and involve exchanging os-vif objects between nova and neutron. This is
> probably the most difficult path as we've been trying to get os-vif
> objects over the nova-neutron wire for a while now, to no success.
[Mooney, Sean K] actually we have some PoC code; you should probably review
this topic.
https://blueprints.launchpad.net/os-vif/+spec/vif-port-profile
https://review.openstack.org/#/c/490829/ 
https://review.openstack.org/#/c/490819/ 
https://review.openstack.org/#/c/441590/
The first patch of the neutron-side PoC should be up before the PTG.

> 
> Anyone else keen on such a feature? Given that there are a significant
> amount of people from nova, neutron, and general NFV backgrounds at the
> PTG next week, we have a very good opportunity to talk about this
> (either in the nova- neutron sync, if that's not already full, or in
> some hallway somewhere).
[Mooney, Sean K] in terms of basic NUMA affinity this is not as important
with ovs-dpdk: because we make a best effort to fix it in ovs, this is less
pressing than it used to be. It is still important for other backends, but
we also need a mechanism to control the NUMA affinity policy, like
https://review.openstack.org/#/c/361140/ , to not break existing deployments.

I have some thoughts about modeling network backends in placement, and
about passing trait requests from neutron, that this would dovetail with,
so I would love to talk to anyone who is interested in this. By modeling
ovs and other network backends in placement, and combining that with traits
and the nova-neutron negotiation protocol, we can support several advanced
use cases.

By the way, ovs-dpdk allows you to specify the vhost-port rx/tx queue to
PMD mapping, which could give a nice performance boost if done correctly.
It might be worth extending os-vif to do that in the future, though this
could equally be handled by the neutron ovs l2 agent.
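For anyone unfamiliar with the knob I mean, this is roughly what it looks
like in ovs-dpdk today (port name and core ids are made up for
illustration):

    # pin rx queue 0 of vhost-user port vhu0 to the PMD on core 4,
    # and rx queue 1 to the PMD on core 6
    ovs-vsctl set Interface vhu0 other_config:pmd-rxq-affinity="0:4,1:6"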
> 
> At this point in the day, this is probably very much a Rocky feature,
> but we could definitely put in whatever groundwork is necessary this
> cycle to make the work in Rocky as easy possible.
[Mooney, Sean K] I'm hoping we can get the nova neutron negotiation done in 
queens.
> 
> Cheers,
> Stephen


Re: [openstack-dev] [nova][scheduling] Can VM placement consider the VM network traffic need?

2017-09-05 Thread Mooney, Sean K
Interesting timing.
Would love to talk about this at the PTG.
Comments inline.
Regards
sean

> -Original Message-
> From: Balazs Gibizer [mailto:balazs.gibi...@ericsson.com]
> Sent: Tuesday, September 5, 2017 8:23 AM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Cc: Mooney, Sean K <sean.k.moo...@intel.com>; mosh...@mellanox.com
> Subject: Re: [openstack-dev] [nova][scheduling] Can VM placement
> consider the VM network traffic need?
> 
> On Mon, Sep 4, 2017 at 9:11 PM, Jay Pipes <jaypi...@gmail.com> wrote:
> > On 09/01/2017 04:42 AM, Rua, Philippe (Nokia - FI/Espoo) wrote:
> > > Will it be possible to include network bandwidth as a resource in
> > Nova scheduling, for VM placement decision?
> >
> > Yes.
> >
> > See here for a related Neutron spec that mentions Placement:
> > https://review.openstack.org/#/c/396297/7/specs/pike/strict-minimum-bandwidth-support.rst
> >
> > > Context: in telecommunication applications, the network traffic is
> > an important dimension of resource usage. For example, it is often
> > important to distribute "bandwidth-greedy" VMs to different compute
> > nodes. There were some earlier discussions on this topic, but I could
> > not find a concrete outcome. [1][2][3]
> > >
> > > After some reading, I wonder whether the Custom resource classes
> > can provide a generic mechanism? [4][5][6]
> >
> > No :) Custom resource classes are antithetical to generic/standard
> > mechanisms.
> >
> > We want to add two *standard* resource classes, one called
> > NET_INGRESS_BYTES_SEC and another called NET_EGRESS_BYTES_SEC which
> > would represent the total bandwidth in bytes per second the for
> > corresponding traffic directions.
> 
> While I agree that the end goal is to have standard resource classes
> for bandwidth I think custom resource classes are generic enough to
> model bandwidth resource. If you want to play with the bandwidth based
> scheduling idea based on Pike then custom resource classes are
> available as a tool for a proof of concept.
[Mooney, Sean K]
From a Queens perspective, Rodolfo is currently working on a spec to
introduce a standard bandwidth resource class and resource provider.
He has opened a blueprint to track this here:
https://blueprints.launchpad.net/nova/+spec/bandwidth-resource-provider
Currently the scope we are proposing our work to cover is end-to-end
minimum bandwidth guarantees for SR-IOV interfaces. In this case the
bandwidth resource provider will be a child of the PF. This could be
extended to vswitches also, but in the linux bridge and ovs cases neither
can support multi-tenant minimum bandwidth guarantees at present, so while
nova can make sure we do not oversubscribe on bandwidth for ovs, neutron
cannot enforce the minimum bandwidth allocation on the vswitch.
Hardware-offloaded ovs may be able to provide a minimum bandwidth guarantee
in the future, as might vpp.
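In the meantime, as gibi notes above, custom resource classes are enough
for a proof of concept. A minimal sketch, assuming a token, the endpoint
shown, and a made-up CUSTOM_NET_EGRESS_BYTES_SEC class name, of registering
the class and reporting inventory via the placement API:

    import requests

    PLACEMENT = 'http://controller:8778/placement'  # assumed endpoint
    HEADERS = {'X-Auth-Token': '...',  # keystone token obtained elsewhere
               'OpenStack-API-Version': 'placement 1.7'}

    # idempotently create the custom class (PUT is allowed from 1.7)
    requests.put('%s/resource_classes/CUSTOM_NET_EGRESS_BYTES_SEC'
                 % PLACEMENT, headers=HEADERS)

    rp_uuid = '...'  # the NIC's (or compute node's) resource provider
    rp = requests.get('%s/resource_providers/%s' % (PLACEMENT, rp_uuid),
                      headers=HEADERS).json()
    # report a 10 Gbit/s link as bytes per second of egress capacity
    requests.put('%s/resource_providers/%s/inventories/'
                 'CUSTOM_NET_EGRESS_BYTES_SEC' % (PLACEMENT, rp_uuid),
                 headers=HEADERS,
                 json={'resource_provider_generation': rp['generation'],
                       'total': 10 * 10**9 // 8})

A flavor could then consume it via a resources:CUSTOM_NET_EGRESS_BYTES_SEC
extra spec.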
> 
> >
> >
> > What would be the resource provider, though? There are at least two
> > potential answers here:
> >
> > 1) A network interface controller on the compute host
> >
> > In this case, the NIC on the host would be a child provider of the
> > compute host resource provider. It would have an inventory record of
> > resource class NET_INGRESS_BYTES_SEC with a total value representing
> > the entire bandwidth of the host NIC. Instances would consume some
> > amount of NET_INGRESS_BYTES_SEC corresponding to *either* the Nova
> > flavor (if the resources:NET_INGRESS_BYTES_SEC extra-spec is set)
> *or*
> > to the sum of consumed bandwidth amounts from the port profile of any
> > ports specified when launching the instance (and thus would be part
> of
> > the pci device request collection attached to the build request).
> >
> > 2) A "network slice" of a network interface controller on the compute
> > host
> >
> > In this case, assume that the NIC on the compute host has had its
> > total bandwidth constrained via traffic control so that 50% of its
> > available ingress bandwidth is allocated to network A and 50% is
> > allocated to network B.
> >
> > There would be multiple resources providers, each with an inventory
> > record of resource class NET_INGRESS_BYTES_SEC with a total value of
> > 1/2
> > the total NIC bandwidth. Both of these resource providers would be
> > child providers of the compute host resource provider. One of these
> > child resource providers will be decorated with the trait
> > "CUSTOM_NETWORK_A"

Re: [openstack-dev] [nova] [placement] [api] cache headers in placement service

2017-08-21 Thread Mooney, Sean K


> -Original Message-
> From: Chris Dent [mailto:cdent...@anticdent.org]
> Sent: Monday, August 21, 2017 10:44 AM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [nova] [placement] [api] cache headers in
> placement service
> 
> On Mon, 21 Aug 2017, Jay Pipes wrote:
> > On 08/21/2017 04:59 AM, Chris Dent wrote:
> > We do have cache validation on the server side for resource classes.
> > Any time a resource class is added or deleted, we call
> > _RC_CACHE.clear(). Couldn't we add a single attribute to the
> > ResourceClassCache that returns the last time the cache was reset?
> 
> That's server side cache, of which the client side (or proxy side) has
> no visibility. If we had etags, and were caching etag to resource pairs
> when we sent out responses, we could then have a conditional GET
> handler which checked etags, returning 304 on a cache hit.
> At _RC_CACHE changes we could flush the etag cache.
[Mooney, Sean K] I agree this is likely needed if caching is used. One of
the changes Intel would like to make is to transition the attestation
server integration for trusted boot with our cloud integrity technologies
to use traits on the compute node, instead of a custom filter, to attest
that the server is trusted. In that case we would want to ensure that if we
add or remove a trait for a resource provider, the cache is invalidated. So
we would have to invalidate or update the etag every time we update the
traits.
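To make the conditional GET idea concrete, here is a minimal hand-rolled
WSGI sketch (not placement's actual code; the rc_cache object and its
as_dict() method are assumptions) of serving an etag and answering a cache
hit with 304:

    import hashlib
    import json

    def get_resource_classes(environ, start_response, rc_cache):
        body = json.dumps(rc_cache.as_dict()).encode('utf-8')
        # derive the etag from the serialized representation, so any
        # change to the cache (e.g. a trait or class update) changes it
        etag = '"%s"' % hashlib.sha1(body).hexdigest()
        if environ.get('HTTP_IF_NONE_MATCH') == etag:
            # client copy is still valid; no body needed
            start_response('304 Not Modified', [('ETag', etag)])
            return [b'']
        start_response('200 OK', [('ETag', etag),
                                  ('Content-Type', 'application/json')])
        return [body]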
> 
> > But meh, you're right that the simpler solution is just to not do
> HTTP
> > caching.
> 
> 'xactly
> 
> > But then again, if the default behaviour of HTTP is to never cache
> > anything unless some cache-related headers are present [1] and you
> > *don't* want proxies to cache any placement API information, why are
> > we changing anything at all anyway? If we left it alone (and continue
> > not sending Cache-Control headers for anything), the same exact
> result would be achieved, no?
> 
> Essentially so we can put last-modified headers on things, which in RFC
> speak we SHOULD do. And if we do that then we SHOULD make sure no
> caching happens.
> 
> Also it seems like last-modified headers is a nice-to-have for that
> "uknown client" I spoke up in the first message.
> 
> But as you correctly identify the immediate practical value to nova is
> pretty small, which is one of the reasons I was looking for the
> lightest-weight implementation.
> 
> --
> Chris Dent  (⊙_⊙') https://anticdent.org/
> freenode: cdent tw: @anticdent


Re: [openstack-dev] [neutron][infra] Functional job failure rate at 100%

2017-08-10 Thread Mooney, Sean K

From: Miguel Angel Ajo Pelayo [mailto:majop...@redhat.com]
Sent: Thursday, August 10, 2017 8:55 AM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][infra] Functional job failure rate at 
100%

Good (amazing) job folks. :)

On 10 Aug 2017 9:43, "Thierry Carrez" <thie...@openstack.org> wrote:
Oh, that's good for us. Should still be fixed, if only so that we can
test properly :)

Kevin Benton wrote:
> This is just the code simulating the conntrack entries that would be
> created by real traffic in a production system, right?
>
> On Wed, Aug 9, 2017 at 11:46 AM, Jakub Libosvar <jlibo...@redhat.com>
> wrote:
>
> On 09/08/2017 18:23, Jeremy Stanley wrote:
> > On 2017-08-09 15:29:04 +0200 (+0200), Jakub Libosvar wrote:
> > [...]
> >> Is it possible to switch used image for jenkins machines to use
> >> back the older version? Any other ideas how to deal with the
>     >> kernel bug?
> >
> > Making our images use non-current kernel packages isn't trivial, but
[Mooney, Sean K] so on that: it would be quite trivial to have
diskimage-builder install the linux-image-virtual-hwe-16.04 or
linux-image-virtual-hwe-16.04-edge package to pull in a 4.10 or 4.11 kernel
respectively if the default 4.4 is broken. We just need a new dib element
to install the package, and to modify the nodepool config to include it
when it rebuilds the image every night. Alternatively you can pull a
vanilla kernel from http://kernel.ubuntu.com/~kernel-ppa/mainline/
following the process documented at
https://wiki.ubuntu.com/Kernel/MainlineBuilds
if you want to maintain testing with 4.4.x.
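For what it is worth, such a dib element could be as small as the sketch
below (the element name is made up; package-installs.yaml is the standard
diskimage-builder mechanism for declaring packages):

    elements/hwe-kernel/package-installs.yaml:

        linux-image-virtual-hwe-16.04:

    # then build with the element on the path, e.g.
    # ELEMENTS_PATH=elements disk-image-create ubuntu vm hwe-kernel

The nodepool diskimage config would just add "hwe-kernel" to the image's
elements list.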

> > as Thierry points out in his reply this is not just a problem for
> > our CI system. Basically Ubuntu has broken OpenStack (and probably a
> > variety of other uses of conntrack) for a lot of people following
> > kernel updates in 16.04 LTS so the fix needs to happen there
> > regardless. Right now, basically, Ubuntu Xenial is not a good
> > platform to be running OpenStack on until they get the kernel
> > regression addressed.
>
> True. Fortunately, the impact is not that catastrophic for Neutron as it
> might seem on the first look. Not sure about the other projects, though.
> Neutron doesn't create conntrack entries in production code - only in
> testing. That said, agents should work just fine even with the
> kernel bug.

--
Thierry Carrez (ttx)



Re: [openstack-dev] [os-vif] [vif_plug_ovs] Queries on VIF_Type VIFHostDevice

2017-08-09 Thread Mooney, Sean K


> -Original Message-
> From: Moshe Levi [mailto:mosh...@mellanox.com]
> Sent: Wednesday, August 9, 2017 4:47 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [os-vif] [vif_plug_ovs] Queries on
> VIF_Type VIFHostDevice
> 
> 
> 
> -----Original Message-
> From: Mooney, Sean K [mailto:sean.k.moo...@intel.com]
> Sent: Wednesday, August 9, 2017 6:36 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [os-vif] [vif_plug_ovs] Queries on
> VIF_Type VIFHostDevice
> 
> 
> 
> > -Original Message-
> > From: Moshe Levi [mailto:mosh...@mellanox.com]
> > Sent: Wednesday, August 9, 2017 3:25 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > <openstack-dev@lists.openstack.org>
> > Subject: Re: [openstack-dev] [os-vif] [vif_plug_ovs] Queries on
> > VIF_Type VIFHostDevice
> >
> > Hi,
> >
> > 1) you should use a neutron port with vnic_type direct
> > 2) yes, just use a neutron port with vnic_type direct and configure
> > the nova compute with the pci passthrough whitelist
> > 3) you can configure firewall_driver = openvswitch to work with
> > conntrack.
> >
> > So in your case, if you have an SR-IOV nic which doesn't support
> > hardware offload (but has VF representor ports) you will just fall
> > back to the ovs kernel datapath.
> 
> [Mooney, Sean K] that is not what will happen with intel nics, and
> based on the code I have seen in nova and neutron I would be doubtful
> that a fallback will happen with mellanox.
> If the neutron port has vnic_type direct it will always result in an
> sriov vf being allocated for that port.
> There is no check in nova to ensure ovs supports vf configuration and
> there is no check in the neutron ml2 driver either. This is why I
> wanted the feature-based scheduling to prevent this from happening, as
> that would prevent nova from allocating the vf, which would cause
> scheduling to fail.
> 
> [Moshe Levi] This is not what I meant. I was talking about the
> implementation of the ovs 2.8.0 hardware offload.
> I was referring to NICs with SR-IOV that support representor ports in
> switchdev mode (maybe I misunderstood the question). If it is just an
> SR-IOV NIC then you are correct.
[Mooney, Sean K] ah yes, if the nic and ovs both support representor ports
and tc flower then the datapath will auto-negotiate what can be offloaded
vs what has to take the exception path via the kernel dataplane.
> 
> 
> When nova generates the libvirt xml for that interface it will
> configure that port to use sriov direct pass-through.
> If ovs does not support managing that nic via the representor netdev,
> or the nic does not support the tc flower protocol, then the port add
> will not fail, as we are just adding the representor netdev as a
> normal port, but ovs will not be able to perform any control plane
> actions on it. There is no way for a libvirt hostdevice to gracefully
> fall back to the kernel dataplane without modifying the xml. After
> all, we are not even adding the vf to ovs; we are adding a representor
> port to ovs, so the dataplane is entirely bypassing ovs for
> unsupported nics.
> 
> 
> As long as the host has a vf available and the ovs ml2 driver is
> listed before the sriov nic agent ml2 driver, you will get into this
> broken state.
> 
> > The ovs 2.8.0 code tries to offload each datapath rule to NIC
> > hardware; if that fails, it falls back to the ovs kernel datapath.
> > So if you have a NIC that can offload classification on vlan and the
> > output action, only datapath flows constructed for this
> > classification and action will be offloaded to hardware.
> >
> > -Original Message-
> > From: pranab boruah [mailto:pranabjyotibor...@gmail.com]
> > Sent: Wednesday, August 9, 2017 4:36 PM
> > To: OpenStack Development Mailing List (not for usage questions)
> > <openstack-dev@lists.openstack.org>
> > Subject: [openstack-dev] [os-vif] [vif_plug_ovs] Queries on VIF_Type
> > VIFHostDevice
> >
> > Hi,
> > I am experimenting with the os-vif library and stumbled upon this new
> > VIF type called VIFHostDevice. I have few general queries. TIA.
> >
> > 1. How do I create ports with VIF_type as VIFHostDevice? Looking for
> > the CLI command options.
> >
> >
> > 2. Say, I have OVS running completely on x86 host(no datapath or flow
> > offload to
> >  NIC) as the networking mechanism and a SRIOV capable NIC(for
> > existence of VF representors that wi

Re: [openstack-dev] [os-vif] [vif_plug_ovs] Queries on VIF_Type VIFHostDevice

2017-08-09 Thread Mooney, Sean K


> -Original Message-
> From: Moshe Levi [mailto:mosh...@mellanox.com]
> Sent: Wednesday, August 9, 2017 3:25 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [os-vif] [vif_plug_ovs] Queries on
> VIF_Type VIFHostDevice
> 
> Hi,
> 
> 1) you should use a neutron port with vnic_type direct
> 2) yes, just use a neutron port with vnic_type direct and configure
> the nova compute with the pci passthrough whitelist
> 3) you can configure firewall_driver = openvswitch to work with
> conntrack.
> 
> So in your case, if you have an SR-IOV nic which doesn't support
> hardware offload (but has VF representor ports) you will just fall
> back to the ovs kernel datapath.

[Mooney, Sean K] that is not what will happen with intel nics, and based on
the code I have seen in nova and neutron I would be doubtful that a
fallback will happen with mellanox.
If the neutron port has vnic_type direct it will always result in an sriov
vf being allocated for that port.
There is no check in nova to ensure ovs supports vf configuration and there
is no check in the neutron ml2 driver either. This is why I wanted the
feature-based scheduling to prevent this from happening, as that would
prevent nova from allocating the vf, which would cause scheduling to fail.

When nova generates the libvirt xml for that interface it will configure
that port to use sriov direct pass-through.
If ovs does not support managing that nic via the representor netdev, or
the nic does not support the tc flower protocol, then the port add will not
fail, as we are just adding the representor netdev as a normal port, but
ovs will not be able to perform any control plane actions on it. There is
no way for a libvirt hostdevice to gracefully fall back to the kernel
dataplane without modifying the xml. After all, we are not even adding the
vf to ovs; we are adding a representor port to ovs, so the dataplane is
entirely bypassing ovs for unsupported nics.

As long as the host has a vf available and the ovs ml2 driver is listed
before the sriov nic agent ml2 driver, you will get into this broken state.

> The ovs 2.8.0 code tries to offload each datapath rule to NIC hardware;
> if that fails, it falls back to the ovs kernel datapath.
> So if you have a NIC that can offload classification on vlan and the
> output action, only datapath flows constructed for this classification
> and action will be offloaded to hardware.
> 
> -Original Message-
> From: pranab boruah [mailto:pranabjyotibor...@gmail.com]
> Sent: Wednesday, August 9, 2017 4:36 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [os-vif] [vif_plug_ovs] Queries on VIF_Type
> VIFHostDevice
> 
> Hi,
> I am experimenting with the os-vif library and stumbled upon this new
> VIF type called VIFHostDevice. I have few general queries. TIA.
> 
> 1. How do I create ports with VIF_type as VIFHostDevice? Looking for
> the CLI command options.
> 
> 
> 2. Say, I have OVS running completely on x86 host(no datapath or flow
> offload to
>  NIC) as the networking mechanism and a SRIOV capable NIC(for existence
> of VF representors that will be added to the OVS bridge). Can I still
> launch instances with VIF_type as VIFHostDevice?
> 
> 
> 3. I want to use Security Groups using OVS+Conntrack as the mechanism.
> Can I apply SG rules on the ports of type VIFHostDevice using the above
> mechanism?
> 
> PS: I am still trying to understand this. Hence, I might get my
> premises wrong in the above questions. Will appreciate a detailed
> explanation.
> 
> Regards,
> Pranab
> 


Re: [openstack-dev] [os-vif] [vif_plug_ovs] Queries on VIF_Type VIFHostDevice

2017-08-09 Thread Mooney, Sean K


> -Original Message-
> From: pranab boruah [mailto:pranabjyotibor...@gmail.com]
> Sent: Wednesday, August 9, 2017 2:36 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [os-vif] [vif_plug_ovs] Queries on VIF_Type
> VIFHostDevice
> 
> Hi,
> I am experimenting with the os-vif library and stumbled upon this new
> VIF type called VIFHostDevice. I have few general queries. TIA.
> 
> 1. How do I create ports with VIF_type as VIFHostDevice? Looking for
> the CLI command options.
[Mooney, Sean K] hi, os-vif vif objects such as VIFHostDevice have no
direct correlation with the neutron port binding extension's vif_type or
vnic_type. That is to say, you cannot directly request VIFHostDevice via
the cli by setting a vif_type or vnic_type.
The vif objects in os-vif are data structures that encapsulate the common
data model describing a specific network interface type. In the case of
VIFHostDevice this corresponds to an sriov VF. This is then paired with an
os-vif plugin which encapsulates the port binding logic for plugging these
abstract vifs into a specific network backend. This is combined with an
os-vif port profile object which transports any backend-specific info that
cannot be generically included in the os-vif vif object, for example the
vf representor netdev address or a vswitch's bridge name.

> 
> 
> 2. Say, I have OVS running completely on x86 host(no datapath or flow
> offload to
>  NIC) as the networking mechanism and a SRIOV capable NIC(for existence
> of VF representors that will be added to the OVS bridge). Can I still
> launch instances with VIF_type as VIFHostDevice?
[Mooney, Sean K] you can launch an instance with that configuration, yes,
however you will not have any way to manage that vf via ovs. Libvirt would
still connect the dataplane to the vm via standard host passthrough/sriov,
however applying actions to the representor port attached to the ovs
bridge, such as tagging the interface with a vlan or installing openflow
rules to filter the traffic with the ovs conntrack security group driver,
would have no effect on the dataplane.

> 
> 
> 3. I want to use Security Groups using OVS+Conntrack as the mechanism.
> Can I apply SG rules on the ports of type VIFHostDevice using the above
> mechanism?

[Mooney, Sean K] that should work with a Mellanox or Netronome smart nic
and an ovs that supports the tc flower offload, if they have implemented
conntrack support, but it would not work with a generic nic. That is
something that we do intend to support in the future, but at present it
requires nic support to enable with conntrack. It may be possible to use
the learn-action openflow security group driver, if your nic does not
support conntrack, for stateless firewalling, which is still better than
what you have today with sriov, but the bottom line is you need nic support
in hardware/firmware, and ovs support for that nic offload, to make this
work.

> 
> PS: I am still trying to understand this. Hence, I might get my
> premises wrong in the above questions. Will appreciate a detailed
> explanation.
> 
> Regards,
> Pranab
> 


Re: [openstack-dev] [srv-apl-arch:7353] Re: [nova] Discussions for ivshmem support in OpenStack Nova

2017-07-27 Thread Mooney, Sean K


> -Original Message-
> From: TETSURO NAKAMURA [mailto:nakamura.tets...@lab.ntt.co.jp]
> Sent: Thursday, July 27, 2017 2:13 AM
> To: Daniel P. Berrange <berra...@redhat.com>; Mooney, Sean K
> <sean.k.moo...@intel.com>
> Cc: Jay Pipes <jaypi...@gmail.com>; OpenStack Development Mailing List
> (not for usage questions) <openstack-dev@lists.openstack.org>;
> sfinu...@redhat.com; mriede...@gmail.com; [Internal][ML] srv-apl-arch <srv-apl-a...@lab.ntt.co.jp>
> Subject: Re: [srv-apl-arch:7353] Re: [openstack-dev] [nova] Discussions
> for ivshmem support in OpenStack Nova
> 
> On 2017/07/27 0:58, Daniel P. Berrange wrote:
> > On Wed, Jul 26, 2017 at 11:53:06AM -0400, Jay Pipes wrote:
> >> On 07/26/2017 09:57 AM, Daniel P. Berrange wrote:
> >>> On Wed, Jul 26, 2017 at 09:50:23AM -0400, Jay Pipes wrote:
> >>>> On 07/26/2017 03:06 AM, TETSURO NAKAMURA wrote:
> >>>>> Hi Nova team,
> >>>>>
> >>>>> It has been quite a long time since the last discussion, but let
> >>>>> me make sure one thing about the thread below.
> >>>>>
> >>>>> IIUC, Nova is not welcome ivshmem support because it is no longer
> >>>>> supported by DPDK+QEMU.
> >>>>>
> >>>>> But how would you say if it is supported out of DPDK-tree and can
> >>>>> be used from the newest qemu version ?
> >>>>>
> >>>>> We are now developing SPP, a DPDK-based vswitch, and thinking
> >>>>> about trying to implement ivshmem support under our SPP code tree
> >>>>> if nova (or at first libvirt community) is acceptable for ivshmem
> configuration.
> >>>>>
> >>>>> Your advice will be very helpful for our decision-making in our
> project.
> >>>>
> >>>> I think this is a question that the libvirt community would first
> >>>> need to weigh in on since Nova is downstream from libvirt -- at
> >>>> least in the sense of low-level hypervisor support.
> >>>
> >>> Libvirt already supports ivshmem device config
> >>>
> >>> http://libvirt.org/formatdomain.html#elementsShmem
> >>
> >> Sorry, I suppose I should have said QEMU, not libvirt. Daniel, you
> >> were the one that specifically discouraged doing anything on ivshmem
> to Tetsuro:
> >>
> >> http://lists.openstack.org/pipermail/openstack-dev/2017-July/120136.html
> >
> > 'ivshmem' was the original device in QEMU and that is indeed still
> > deprecated.
> >
> > There are now two replacements 'ivshmem-plain' and 'ivshmem-doorbell'
> > which can be used instead, which are considered supported by QEMU,
> > though most people will still recommend using 'vhostuser' instead if
> > the use of ivshmem is at all network related.
> >
> > Regards,
> > Daniel
> >
> 
> Thank you very much for the information about current status of ivshmem
> in QEMU.
> I now understand that 'ivshmem', 'ivshmem-plain' and 'ivshmem-doorbell'
> are different solutions, and libvirt already supports the latter two.
> 
> + Mr. Sean Mooney
> Did you mean that you caution against building new solutions ontop of
> 'ivshmem' or ontop of 'ivshmem-plain' and 'ivshmem-doorbell' too?
[Mooney, Sean K] I would caution against building any networking-based use
cases on top of ivshmem.
The move to using a memdev, instead of directly specifying shm args to qemu
for ivshmem-plain/doorbell, should allow hugepage memory to be used instead
of posix shared memory, which would be needed for spp.

That said, just because you can use ivshmem-plain/doorbell with hugepages
via a memdev from the qemu command line does not mean it is supported by
libvirt (https://libvirt.org/formatdomain.html#elementsShmem) or that it is
the best approach; upstream development has shifted to virtio and
vhost-user.
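For reference, what libvirt's domain xml does support today is along these
lines (the name and size are illustrative; note there is no memdev or
hugepage knob here, which is exactly the gap):

    <shmem name='spp-shm'>
      <model type='ivshmem-plain'/>
      <size unit='M'>64</size>
    </shmem>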
Zero-copy tx from a vm is already supported with vhost-user and ovs-dpdk.
Zero-copy rx is under development, and that is the main delta in
performance between ivshmem and vhost-user that remains.
Both of these features could be ported to the spp.

Vhost-user does have some inherent overhead, such as creating the
descriptor rings required by the virtio spec, but virtio gives you
portability, and performance when coupled with multi-queue and the dpdk
vhost pmd in the guest.

If vhost-user is really not sufficient for your use case, I would first
suggest extending libvirt to allow passing a memdev name as a parameter to
the shmem element.
At that point we could discuss how to request openstack

Re: [openstack-dev] [nova] Discussions for ivshmem support in OpenStack Nova

2017-07-26 Thread Mooney, Sean K


> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Wednesday, July 26, 2017 2:50 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>; TETSURO NAKAMURA
> <nakamura.tets...@lab.ntt.co.jp>; Daniel P. Berrange
> <berra...@redhat.com>; sfinu...@redhat.com; mriede...@gmail.com
> Cc: [Internal][ML] srv-apl-arch <srv-apl-a...@lab.ntt.co.jp>
> Subject: Re: [openstack-dev] [nova] Discussions for ivshmem support in
> OpenStack Nova
> 
> On 07/26/2017 03:06 AM, TETSURO NAKAMURA wrote:
> > Hi Nova team,
> >
> > It has been quite a long time since the last discussion, but let me
> > make sure one thing about the thread below.
> >
> > IIUC, Nova is not welcome ivshmem support because it is no longer
> > supported by DPDK+QEMU.
> >
> > But how would you say if it is supported out of DPDK-tree and can be
> > used from the newest qemu version ?
> >
> > We are now developing SPP, a DPDK-based vswitch, and thinking about
> > trying to implement ivshmem support under our SPP code tree if nova
> > (or at first libvirt community) is acceptable for ivshmem
> configuration.
> >
> > Your advice will be very helpful for our decision-making in our
> project.
> 
> I think this is a question that the libvirt community would first need
> to weigh in on since Nova is downstream from libvirt -- at least in the
> sense of low-level hypervisor support.
[Mooney, Sean K] well, ivshmem was deprecated in dpdk and removed, and it
was never supported with hugepage memory instead of posix shared memory in
qemu, where it is also deprecated, so the first community to approach would
be qemu.

I would caution against building new solutions on top of ivshmem unless you
have first measured and demonstrated that vhost-user is not suitable.
> 
> Best,
> -jay
> 


Re: [openstack-dev] [os-vif] 1.6.1 release for pike.

2017-07-19 Thread Mooney, Sean K
You are right, but adding [os-vif] lands it in my os-vif folder, so I guess
[openstack-dev][os-vif][nova][neutron] 1.6.1 release for pike
would have made it work for everyone :)

> -Original Message-
> From: Matt Riedemann [mailto:mriede...@gmail.com]
> Sent: Tuesday, July 18, 2017 10:35 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [os-vif] 1.6.1 release for pike.
> 
> On 7/18/2017 12:07 PM, Mooney, Sean K wrote:
> > Resending with correct subject line
> 
> The real correct subject line tag would be [nova] or [nova][neutron].
> :P
> 
> --
> 
> Thanks,
> 
> Matt
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [os-vif] 1.6.1 release for pike.

2017-07-18 Thread Mooney, Sean K
Resending with correct subject line

From: Mooney, Sean K
Sent: Tuesday, July 18, 2017 4:54 PM
To: openstack-dev@lists.openstack.org
Cc: Jay Pipes <jpi...@mirantis.com>; 'Stephen Finucane' <sfinu...@redhat.com>; 
Moshe Levi <mosh...@mellanox.com>; 'rbry...@redhat.com' <rbry...@redhat.com>; 
'maxime.le...@6wind.com' <maxime.le...@6wind.com>; sahid 
<sahid.ferdja...@redhat.com>; 'jan.gut...@netronome.com' 
<jan.gut...@netronome.com>
Subject: os-vif 1.6.1 release for pike.


Hi

We are approaching the non-client library freeze on Thursday.
Below is a list of pending patches that I think would be good to review for
inclusion in pike.

Should have:
Improve OVS Representor Lookup  https://review.openstack.org/#/c/484051/
Add support for VIFPortProfileOVSRepresentor 
https://review.openstack.org/#/c/483921/
unplug_vf_passthrough: don't try to delete representor netdev 
https://review.openstack.org/#/c/478820/


Nice to have:
Add memoize function using oslo.cache https://review.openstack.org/#/c/472773/
Read datapath_type from VIF object https://review.openstack.org/#/c/474914/
doc: Remove cruft from releasenotes conf.py 
https://review.openstack.org/#/c/480092/2

Queens:
add host port profile info class https://review.openstack.org/#/c/441590/
Add abstract OVSDB API https://review.openstack.org/#/c/476612/
Add native implementation OVSDB API https://review.openstack.org/#/c/482226/
*Migration from 'ip' commands to pyroute2 
https://review.openstack.org/#/c/484386/
*Convert all 'ip-link set' commands to pyroute2 
https://review.openstack.org/#/c/451433/
Add Virtual Ethernet device pair  https://review.openstack.org/#/c/484726/
objects: Add 'dns_domain' attribute to 'Network' 
https://review.openstack.org/#/c/480630/
Add Constraints support https://review.openstack.org/#/c/413325/


*These do the same thing.
The items in the "should have" list are required to complete the netronome
and mellanox hardware-accelerated ovs integration.

The nice-to-have items are small cleanups that are not vital but would be
nice to merge sooner rather than later. The remaining items, while I would
like to see them merged, I think need more work, so I would suggest moving
them to queens.

If people have time it would be good to review these items today, and I
will submit a patch to
https://github.com/openstack/releases/blob/master/deliverables/pike/os-vif.yaml 
to introduce version 1.6.1 tomorrow.

Regards
Sean.




[openstack-dev] os-vif 1.6.1 release for pike.

2017-07-18 Thread Mooney, Sean K

Hi

We are approaching the non-client library freeze on Thursday.
Below is a list of pending patches that I think would be good to review for
inclusion in pike.

Should have:
Improve OVS Representor Lookup  https://review.openstack.org/#/c/484051/
Add support for VIFPortProfileOVSRepresentor 
https://review.openstack.org/#/c/483921/
unplug_vf_passthrough: don't try to delete representor netdev 
https://review.openstack.org/#/c/478820/


Nice to have:
Add memoize function using oslo.cache https://review.openstack.org/#/c/472773/
Read datapath_type from VIF object https://review.openstack.org/#/c/474914/
doc: Remove cruft from releasenotes conf.py 
https://review.openstack.org/#/c/480092/2

Queens:
add host port profile info class https://review.openstack.org/#/c/441590/
Add abstract OVSDB API https://review.openstack.org/#/c/476612/
Add native implementation OVSDB API https://review.openstack.org/#/c/482226/
*Migration from 'ip' commands to pyroute2 
https://review.openstack.org/#/c/484386/
*Convert all 'ip-link set' commands to pyroute2 
https://review.openstack.org/#/c/451433/
Add Virtual Ethernet device pair  https://review.openstack.org/#/c/484726/
objects: Add 'dns_domain' attribute to 'Network' 
https://review.openstack.org/#/c/480630/
Add Constraints support https://review.openstack.org/#/c/413325/


*These do the same thing.

The items in the "should have" list are required to complete the netronome
and mellanox hardware-accelerated ovs integration.

The nice-to-have items are small cleanups that are not vital but would be
nice to merge sooner rather than later. The remaining items, while I would
like to see them merged, I think need more work, so I would suggest moving
them to queens.

If people have time it would be good to review these items today, and I
will submit a patch to
https://github.com/openstack/releases/blob/master/deliverables/pike/os-vif.yaml 
to introduce version 1.6.1 tomorrow.

Regards
Sean.




Re: [openstack-dev] [openstack-dev[[nova] Simple question about sorting CPU topologies

2017-06-20 Thread Mooney, Sean K


> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Tuesday, June 20, 2017 5:59 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [openstack-dev[[nova] Simple question
> about sorting CPU topologies
> 
> On 06/20/2017 12:53 PM, Chris Friesen wrote:
> > On 06/20/2017 06:29 AM, Jay Pipes wrote:
> >> On 06/19/2017 10:45 PM, Zhenyu Zheng wrote:
> >>> Sorry, The mail sent accidentally by mis-typing ...
> >>>
> >>> My question is, what is the benefit of the above preference?
> >>
> >> Hi Kevin!
> >>
> >> I believe the benefit is so that the compute node prefers CPU
> >> topologies that do not have hardware threads over CPU topologies
> that
> >> do include hardware threads.
[Mooney, Sean K] if you have not expressed that you want the require or
isolate policy, then you really can't infer which is better: for some
workloads, preferring hyperthread siblings will improve performance (2
threads sharing data via the l2 cache), and for others it will reduce it
(2 threads that do not share data).
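For reference, this is how a user expresses that preference today via
flavor extra specs (the flavor name is made up):

    # require dedicated pcpus and forbid sharing a core with siblings
    openstack flavor set nfv.pinned \
      --property hw:cpu_policy=dedicated \
      --property hw:cpu_thread_policy=isolate   # or 'require' / 'prefer'

Absent one of these, the scheduler is effectively guessing about the
workload.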
> >>
> >> I'm not sure exactly of the reason for this preference, but perhaps
> >> it is due to assumptions that on some hardware, threads will compete
> >> for the same cache resources as other siblings on a core whereas
> >> cores may have their own caches (again, on some specific hardware).
> >
> > Isn't the definition of hardware threads basically the fact that the
> > sibling threads share the resources of a single core?
> >
> > Are there architectures that OpenStack runs on where hardware threads
> > don't compete for cache/TLB/execution units?  (And if there are, then
> > why are they called threads and not cores?)
[Mooney, Sean K] well, on x86, when you turn on hyperthreading your L1 data
and instruction cache is partitioned in 2, with each half allocated to a
thread sibling. The L2 cache, which is also per core, is shared between the
2 thread siblings, so on intel's x86 implementation the threads do not
compete for L1 cache but do share L2. That could easily change in new
generations, though.

Pre-Zen, I believe AMD shared the floating point units between each smt
thread but had separate integer execution units that were not shared. That
meant that for integer-heavy workloads their smt implementation approached
2X performance, limited by the shared load and store units, and scaling
dropped to 0 if both threads tried to access the floating point execution
unit concurrently.

So it's not quite as clean-cut as saying the threads do or don't share
resources. Each vendor addresses this differently; even within x86 you are
not required to have the partitioning described above for the cache, as
intel did, or for the execution units. On other architectures I'm sure they
have come up with equally inventive ways to make this an interesting shade
of grey when describing the difference between a hardware thread and a
full core.

> 
> I've learned over the years not to make any assumptions about hardware.
> 
> Thus my "not sure exactly" bet-hedging ;)
[Mooney, Sean K] yep hardware is weird and will always find ways to break your 
assumptions :)
> 
> Best,
> -jay
> 


Re: [openstack-dev] [nova][scheduler][placement] Allocating Complex Resources

2017-06-07 Thread Mooney, Sean K


> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Wednesday, June 7, 2017 6:47 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova][scheduler][placement] Allocating
> Complex Resources
> 
> On 06/07/2017 01:00 PM, Edward Leafe wrote:
> > On Jun 6, 2017, at 9:56 AM, Chris Dent <cdent...@anticdent.org> wrote:
> >>
> >> For clarity and completeness in the discussion some questions for
> >> which we have explicit answers would be useful. Some of these may
> >> appear ignorant or obtuse and are mostly things we've been over
> >> before. The goal is to draw out some clear statements in the present
> >> day to be sure we are all talking about the same thing (or get us
> >> there if not) modified for what we know now, compared to what we
> knew
> >> a week or month ago.
> >
> > One other question that came up: do we have any examples of any
> > service (such as Neutron or Cinder) that would require the modeling
> > for nested providers? Or is this confined to Nova?
> 
> The Cyborg project (accelerators like FPGAs and some vGPUs) need nested
> resource providers to model the relationship between a virtual resource
> context against an accelerator and the compute node itself.
[Mooney, Sean K] neutron will need to use nested resource providers to
track network-backend-specific consumable resources in the future also. One
example is hardware-offloaded virtual (e.g. virtio/vhost-user) interfaces,
which due to their hardware-based implementation are both a finite
consumable resource and have numa affinity, and therefore need to be
tracked as nested.

Another example for neutron would be bandwidth-based scheduling / sla
enforcement, where we want to guarantee that a specific bandwidth is
available on the selected host for a vm to consume. From an ovs/vpp/linux
bridge perspective this would likely be tracked at the physnet level, so
when selecting a host we would want to ensure that the physnet is both
available from the host and has enough bandwidth available to reserve for
the instance.

Today nova and neutron track neither of the above, but at least the latter
has been started in the sriov context, without placement, and should be
extended to other non-sriov backends.
Snabb switch actually supports this already with vendor extensions via the
neutron binding:profile
https://github.com/snabbco/snabb/blob/b7d6d77ba5fd6a6b9306f92466c1779bba2caa31/src/program/snabbnfv/doc/neutron-api-extensions.md#bandwidth-reservation
but nova is not aware of the capacity or availability info when placing the
instance, so if the host cannot fulfill the request it degrades to the
least oversubscribed port.
https://github.com/snabbco/snabb-neutron/blob/master/snabb_neutron/mechanism_snabb.py#L194-L200

With nested resource providers they could harden this request from best
effort to a guaranteed bandwidth reservation, by informing the placement
api of the bandwidth availability of the physical interfaces, and of their
numa affinity, by creating nested resource providers.
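A minimal sketch of what creating such a child provider could look like
(nested providers were still in flight at the time of writing, so this
assumes the parent_provider_uuid support that eventually landed in
placement microversion 1.14, plus made-up names and uuids):

    import requests

    PLACEMENT = 'http://controller:8778/placement'  # assumed endpoint
    HEADERS = {'X-Auth-Token': '...',  # keystone token
               'OpenStack-API-Version': 'placement 1.14'}

    compute_rp_uuid = '...'  # the existing compute node provider
    # create a per-physnet bandwidth provider nested under the compute node
    requests.post('%s/resource_providers' % PLACEMENT, headers=HEADERS,
                  json={'name': 'compute-1:eth0:physnet0',
                        'parent_provider_uuid': compute_rp_uuid})

Bandwidth inventory would then be reported on that child provider rather
than on the compute node itself.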

> 
> Best,
> -jay
> 


Re: [openstack-dev] [nova-scheduler] Get scheduler hint

2017-05-02 Thread Mooney, Sean K


> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Tuesday, May 2, 2017 5:59 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova-scheduler] Get scheduler hint
> 
> On 05/02/2017 12:33 PM, Giuseppe Di Lena wrote:
> > Thank you a lot! :-).
> > Actually, we are also working in parallel to implement the algorithm
> with tacker, but for this project we will only use the basic modules in
> OpenStack and Heat.
[Mooney, Sean K] if you can use heat, you can use the server anti-affinity
filter: create a server group per port-pair-group and ensure that two
instances of the same port-pair-group do not reside on the same server. You
could also use heat's/senlin's scaling groups to define the instance count
of each SF in the chain.
There was a presentation on this in Barcelona:
https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/15037/on-building-an-auto-healing-resource-cluster-using-senlin
https://www.youtube.com/watch?v=bmdU_m6vRZc
But as jay says below, it depends on what you are trying to solve. If you
are trying to model ha constraints for a service chain, the above may help;
however, it is likely outside the scope of nova/placement api to support
this directly.
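To make the server-group part concrete, a minimal CLI sketch (names are
made up):

    # one anti-affinity group per port-pair-group
    openstack server group create --policy anti-affinity ppg1
    # boot each SF instance of that port-pair-group with the group hint
    openstack server create --flavor vnf.small --image sf-image \
      --hint group=<uuid of ppg1> sf-instance-1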

> >
> > If the scheduler hints are no longer supported, what is the correct
> way to give the scheduler personalized input with the instance details?
> > Best regards Giuseppe
> 
> Again, it depends on what problem you are trying to solve with this
> personalized input... what would the scheduler do with the length of
> the service chain as an input? What are you attempting to solve?
> 
> Best,
> -jay
> 


Re: [openstack-dev] [intel experimental ci] Is it actually checking anything?

2017-04-18 Thread Mooney, Sean K
> -Original Message-
> From: Mikhail Medvedev [mailto:mihail...@gmail.com]
> Sent: Monday, April 17, 2017 10:51 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Cc: openstack-networking-ci <openstack-networking...@intel.com>
> Subject: Re: [openstack-dev] [intel experimental ci] Is it actually
> checking anything?
> 
> On Mon, Apr 17, 2017 at 12:31 PM, Jay Pipes <jaypi...@gmail.com> wrote:
> > Please see the below output from the Intel Experimental CI (from
> > https://review.openstack.org/414769):
> >
> > On 04/17/2017 01:28 PM, Intel Experimental CI (Code Review) wrote:
> >>
> >> Intel Experimental CI has posted comments on this change.
> >>
> >> Change subject: placement: SRIOV PF devices as child providers
> >>
> ..
> >>
> >>
> >> Patch Set 17:
> >>
> >> Build succeeded (check pipeline).
> >>
> >> - tempest-dsvm-full-nfv-xenial
> >> http://intel-openstack-ci-logs.ovh/portland/2017-04-17/414769/17/check/tempest-dsvm-full-nfv-xenial/1bcdb64
> >> : FAILURE in 38m 34s (non-voting)
> >> - tempest-dsvm-intel-nfv-xenial
> >> http://intel-openstack-ci-logs.ovh/portland/2017-04-17/414769/17/check/tempest-dsvm-intel-nfv-xenial/a21d879
> >> : FAILURE in 40m 00s (non-voting)
> >> - tempest-dsvm-multinode-ovsdpdk-nfv-networking-xenial
> >> http://intel-openstack-ci-logs.ovh/portland/2017-04-17/414769/17/check/tempest-dsvm-multinode-ovsdpdk-nfv-networking-xenial/837e59d
> >> : FAILURE in 47m 45s (non-voting)
> >
> >
> > As you can see, it says the build succeeded, but all three jobs in
> the
> > pipeline failed.
> 
> This would happen when CI is voting but all the jobs in a check are
> non-voting. Zuul ignores non-voting job result, and as there isn't a
> single voting job, it reports 'build succeeded'. Maybe it should be a
> zuul bug?
[Mooney, Sean K] yes, this is the case: we have all jobs currently set to
non-voting.
I had the same question in the past to our ci team; zuul's comment is
slightly non-intuitive, but from zuul's point of view the build did
succeed, as it executed all the ci tasks and uploaded the results. As to
what this is running: we are moving the intel-nfv-ci out of our development
lab into a datacenter. The experimental ci is a duplicate of our normal
intel-nfv-ci, and the intent is to swap the names and decommission our ci
in our dev lab by the end of April, all going well.
> 
> >
> > Is someone actively looking into this particular 3rd party CI system?
> 
> I do not see anything wrong with that CI (apart from misleading comment
> due to zuul issue I mentioned above).
[Mooney, Sean K] it is being maintained by the same people who maintain the
main intel-nfv-ci.
As I said above, we will be swapping the main intel-nfv-ci account over to
this new hardware later this month, all going well. Once we have swapped
over, the Intel Experimental CI account will be shut down and the infra in
our dev lab decommissioned.

> 
> >
> > Best,
> > -jay
> >
> >
> 
> ---
> Mikhail Medvedev
> IBM
> 


Re: [openstack-dev] [kolla-ansible] [kolla] Am I doing this wrong?

2017-01-24 Thread Mooney, Sean K


> -Original Message-
> From: Paul Bourke [mailto:paul.bou...@oracle.com]
> Sent: Tuesday, January 24, 2017 11:49 AM
> To: OpenStack Development Mailing List (not for usage questions) <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [kolla-ansible] [kolla] Am I doing this wrong?
> 
> Ah, I think you may be misreading what Sean is saying there. What he means is
> kolla-ansible provides the bare minimum config templates to make the service
> work. To template every possible config option would be too much of a
> maintenance burden on the project.
> 
> Of course, users will want to customise these. But instead of modifying the
> templates directly, we recommend you use the "config override"
> mechanism [0]
> 
> This has a number of benefits, the main one being that you can pick up new
> releases of Kolla and not get stuck in merge hell, Ansible will pick up the 
> Kolla base
> templates and merge them with user provided overrides.
[Mooney, Sean K] Paul is correct here. I did not intend to suggest that 
kolla-ansible should not
be used to generate and manage config files. I simply wanted to point out that 
where
customization of a config is required, it is preferable to use the config 
override mechanism
when possible rather than modifying the ansible templates directly.
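For example (a minimal sketch, assuming the default /etc/kolla layout and the 
merge behaviour described in [0] below), to override a nova option you would 
create /etc/kolla/config/nova.conf containing only the delta:

  # /etc/kolla/config/nova.conf -- just the options you want to change
  [DEFAULT]
  cpu_allocation_ratio = 4.0

kolla-ansible should then merge this with its base nova.conf template on the 
next deploy/reconfigure run.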
> 
> Wrt to the fact gathering, I understand your concern, we essentially have the 
> same
> problem in our team. It can be raised again for further discussion, I'm sure 
> there's
> other ways it can be solved.
[Mooney, Sean K] I believe you are intended to be able to use the ansible 
--limit and --tags flags
to restrict the plays executed and the nodes processed by a deploy or upgrade 
command.
I have used the --tags flag successfully in the past; I have had less success 
with the --limit flag.
In theory, with the right combination of --limit and --tags, you should be able to 
constrain the nodes
on which facts are gathered to just those that would be modified, e.g. 2-3 
instead of hundreds. 
> 
> [0]
> http://docs.openstack.org/developer/kolla-ansible/advanced-
> configuration.html#openstack-service-configuration-in-kolla
> 
> -Paul
> 
> On 23/01/17 18:03, Kris G. Lindgren wrote:
> > Hi Paul,
> >
> >
> >
> > Thanks for responding.
> >
> >
> >
> >> The fact gathering on every server is a compromise taken by Kolla to
> >
> >> work around limitations in Ansible. It works well for the majority of
> >
> >> situations; for more detail and potential improvements on this please
> >
> >> have a read of this post:
> >
> >> http://lists.openstack.org/pipermail/openstack-dev/2016-November/1078
> >> 33.html
> >
> >
> >
> > So my problem with this is the logging in to the compute nodes.  While
> > this may be fine for a smaller deployment.  Logging into thousands,
> > even hundreds, of nodes via ansible to gather facts, just to do a
> > deployment against 2 or 3 of them is not tenable.  Additionally, in
> > our higher audited environments (pki/pci) will cause our auditors heartburn.
> >
> >
> >
> >> I'm not quite following you here, the config templates from
> >
> >> kolla-ansible are one of it's stronger pieces imo, they're reasonably
> >
> >> well tested and maintained. What leads you to believe they shouldn't
> >> be
> >
> >> used?
> >
> >>
> >
> >> > * Certain parts of it are 'reference only' (the config tasks),
> >
> >>  > are not recommended
> >
> >>
> >
> >> This is untrue - kolla-ansible is designed to stand up a stable and
> >
> >> usable OpenStack 'out of the box'. There are definitely gaps in the
> >
> >> operator type tasks as you've highlighted, but I would not call it
> >
> >> 'reference only'.
> >
> >
> >
> > http://eavesdrop.openstack.org/irclogs/%23openstack-kolla/%23openstack
> > -kolla.2017-01-09.log.html#t2017-01-09T21:33:15
> >
> >
> >
> >
> > This is where we were told the config stuff was "reference only"?
> >
> >
> >
> >
> ___
> >
> > Kris Lindgren
> >
> > Senior Linux Systems Engineer
> >
> > GoDaddy
> >
> >
> >
> >
> 
> __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] vhost-user server mode and reconnect

2017-01-17 Thread Mooney, Sean K
Hi everyone
I first proposed a series of patches to enable vhost-user with a
qemu server / ovs client topology last July, before the relevant changes
to enable this configuration had been released in ovs with dpdk.

Since then ovs 2.6 is out and shipping (2.7 will be out soon),
and all of the dependencies on nova, os-vif, dpdk, qemu and the requirements
repo have been merged.
The final piece to enable this feature with the ovs agent backend is
https://review.openstack.org/#/c/344997/9
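For reference, in ovs 2.6 this qemu-server/ovs-client topology corresponds to
ovs ports of type dpdkvhostuserclient; a minimal sketch on the ovs side, where
the bridge, port name and socket path are just examples:

  ovs-vsctl add-port br-int vhu-vm1 -- set Interface vhu-vm1 \
      type=dpdkvhostuserclient \
      options:vhost-server-path=/var/run/openvswitch/vhu-vm1.sock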

It has been a while since this patch was actively reviewed, so I have
added everyone who has previously reviewed this change to the To:
line, and I would ask that if you have time to review it, please do.

I would like to get this feature finished and merged before the ocata
code freeze next week if possible. Given that the code has been largely
unchanged since your initial review, bar addressing the comments raised,
I think it is in a stable state and ready to merge unless other issues are 
raised.

Regards
Seán
--
Intel Shannon Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263
Business address: Dromore House, East Park, Shannon, Co. Clare

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] ovsdpdk mitaka

2017-01-10 Thread Mooney, Sean K
Hi
In mitaka, all support for ovs-dpdk was merged upstream into the standard 
neutron openvswitch agent and ml2 driver.
The networking-ovs-dpdk-agent was removed in liberty and the ml2 driver was 
removed in mitaka.
For mitaka+, networking-ovs-dpdk primarily provides a devstack plugin to compile 
and install ovs and dpdk from source, a puppet module (now deprecated) to do the 
same,
and a learn-action-based firewall driver.

The stable mitaka branch of networking-ovs-dpdk has only been tested with 
Ubuntu 14.04, CentOS 7 and, I believe, Fedora 22.
It may work on Ubuntu 16.04, but I am not sure whether there are systemd patches 
from newton that have not been backported.
Regards
sean

From: Shaughnessy, David [mailto:david.shaughne...@intel.com]
Sent: Tuesday, January 10, 2017 12:04 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] ovsdpdk mitaka

Hi Santosh.
There is a getting started guide in the networking-ovs-dpdk project that should 
be of some help.[1]
It’s written for Ubuntu 14.04, but the master branch / stable newton is for 
Ubuntu 16.04.
Regards.
David.


[1] 
https://github.com/openstack/networking-ovs-dpdk/blob/stable/mitaka/doc/source/getstarted/devstack/ubuntu.rst


From: Santosh S [mailto:santoshsethu2...@gmail.com]
Sent: Tuesday, January 3, 2017 10:45 AM
To: openstack-dev@lists.openstack.org; Santosh S
Subject: [openstack-dev] ovsdpdk mitaka


Hello Folks,

I am a learner in openstack trying to understand cloud computing.
Here, I am attempting to install the networking-ovs-dpdk-agent in an openstack
mitaka release on a controller and compute node setup with ubuntu 16.04.

Could you please help me with the steps I need to follow to bring ovs-dpdk up in
this 2 node setup.

It would be great if you help me on this.

Thank you
Santosh

--
Intel Research and Development Ireland Limited
Registered in Ireland
Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
Registered Number: 308263

This e-mail and any attachments may contain confidential material for the sole 
use of the intended recipient(s). Any review or distribution by others is 
strictly prohibited. If you are not the intended recipient, please contact the 
sender and delete all copies.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [poll][kolla] Name of baremetal role/group

2016-09-09 Thread Mooney, Sean K


> -Original Message-
> From: Mark Casey [mailto:markca...@pointofrental.com]
> Sent: Thursday, September 8, 2016 7:38 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [poll][kolla] Name of baremetal role/group
> 
> On 9/8/2016 2:21 AM, Martin André wrote:
> > On Wed, Sep 7, 2016 at 11:58 PM, Steven Dake (stdake)
> <std...@cisco.com> wrote:
> >> Sean,
> >>
> >>
> >>
> >> I’d recommend deploy-hosts (I assume this is the bootstrap renamed?)
> > +1
> > I also like deploy-host better than the other proposed names. Can we
> > update the poll to include this option?
> 
> I have some concern that present-day conversation, operator support,
> and code reviews often refer to the host that will run kolla-ansible as
> the 'kolla host' or the 'deploy host'. So I worry this will be very
> ambiguous (i.e.: "your next step is to run deploy-hosts on the
> deployment host to deploy the hosts"). I'd lean more towards
> terminology like "target-prep", "install-node-deps", or similar.
[Mooney, Sean K] I think the topic here has drifted from what I originally 
wanted
to capture in this thread, but I still find this interesting.

What I was originally asking is whether we should change the name of the group
in the multinode inventory
https://github.com/openstack/kolla/blob/99897c5438f59b3fa40ade388e0eafe6a0fbfffb/ansible/inventory/multinode#L30
and the corresponding ansible role name:
https://github.com/openstack/kolla/tree/master/ansible/roles/baremetal

This would not be exposed to the end user at all and was just a question of 
internal naming
of the role/group, given that the term baremetal may be confused with ironic or 
bifrost.
> 
> imho this is really an opportunity to be more consistent with these
> terms project-wide or (perhaps more reasonably) just work towards
> making other references match the decision here. We tell a lot of
> people to start with AIO via vagrant and it (the Vagrantfile - likely
> the defacto most-read documentation we have)
[Mooney, Sean K] Really? I have been using kolla for a while now and I did not
even know there was a Vagrantfile. Even if I had known it was there, it would
be the last thing I would think of reading as documentation; the quickstart
guide and the other documentation we have is much more useful. Vagrant would 
certainly
not be my first choice when introducing someone new to kolla: you don't want to 
have to
learn another workflow, e.g. vagrant, when you are trying to wrap your head 
around ansible, jinja and docker.

> refers to these as the
> operator host and the nodes, most of the documentation calls them the
> deployment host and either the target nodes or deployment targets, and
> this change would make the name of the step to install dependencies on
> the nodes/target nodes/deployment targets to deploy-host[s] which
> sounds more like a reference to creating multiple instances of the
> deployment host/operator (I know, that's not even a thing).
> 
> Obviously you can't control what Kolla users refer to these components
> as when asking for help or etc., but I suspect it may be frustrating
> for them to use different official names as they make their particular
> progression through stages such as AIO vagrant, AIO baremetal,
> multinode in VMs, and multinode baremetal (actually, if you're having
> to read all of these sentences more slowly or even twice because I'm
> using all of the common terms in all contexts - *that*).  :D
> 
> 
> >> I’d also add a duplicate API of “deploy” and mark deploy as
> >> deprecated and follow the standard deprecation policies.  I’d
> >> recommend making the new OpenStack specific deploy command
> >> deploy-openstack
> > Agreed.
> >
> > Martin
> 
> I think I'm lost on this part. Does 'deploy'/deploy command here refer
> to 'kolla-ansible deploy' or something else entirely? AFAIK that is
> still a wholly separate step, unless we were just trying to make it
> consistent with the role rename at hand.
[Mooney, Sean K] Yes, this was a separate topic which is already tracked by 
https://bugs.launchpad.net/kolla/+bug/1616221
> 
> Thank you,
> Mark
> 
> >> Regards
> >>
> >> -steve
> >>
> >>
> >>
> >>
> >>
> >> From: "sean.k.moo...@intel.com" <sean.k.moo...@intel.com>
> >> Reply-To: "OpenStack Development Mailing List (not for usage
> questions)"
> >> <openstack-dev@lists.openstack.org>
> >> Date: Wednesday, September 7, 2016 at 11:51 AM
> >> To: "OpenStack Development Mailing List (not for us

Re: [openstack-dev] [poll][kolla] Name of baremetal role/group

2016-09-09 Thread Mooney, Sean K


From: Steven Dake (stdake) [mailto:std...@cisco.com]
Sent: Wednesday, September 7, 2016 10:59 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [poll][kolla] Name of baremetal role/group

Sean,

I’d recommend deploy-hosts (I assume this is the bootstrap renamed?)
[Mooney, Sean K] Hi Steve, this is not related to the command that is run 
(kolla-ansible bootstrap-servers); it is related to the name of the ansible role 
that
is invoked by the kolla-host playbook when you execute the “kolla-ansible 
bootstrap-servers” command.

I’d also add a duplicate API of “deploy” and mark deploy as deprecated and 
follow the standard deprecation policies.  I’d recommend making the new 
OpenStack specific deploy command deploy-openstack
[Mooney, Sean K] This is a separate item that I agree would be good to do. I 
have a tech debt bug to resolve this, so that we will have 
deploy-bifrost, deploy-servers and deploy-openstack. I will submit a
patch to do this before rc1.

Regards
-steve


From: "sean.k.moo...@intel.com<mailto:sean.k.moo...@intel.com>" 
<sean.k.moo...@intel.com<mailto:sean.k.moo...@intel.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, September 7, 2016 at 11:51 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [poll][kolla] Name of baremetal role/group

Hi
I recently introduced a new baremetal role/group which is used as part of the 
kolla-host playbook.
https://github.com/openstack/kolla/tree/master/ansible/roles/baremetal
This baremetal role is used to install all the dependencies required to deploy 
kolla containers on a “baremetal” host.
The host does not have to be baremetal, it can be a vm, but the term baremetal 
was originally chosen because, unlike other roles in
kolla, it installs and configures packages on the host os.

Given that kolla also has baremetal as a service via ironic and baremetal 
provisioning of servers with bifrost, the question I would like
to ask is: should we change the name of the current role that installs the kolla 
dependencies to something else?

I have created a strawpoll link for this here http://www.strawpoll.me/11175159
The options available in the strawpoll are:

· kolla-host

· host

· baremetal

· pre-install
If there are any other suggestions feel free to discuss them in this thread.
I will check the poll on Friday evening GMT and submit a patch for review if the 
consensus is that it should be changed.

Regards
Sean.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [poll][kolla] Name of baremetal role/group

2016-09-07 Thread Mooney, Sean K
Hi
I recently introduced a new baremetal role/group which is used as part of the 
kolla-host playbook.
https://github.com/openstack/kolla/tree/master/ansible/roles/baremetal
This baremetal role is used to install all the dependencies required to deploy 
kolla containers on a "baremetal" host.
The host does not have to be baremetal, it can be a vm, but the term baremetal 
was originally chosen because, unlike other roles in
kolla, it installs and configures packages on the host os.

Given that kolla also has baremetal as a service via ironic and baremetal 
provisioning of servers with bifrost, the question I would like
to ask is: should we change the name of the current role that installs the kolla 
dependencies to something else?

I have created a strawpoll link for this here http://www.strawpoll.me/11175159
The options available in the strawpoll are:

* kolla-host

* host

* baremetal

* pre-install
If there are any other suggestions feel free to discuss them in this thread.
I will check the poll on Friday evening GMT and submit a patch for review if the 
consensus is that it should be changed.

Regards
Sean.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Need clarify about baremetal host group and role in Ansible

2016-09-07 Thread Mooney, Sean K


> -Original Message-
> From: duon...@vn.fujitsu.com [mailto:duon...@vn.fujitsu.com]
> Sent: Wednesday, August 24, 2016 5:42 AM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [kolla] Need clarify about baremetal host
> group and role in Ansible
> 
> Hi all, sean-k-mooney,
> 
> In recent baremetal patchset [1] from sean-k-mooney, file
> ansible/inventory/multinode has following code snippet:
> 
> > [baremetal:children]
> > control
> > network
> > compute
> > storage
> 
> But all-in-one inventory does not have any change, so I have some
> questions after read through code base:
> - Why all-in-one is treated differently.
[Mooney, Sean K] In the all-in-one config the host is also the build host for 
the docker images.
I have not written the code to deploy the build host yet, hence why it is 
currently treated differently.
> - Do you treat every nodes as baremetal node?
[Mooney, Sean K] Yes, the baremetal group defines all nodes that should be 
prepared for use in hosting kolla services, so it should include all nodes in 
the cloud.
Baremetal in this context has nothing to do with ironic or bifrost, but rather 
with the kolla-host playbook.
I will be sending a separate mail later today discussing whether we should change 
the name of the role and
group for more clarity, but initially the kolla-host playbook was called the 
baremetal playbook to indicate
that it makes changes to the host, unlike the other kolla playbooks, which do not.

> If the answer is "yes" so why we put it in "baremetal" role/group, I
> think it is quite misleading.
> - Why many host setup playbooks are placed in baremetal role? I think
> we can factor out to more general role.
> 
> Fix me if I wrong.
> 
> 
> [1] https://review.openstack.org/#/c/325631
> 
> Best regards,
> 
> duonghq
> PODC - Fujitsu Vietnam Ltd.
> 
> 
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][networking-sfc] need help on requesting release for networking-sfc

2016-09-01 Thread Mooney, Sean K


> -Original Message-
> From: Cathy Zhang [mailto:cathy.h.zh...@huawei.com]
> Sent: Thursday, September 1, 2016 8:03 PM
> To: Ihar Hrachyshka <ihrac...@redhat.com>; Armando M.
> <arma...@gmail.com>; Cathy Zhang <cathy.h.zh...@huawei.com>
> Cc: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron][networking-sfc] need help on
> requesting release for networking-sfc
> 
> Thanks for all your response.
> 
> We would like to have the stable branch pulled from a git commit.
> Shall we use the git hash of that commit for the intended git hash in
> the release request?
> 
> I am confused about the following statement in the release guide.
> "You need to be careful when picking a git commit to base new releases
> on. In most cases, you’ll want to tag the merge commit that merges your
> last commit in to the branch. This bug shows an instance where this
> mistake was caught. Notice the difference between the incorrect commit
> and the correct one which is the merge commit. git log 6191994..22dd683
> --oneline shows that the first one misses a handful of important
> commits that the second one catches. This is the nature of merging to
> master."
> 
> What is meant by " tag the merge commit"? How do we tag a git commit on
> our master branch?
[Mooney, Sean K] Cathy, if networking-sfc is set up the way I set up 
networking-ovs-dpdk,
e.g. following the old infra new-projects guide, the core team has the right to 
push signed tags.
In this case you would check out the commit you want to tag,
then tag it with "git tag -s x.y.z" (I think it is -s to sign the tag, but it has 
been a
while since I last did it), and then run "git push --tags gerrit" to push the tag 
to the repo.
Be careful to include at least 3 sections in the version for it to be a valid tag 
for pypi packaging.

That said, I don't know whether you have to tag the repo manually if you are using 
the openstack/releases repo.

> 
> Thanks,
> Cathy
> 
> -Original Message-
> From: Ihar Hrachyshka [mailto:ihrac...@redhat.com]
> Sent: Thursday, September 01, 2016 4:22 AM
> To: Armando M.
> Cc: Cathy Zhang; OpenStack Development Mailing List (not for usage
> questions)
> Subject: Re: [openstack-dev] [Neutron][networking-sfc] need help on
> requesting release for networking-sfc
> 
> Armando M. <arma...@gmail.com> wrote:
> 
> >
> >
> > On 31 August 2016 at 17:31, Cathy Zhang <cathy.h.zh...@huawei.com>
> wrote:
> > CC OpenStack alias.
> >
> >
> >
> > From: Cathy Zhang
> > Sent: Wednesday, August 31, 2016 5:19 PM
> > To: Armando Migliaccio; Ihar Hrachyshka; Cathy Zhang
> > Subject: need help on requesting release for networking-sfc
> >
> >
> >
> > Hi Armando/Ihar,
> >
> >
> >
> > I would like to submit a request for a networking-sfc release. I did
> > this for previous branch release by submitting a bug request in
> > launchpad before. I see that other subproject, such as L2GW, did this
> > in Launchpad for mitaka release too.
> >
> > But the Neutron stadium link
> >
> http://docs.openstack.org/developer/neutron/stadium/sub_project_guidel
> > ines.html#sub-project-release-process
> > states that “A sub-project owner proposes a patch to
> > openstack/releases repository with the intended git hash. The Neutron
> > release liaison should be added in Gerrit to the list of reviewers
> for the patch”.
> >
> >
> >
> > Could you advise which way I should go or should I do both?
> >
> >
> > Consider the developer documentation the most up to date process, so
> > please go ahead with a patch against the openstack/releases repo.
> 
> Right. There was a recent change to the process that streamlined
> release requests and hopefully made them a tad easier for both
> subproject owners as well as release liaison. Please stick to the
> latest version of the process as described in devref in master branch
> of neutron repo.
> 
> Ihar
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-ovs-dpdk][dvr] how to use dvr with networking-ovs-dpdk

2016-08-24 Thread Mooney, Sean K
You can enable dvr today with ovs-dpdk (e.g. the netdev datapath), and it will 
function well enough to pass tempest tests, but in practice, no.

Dvr with the netdev datapath has very poor performance, so much so that it is 
not usable in a production deployment.

This is because dvr uses kernel namespaces + tap devices to perform the routing, 
and tap devices are not accelerated by dpdk,
so when you add them to the dpdk datapath they are not processed in a polling 
manner by the dpdk pmd thread. Tap devices
attached to the netdev datapath are instead processed by the single-threaded 
netdev datapath, resulting in a maximum forwarding
rate of ~40,000 pps. To put that in perspective, kernel ovs can forward packets 
via a tap device at ~480,000 pps,
and dpdk will give you 5 Mpps+ via vhost-user, so using dvr with ovs-dpdk today 
is not a viable option.

I did a poc of dvr-style routing with neutron 12 months ago, based on work I did 
in 2014. The blueprint was rejected as I had not figured out how to fully 
eliminate the network namespaces
in the north-south case. We started looking at this problem again a few weeks 
ago and hope to develop a solution that will work with all ovs datapaths in 
ocata.

Today the best way to get dvr-style routing with ovs-dpdk that performs well is 
to use a controller such as ovn or odl, which implement routing as openflow 
rules.
Using openflow rules for routing, which is how my poc worked, is more efficient 
than kernel routing even with kernel ovs. With ovs-dpdk, openflow routing removes
the bottleneck introduced by the linux kernel interfaces, allowing the 
full performance of the datapath to be maintained. Both ovn and odl
support ovs-dpdk/vhost-user, and both provide openflow-based routing that 
can be used today.

Our current recommended configuration when using the ovs neutron agent with 
ovs-dpdk compute nodes is to use centralized ha routing on network nodes 
running kernel ovs, or
to use provider routing. Both solutions will give significant performance 
improvements over dvr with ovs-dpdk.
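For reference, the centralized ha option is just the stock neutron settings; a 
minimal sketch (nothing ovs-dpdk specific about it):

  # neutron.conf on the controllers: create new routers as HA by default
  [DEFAULT]
  l3_ha = True

  # l3_agent.ini on the network nodes (kernel ovs): keep routing centralized
  [DEFAULT]
  agent_mode = legacy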

Regards
Sean.

From: huangdenghui [mailto:hdh_1...@163.com]
Sent: Wednesday, August 24, 2016 3:29 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [neutron][networking-ovs-dpdk][dvr] how to use dvr 
with networking-ovs-dpdk

hi
Is it possible to use dvr with networking-ovs-dpdk now? If not, is it on 
the roadmap of networking-ovs-dpdk?


Sent from NetEase Mail mobile version

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-ovs-dpdk] conntrack security group driver with ovs-dpdk

2016-08-16 Thread Mooney, Sean K


> -Original Message-
> From: Assaf Muller [mailto:as...@redhat.com]
> Sent: Monday, August 15, 2016 2:50 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Cc: Mooney, Sean K <sean.k.moo...@intel.com>
> Subject: Re: [openstack-dev] [neutron][networking-ovs-dpdk] conntrack
> security group driver with ovs-dpdk
> 
> + Jakub.
> 
> On Wed, Aug 10, 2016 at 9:54 AM,
> <kostiantyn.volenbovs...@swisscom.com> wrote:
> > Hi,
> >> [Mooney, Sean K]
> >> In ovs 2.5 only linux kernel conntrack was supported assuming you
> had
> >> a 4.x kernel that supported it. that means that the feature was not
> >> available on bsd,windows or with dpdk.
> > Yup, I also thought about something like that.
> > I think I was at-least-slightly misguided by
> > http://docs.openstack.org/draft/networking-guide/adv-config-
> ovsfwdrive
> > r.html
> > and there is currently a statement
> > "The native OVS firewall implementation requires kernel and user
> space support for conntrack, thus requiring minimum versions of the
> Linux kernel and Open vSwitch. All cases require Open vSwitch version
> 2.5 or newer."
> 
> I agree, that statement is misleading.
[Mooney, Sean K] The 2.6 branch now exists, so it is probably ok to refer to
2.6 now: https://github.com/openvswitch/ovs/commits/branch-2.6
The release should be made around September 15th
https://github.com/openvswitch/ovs/blob/797dad21566fecc60de3ce6f93c81ad55a61fe86/Documentation/release-process.md#release-scheduling
which will be before the next openstack release.
If you would like, I can update the networking guide to reflect the change in ovs.

> 
> >
> > Do you agree that this is something to change? I think it is not OK
> to state OVS 2.6 without that being released, but in case I am not
> confusing then:
> > -OVS firewall driver with OVS that uses kernel datapath requires OVS
> > 2.5 and Linux kernel 4.3 -OVS firewall driver with OVS that uses
> > userspace datapath with DPDK (aka ovs-dpdk  aka DPDK vhost-user aka
> netdev datapath) doesn't have a Linux kernel prerequisite That is
> documented in table in " ### Q: Are all features available with all
> datapaths?":
> > http://openvswitch.org/support/dist-docs/FAQ.md.txt
> > where currently 'Connection tracking' row says 'NO' for 'Userspace' -
> > but that's exactly what has been merged recently /to become feature
> of
> > OVS 2.6
> >
> > Also when it comes to performance I came across
> > http://openvswitch.org/pipermail/dev/2016-June/071982.html, but I
> would guess that devil could be the exact flows/ct actions that will be
> present in real-life scenario.
> >
> >
> > BR,
> > Konstantin
> >
> >
> >> -Original Message-
> >> From: Mooney, Sean K [mailto:sean.k.moo...@intel.com]
> >> Sent: Tuesday, August 09, 2016 2:29 PM
> >> To: Volenbovskyi Kostiantyn, INI-ON-FIT-CXD-ELC
> >> <kostiantyn.volenbovs...@swisscom.com>; openstack-
> >> d...@lists.openstack.org
> >> Subject: RE: [openstack-dev] [neutron][networking-ovs-dpdk]
> conntrack
> >> security group driver with ovs-dpdk
> >>
> >>
> >> > -Original Message-
> >> > From: kostiantyn.volenbovs...@swisscom.com
> >> > [mailto:kostiantyn.volenbovs...@swisscom.com]
> >> > Sent: Tuesday, August 9, 2016 12:58 PM
> >> > To: openstack-dev@lists.openstack.org; Mooney, Sean K
> >> > <sean.k.moo...@intel.com>
> >> > Subject: RE: [openstack-dev] [neutron][networking-ovs-dpdk]
> >> > conntrack security group driver with ovs-dpdk
> >> >
> >> > Hi,
> >> > (sorry for using incorrect threading)
> >> >
> >> > > > About 2 weeks ago I did some light testing with the conntrack
> >> > > > security group driver and the newly
> >> > > >
> >> > > > Merged upserspace conntrack support in ovs.
> >> > > >
> >> > By 'recently' - whether you mean patch v4
> >> > http://openvswitch.org/pipermail/dev/2016-June/072700.html
> >> > or you used OVS 2.5 itself (which I think includes v2 of the same
> >> > patch series)?
> >> [Mooney, Sean K] I used http://openvswitch.org/pipermail/dev/2016-
> >> June/072700.html or specifically i used the following commit
> >>
> https://github.com/openvswitch/ovs/commit/0c87efe4b5017de4c5ae99e7b9c
> >> 3
> >> 6e8a6e846669
> >> which is just after userspac

Re: [openstack-dev] [Nova][API] Need naming suggestions for "capabilities"

2016-08-15 Thread Mooney, Sean K


> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Monday, August 15, 2016 3:34 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova][API] Need naming suggestions for
> "capabilities"
> 
> On 08/15/2016 09:27 AM, Andrew Laski wrote:
> > Currently in Nova we're discussion adding a "capabilities" API to
> > expose to users what actions they're allowed to take, and having
> > compute hosts expose "capabilities" for use by the scheduler. As much
> > fun as it would be to have the same term mean two very different
> > things in Nova to retain some semblance of sanity let's rename one or
> > both of these concepts.
> >
> > An API "capability" is going to be an action, or URL, that a user is
> > allowed to use. So "boot an instance" or "resize this instance" are
> > capabilities from the API point of view. Whether or not a user has
> > this capability will be determined by looking at policy rules in
> place
> > and the capabilities of the host the instance is on. For instance an
> > upcoming volume multiattach feature may or may not be allowed for an
> > instance depending on host support and the version of nova-compute
> > code running on that host.
> >
> > A host "capability" is a description of the hardware or software on
> > the host that determines whether or not that host can fulfill the
> > needs of an instance looking for a home. So SSD or x86 could be host
> > capabilities.
> > https://github.com/jaypipes/os-
> capabilities/blob/master/os_capabilitie
> > s/const.py
> > has a list of some examples.
> >
> > Some possible replacement terms that have been thrown out in
> > discussions are features, policies(already used), grants, faculties.
> > But none of those seemed to clearly fit one concept or the other,
> except policies.
> >
> > Any thoughts on this hard problem?
> 
> I know, naming is damn hard, right? :)
> 
> After some thought, I think I've changed my mind on referring to the
> adjectives as "capabilities" and actually think that the term
> "capabilities" is better left for the policy-like things.
> 
> My vote is the following:
> 
> GET /capabilities <-- returns a set of *actions* or *abilities* that
> the user is capable of performing
> 
> GET /traits <-- returns a set of *adjectives* or *attributes* that may
> describe a provider of some resource
> 
> I can rename os-capabilities to os-traits, which would make Sean Mooney
> happy I think and also clear up the terminology mismatch.
[Mooney, Sean K] Yep, I like that suggestion, though I'm fine with either.
os-traits is nice and short, and I like the delineation between attributes and 
abilities.
> 
> Thoughts?
> -jay
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] os-capabilities library created

2016-08-12 Thread Mooney, Sean K


> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Friday, August 12, 2016 2:20 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] os-capabilities library created
> 
> On 08/12/2016 04:05 AM, Daniel P. Berrange wrote:
> > On Wed, Aug 03, 2016 at 07:47:37PM -0400, Jay Pipes wrote:
> >> Hi Novas and anyone interested in how to represent capabilities in a
> >> consistent fashion.
> >>
> >> I spent an hour creating a new os-capabilities Python library this
> evening:
> >>
> >> http://github.com/jaypipes/os-capabilities
> >>
> >> Please see the README for examples of how the library works and how
> >> I'm thinking of structuring these capability strings and symbols. I
> >> intend os-capabilities to be the place where the OpenStack community
> >> catalogs and collates standardized features for hardware, devices,
> >> networks, storage, hypervisors, etc.
> >>
> >> Let me know what you think about the structure of the library and
> >> whether you would be interested in owning additions to the library
> of
> >> constants in your area of expertise.
> >
> > How are you expecting that these constants are used ? It seems
> > unlikely the, say nova code, code is going to be explicitly accessing
> > any of the individual CPU flag constants.
> 
> These capability strings are what deployers will associate with a
> flavor in Nova and they will be passed in the request to the placement
> API in either a "requirements" or a "preferences" list. In order to
> ensure that two OpenStack clouds refer to various capabilities (not
> just CPU flags, see below), we need a curated list of these
> standardized constants.
> 
>  > It should surely just be entirely metatadata
> > driven - eg libvirt driver would just parse libvirt capabilities XML
> > and extract all the CPU flag strings & simply export them.
> 
> You are just thinking in terms of (lib)virt/compute capabilities.
> os-capabilities intends to provide a standard set of capability
> constants for more than virt/compute, including storage, network
> devices and more.
> 
> But, yes, I imagine discovery code running on a compute node with the
> *libvirt* virt driver could indeed simply query the libvirt
> capabilities XML snippet and translate those capability codes into os-
> capabilities constants. Remember, VMWare and Hyper-V also need to do
> this discovery and translation to a standardized set of constants. So
> does ironic-inspector when it queries an IPMI interface of course.
> 
>  > It would be very
> > undesirable to have to add new code to os-capabilities every time
> that
> > Intel/AMD create new CPU flags for new features, and force users to
> > upgrade openstack to be able to express requirements on those CPU
> flags.
> 
> I don't see how we would be able to expose a particular new CPU flag
> *across disparate OpenStack clouds* unless we have some standardized
> set of constants that has been curated. Not all OpenStack clouds run
> libvirt. And again, think bigger than just virt/compute.
[Mooney, Sean K] Just as an aside, I think libvirt actually gets its capability
information from udev. Again, that won't help you on windows, but at least it 
does not
require libvirt; os-capabilities could potentially retrieve info via udev as 
well.

Ipmi will allow you to discover some capabilities of the system, but
it might be worth considering whether redfish is a fit for capabilities discovery:
http://www.dmtf.org/standards/redfish
https://www.brighttalk.com/webcast/9077/163783

On a personal note, could we call os-capabilities os-caps?
It's shorter, and I have misspelled capabilities 4 different ways in typing this 
response, which I have now fixed.
> 
> Best,
> -jay
> 
> >> Next steps for the library include:
> >>
> >> * Bringing in other top-level namespaces like disk: or net: and
> >> working with contributors to fill in the capability strings and
> symbols.
> >> * Adding constraints functionality to the library. For instance,
> >> building in information to the os-capabilities interface that would
> >> allow a set of capabilities to be cross-checked for set violations.
> >> As an example, a resource provider having DISK_GB inventory cannot
> >> have *both* the disk:ssd
> >> *and* the disk:hdd capability strings associated with it -- clearly
> >> the disk storage is either SSD or spinning disk.
> >
> > Regards,
> > Daniel
> >
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][networking-ovs-dpdk] conntrack security group driver with ovs-dpdk

2016-08-09 Thread Mooney, Sean K

> -Original Message-
> From: kostiantyn.volenbovs...@swisscom.com
> [mailto:kostiantyn.volenbovs...@swisscom.com]
> Sent: Tuesday, August 9, 2016 12:58 PM
> To: openstack-dev@lists.openstack.org; Mooney, Sean K
> <sean.k.moo...@intel.com>
> Subject: RE: [openstack-dev] [neutron][networking-ovs-dpdk] conntrack
> security group driver with ovs-dpdk
> 
> Hi,
> (sorry for using incorrect threading)
> 
> > > About 2 weeks ago I did some light testing with the conntrack
> > > security group driver and the newly
> > >
> > > Merged upserspace conntrack support in ovs.
> > >
> By 'recently' - whether you mean patch v4
> http://openvswitch.org/pipermail/dev/2016-June/072700.html
> or you used OVS 2.5 itself (which I think includes v2 of the same patch
> series)?
[Mooney, Sean K] I used 
http://openvswitch.org/pipermail/dev/2016-June/072700.html, or specifically
I used the following commit, 
https://github.com/openvswitch/ovs/commit/0c87efe4b5017de4c5ae99e7b9c36e8a6e846669
which is just after userspace conntrack was merged.
> 
> So in general - I am a bit confused about conntrack support in OVS.
> 
> OVS 2.5 release notes http://openvswitch.org/pipermail/announce/2016-
> February/81.html state:
> "This release includes the highly anticipated support for connection
> tracking in the Linux kernel.  This feature makes it possible to
> implement stateful firewalls and will be the basis for future stateful
> features such as NAT and load-balancing.  Work is underway to bring
> connection tracking to the userspace datapath (used by DPDK) and the
> port to Hyper-V."  - in the way that 'work is underway' (=work is
> ongoing) means that a time of OVS 2.5 release the feature was not
> 'classified' as ready?
[Mooney, Sean K] 
In ovs 2.5, only linux kernel conntrack was supported, assuming you had a
4.x kernel that supported it. That means the feature was not available on 
bsd, windows, or with dpdk.

In the upcoming ovs 2.6 release, conntrack support has been added to the 
netdev datapath, which is used with dpdk and on bsd. As far as I am aware, 
windows conntrack support is still
missing, but I may be wrong.

If you are interested, the devstack local.conf I used to test that it functioned 
is available here:
http://paste.openstack.org/show/552434/

I used an OpenStack vm running Ubuntu 16.04 with 2 e1000 interfaces to do the 
testing.
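The relevant part of that local.conf is just selecting the native ovs firewall 
driver; a minimal sketch of the idea (the full file is in the paste above):

  [[post-config|/$Q_PLUGIN_CONF_FILE]]
  [securitygroup]
  firewall_driver = openvswitch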


> 
> 
> BR,
> Konstantin
> 
> 
> 
> > On Sat, Aug 6, 2016 at 8:16 PM, Mooney, Sean K
> <sean.k.moo...@intel.com>
> > wrote:
> > > Hi just a quick fyi,
> > >
> > > About 2 weeks ago I did some light testing with the conntrack
> security
> > > group driver and the newly
> > >
> > > Merged upserspace conntrack support in ovs.
> > >
> > >
> > >
> > > I can confirm that at least form my initial smoke tests where I
> > >
> > > Uses netcat ping and ssh to try and establish connections between
> two
> > > vms the
> > >
> > > Conntrack security group driver appears to function correctly with
> the
> > > userspace connection tracker.
> > >
> > >
> > >
> > > We have not looked at any of the performance yet but assuming it is
> at
> > > an acceptable level I am planning to
> > >
> > > Deprecate the learn action based driver in networking-ovs-dpdk and
> > > remove it once  we have cut the stable newton
> > >
> > > Branch.
> > >
> > >
> > >
> > > We hope to do some rfc 2544 throughput testing to evaluate the
> > > performance sometime mid-September.
> > >
> > > Assuming all goes well I plan on enabling the conntrack based
> security
> > > group driver by default when the
> > >
> > > Networking-ovs-dpdk devstack plugin is loaded. We will also
> evaluate
> > > enabling the security group tests
> > >
> > > In our third party ci to ensure it continues to function correctly
> > > with ovs-dpdk.
> > >
> > >
> > >
> > > Regards
> > >
> > > Seán
> > >
> > >
> > >
> > >
> > >
> > _
> > _
> > >  OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> > > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> > _
> > _
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][neutron] - best way to load 8021q kernel module into cirros

2016-08-06 Thread Mooney, Sean K

From: Kevin Benton [mailto:ke...@benton.pub]
Sent: Friday, August 5, 2016 10:37 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [infra][neutron] - best way to load 8021q kernel 
module into cirros


Hi,

In neutron there is a new feature under active development to allow a VM to 
attach to many networks via its single interface using VLAN tags.
 [Mooney, Sean K] In this case I take it that you want to create a scenario 
test that will cover the vlan-aware-vms work, is that correct?

We would like this to be tested in a scenario test in the gate, but in order to 
do that the guest instance must have support for VLAN tags (the 8021q kernel 
module for Linux VMs). Cirros does not ship with this module so I have a few 
questions.
[Mooney, Sean K] Is there a reason you cannot use an Ubuntu or CentOS cloud 
image for the guest for this test?
Both would require the vm flavor to have at least 256mb of ram, but I think that 
should be fine.

Do any other projects need to load a kernel module for a specific test? If not, 
where would the best place be to store the module so we can load it for that 
test; or, should we download it directly from the Internet (worried about the 
stability of this)?
[Mooney, Sean K] How big is it? Would it fit on a config drive, or could you 
retrieve it via the metadata service?
Looking at https://bugs.launchpad.net/cirros/+bug/1605832 they are suggesting 
using or adding a get-kernel-module command, but if it was small
you could just store it in the metadata service/config drive, or even swift, and 
just curl it locally and run insmod to insert it.
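Something along these lines from inside the guest (the URL is purely 
hypothetical, and I believe cirros ships busybox wget rather than curl):

  # fetch a prebuilt 8021q.ko matching the cirros kernel and load it
  wget -O /tmp/8021q.ko http://example.test/cirros-modules/8021q.ko
  sudo insmod /tmp/8021q.ko
  # or, if it was shipped on a config drive mounted at /mnt/config:
  # sudo insmod /mnt/config/8021q.ko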





Thanks,
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][networking-ovs-dpdk] conntrack security group driver with ovs-dpdk

2016-08-06 Thread Mooney, Sean K
Hi, just a quick fyi.
About 2 weeks ago I did some light testing with the conntrack security group 
driver and the newly
merged userspace conntrack support in ovs.

I can confirm that, at least from my initial smoke tests, where I
used netcat, ping and ssh to try and establish connections between two vms, the
conntrack security group driver appears to function correctly with the 
userspace connection tracker.

We have not looked at the performance yet but, assuming it is at an 
acceptable level, I am planning to
deprecate the learn-action-based driver in networking-ovs-dpdk and remove it 
once we have cut the stable newton
branch.

We hope to do some rfc 2544 throughput testing to evaluate the performance 
sometime mid-September.
Assuming all goes well, I plan on enabling the conntrack-based security group 
driver by default when the
networking-ovs-dpdk devstack plugin is loaded. We will also evaluate enabling 
the security group tests
in our third party ci to ensure it continues to function correctly with 
ovs-dpdk.

Regards
Seán

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-08-01 Thread Mooney, Sean K


> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Monday, August 1, 2016 1:09 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage
> Capabilities with ResourceProvider
> 
> On 07/31/2016 10:03 PM, Alex Xu wrote:
> > 2016-07-28 22:31 GMT+08:00 Jay Pipes <jaypi...@gmail.com
> > <mailto:jaypi...@gmail.com>>:
> >
> > On 07/20/2016 11:25 PM, Alex Xu wrote:
> >
> > One more for end users: Capabilities Discovery API, it should be
> > 'GET
> > /resource_providers/tags'. Or a proxy API from nova to the placement
> > API?
> >
> >
> > I would imagine that it should be a `GET
> > /resource-providers/{uuid}/capabilities` call on the placement API,
> > only visible to cloud administrators.
> >
> > When the end-user request a capability which doesn't support by the
> > cloud, the end-user needs to wait for a moment after sent boot request
> > due to we use async call in nova, then he get an instance with error
> > status. The error info is no valid host. If this is the only way for
> > user to discover the capabilities in the cloud, that sounds bad. So we
> > need an API for the end-user to discover the Capabilities which are
> > supported in the cloud, the end-user can query this API before send
> > boot request.
> 
> Ah, yes, totally agreed. I'm not sure if that is something that we'd want to 
> put as a
> normal-end-user-callable API endpoint in the placement API, but certainly we
> could do something like this in the placement API:
> 
>   GET /capabilities
> 
> Would return a list of capability strings representing the distinct set of 
> capabilities
> that any resource provider in the system exposed. It would not give the user 
> any
> counts of resource providers that expose the capabilities, nor would it 
> provide
> any information regarding which resource providers had any available inventory
> for a consumer to use.
> 
> Nova could then either have a proxy API call that would add the normal 
> end-user
> interface to that information or completely hide it from end users via the 
> existing
> flavors interface?
[Mooney, Sean K] The main drawback with that, as an end user, is that you cannot 
tell which combinations of capabilities will
work together. For example, a cloud might provide SSDs and GPUs, but they may 
not be provided on the
same host, or may no longer be available on the same host, though in the latter 
case no valid host would be the expected behavior.
That said, this can be somewhat mitigated by operators creating flavors that 
will work with their infra, which is a reasonable requirement
for us to ask them to fulfill, but tenants could still upload images with 
capability requests, or indeed craft boot requests, that would still fail.
You would basically need to return a list of capability adjacency lists so 
that the end user could build the matrix of what features can be requested 
together.
That would potentially be computationally intensive in the api, but mysql should 
be able to compute it efficiently. 
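Purely as an illustration of the shape such a response could take (this is 
hypothetical, nothing like it exists today, and the capability names are made 
up):

  GET /capabilities
  {
      "capabilities": {
          "HW_CPU_X86_AVX": ["HW_STORAGE_SSD"],
          "HW_STORAGE_SSD": ["HW_CPU_X86_AVX"],
          "HW_GPU": []
      }
  }

where each capability maps to the other capabilities it can currently be 
requested together with on at least one host.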
> 
> Thoughts?
> 
> Best,
> -jay
> 
> __
> 
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] libvirt/qemu source install plugin.

2016-07-26 Thread Mooney, Sean K
Hi, I was not aware of the 
plugin tar installer, but it would not have been useful in my case as 
I needed to build from a specific git commit id, not release tars.

For my use case I also need the ability to apply patches automatically, to 
evaluate changes
to qemu and libvirt before they are merged upstream.

It would be good to see if we could combine the two, though, to avoid duplicating 
the
code to build and install libvirt and qemu.

If there is no objection, I think it still makes sense to create an 
openstack/devstack-plugin-libvirt-qemu repo, as the 
devstack-plugin-tar-installer
will explicitly be using tar files, not git repos.


> -Original Message-
> From: Michele Paolino [mailto:m.paol...@virtualopensystems.com]
> Sent: Tuesday, July 26, 2016 1:40 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Cc: Kashyap Chamarthy <kcham...@redhat.com>;
> mzoel...@linux.vnet.ibm.com; Mooney, Sean K <sean.k.moo...@intel.com>
> Subject: Re: [openstack-dev] [devstack] libvirt/qemu source install
> plugin.
> 
> All,
> 
> the purpose of the devstack-plugin-tar-installer[1] is exactly what you
> mentioned: a tool needed to test experimental features in libvirt and
> qemu. I am planning to release a new version next week, addressing some
> of the comments received, however new testers/developers are more than
> welcome! Sean, maybe you can have a look at the code and, if you are
> interested, we can discuss how to proceed further.
> 
> I also think it would be nice if we can join all together the efforts
> on this project[2], as I believe this is an interesting feature for
> devstack. Maybe there is also a way to integrate this work with the
> gate Markus was mentioning.
> 
> Thank you Kashyap for pointing this out!
> 
> Regards,
> 
> [1]https://review.openstack.org/#/c/313568/
> [2]https://review.openstack.org/#/q/project:openstack/devstack-plugin-
> tar-installer
> 
> On 07/26/2016 01:13 PM, Kashyap Chamarthy wrote:
> > On Thu, Jul 21, 2016 at 02:25:46PM +0200, Markus Zoeller wrote:
> >> On 20.07.2016 22:38, Mooney, Sean K wrote:
> >>> Hi
> >>> I recently had the need to test a feature (vhost-user reconnect)
> >>> that was commit to the qemu source tree a few weeks ago. As there
> >>> has been no release since then I needed to build from source so to
> >>> that end I wrote a small devstack plugin to do just that.
> >>>
> >>> I was thinking of opening a review to create a new repo to host the
> >>> plugin under The openstack namespace
> >>> (openstack/devstack-plugin-libvirt-qemu) but before I do I wanted
> to
> >>> ask if others are interested In a devstack plugin that just
> compiles
> >>> and installs qemu and Libvirt?
> >>>
> >>> Regards Sean.
> >>>
> >> tonby and I try to make the devstack plugin "additional package
> repos"
> >> (apr) work [1]. What you did is within the scope of that project. We
> >> also have an experimental job
> >> "gate-tempest-dsvm-nova-libvirt-kvm-apr"[2].  The last time I worked
> >> on this I wasn't able to create installable *.deb packages from
> >> libvirt + qemu source code. Other work items did then get more
> >> important and I had to pause the work on that.  I think we can work
> >> together to combine our efforts there.
> > NB: There's also in-progress work to allow configuring libvirt / QEMU
> > from source tar balls, as an external DevStack plugin:
> >
> >  https://review.openstack.org/#/c/313568/ -- Plugin to setup
> >  libvirt/QEMU from tar releases
> >
> > It was originally proposed (now abandoned, in favour of the above) as
> > a patch to DevStack proper, but was abandoned, as it was suggested to
> > make it as external plugin:
> >
> >  https://review.openstack.org/#/c/108714/
> >
> >> References:
> >> [1]
> >> https://github.com/openstack/devstack-plugin-additional-pkg-repos/
> >> [2]
> >> https://github.com/openstack-infra/project-
> config/blob/master/jenkins
> >> /jobs/devstack-gate.yaml#L565-L595
> >>
> 
> --
> Michele Paolino


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] libvirt/qemu source install plugin.

2016-07-20 Thread Mooney, Sean K
Hi
I recently had the need to test a feature (vhost-user reconnect) that was 
committed to the
qemu source tree a few weeks ago. As there has been no release since then, I 
needed
to build from source, so to that end I wrote a small devstack plugin to do just 
that.

I was thinking of opening a review to create a new repo to host the plugin under
the openstack namespace (openstack/devstack-plugin-libvirt-qemu), but before
I do, I wanted to ask whether others are interested in a devstack plugin that 
just compiles
and installs qemu and libvirt.
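Usage would be the standard devstack plugin mechanism, e.g. in local.conf (the 
repo name/URL is hypothetical until the review is created):

  [[local|localrc]]
  enable_plugin devstack-plugin-libvirt-qemu \
      https://git.openstack.org/openstack/devstack-plugin-libvirt-qemu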

Regards
Sean.





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

2016-07-20 Thread Mooney, Sean K


> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Wednesday, July 20, 2016 7:16 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage
> Capabilities with ResourceProvider
> 
> On 07/13/2016 01:37 PM, Ed Leafe wrote:
> > On Jul 11, 2016, at 6:08 AM, Alex Xu <sou...@gmail.com> wrote:
> >
> >> For example, the capabilities can be defined as:
> >>
> >> COMPUTE_HW_CAP_CPU_AVX
> >> COMPUTE_HW_CAP_CPU_SSE
> >> 
> >> COMPUTE_HV_CAP_LIVE_MIGRATION
> >> COMPUTE_HV_CAP_LIVE_SNAPSHOT
> >> 
> >>
> >> ( The COMPUTE means this is coming from Nova. HW means this is
> >> hardware related Capabilities. HV means this is  capabilities of
> >> Hypervisor. But the catalog of Capabilities can be discussed
> >> separated. This propose focus on the  ResourceTags. We also have
> >> another idea about not using 'PREFIX' to manage the Tags. We can add
> >> attributes to the  Tags. Then we have more control on the Tags. This
> >> will describe separately in the bottom. )
> >
> > I was ready to start ranting about using horribly mangled names to
> represent data, and then saw your comment about attributes for tags.
> Yes, a thousand times yes to attributes! There can be several
> standards, such as ‘compute’ or ‘networking’ that we use for some basic
> cross-cloud compatibility, but making them flexible is a must for
> adoption.
> 
> I disagree :) Adoption -- at least interoperable cloud adoption -- of
> this functionality will likely be hindered by super-flexible
> description of capabilities. I think having a set of "standard"
> capabilities that can be counted on to be cross-OpenStack-cloud
> compatible and a set of "dynamic" capabilities that are custom to a
> deployment would be a good thing to do.

[Mooney, Sean K]
I know CIM (http://www.dmtf.org/standards/cim) brings back bad memories for
many on the nova team, but if we are to use standard names we should probably
assess whether there are existing standards we could adopt instead of defining
our own standard names in nova for the resources.
For example,
http://schemas.dmtf.org/wbem/cim-html/2/CIM_ProcessorAllocationSettingData.html
defines names for the different instruction set extensions; for example, AVX
is DMTF:x86:AVX.
Some work has also been done in glance to allow importing CIM metadata from
OVF files:
https://specs.openstack.org/openstack/glance-specs/specs/mitaka/implemented/cim-namespace-metadata-definitions.html

While I don't think using the full CIM information model is useful in this
case, using the names would be valuable from an interoperability point of
view: we would not only have standard names in openstack, but those names
would conform to an existing standard.

We could still allow custom attributes, but I see value in standardizing what
can be standardized.


> 
> Best,
> -jay
> 
> > I can update the qualitative request spec to add ResourceProviderTags
> as a possible implementation.
> 


Re: [openstack-dev] [puppet][networking-ovs-dpdk] Request to add puppet-dpdk module

2016-07-08 Thread Mooney, Sean K
Is there a reason that you are starting a new project instead of contributing
to the networking-ovs-dpdk puppet module?

networking-ovs-dpdk was created to host both the integration code with
neutron and deployment-tool support for deploying ovs with dpdk across
different tools.

Currently we support devstack, and we have developed a puppet module. The
puppet module was developed with the express intention of integrating it with
fuel, packstack and tripleo at a later date. It was created to be a reusable
module for other tools to use and build on top of.

I will be working on kolla support upstream this cycle, with
networking-ovs-dpdk providing source-install support in addition to the
binary-install support that will be submitted to kolla.

A fuel plugin (developed in opnfv) was planned to be added to this repo, but
that has now been abandoned as support is being added to fuel core instead.

If there is a good technical reason for a separate repo then that is ok, but
otherwise it seems wasteful to start another project to develop a puppet
module to install ovs with dpdk.

Are there any features missing from the networking-ovs-dpdk puppet module
that you require? It should be noted that we will be adding support for
binary installs from package managers and persistent installs (auto-loading
the kernel driver, persistent binding of nics) this cycle, but if you have
any other feature gaps we would be happy to hear about them.

Regards
Sean.
 



> -Original Message-
> From: Saravanan KR [mailto:skram...@redhat.com]
> Sent: Friday, July 08, 2016 8:33 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Emilien Macchi; Jaganathan Palanisamy; Vijay Chundury
> Subject: Re: [openstack-dev] [puppet] Request to add puppet-dpdk module
> 
> Also, there is a repository networking-ovs-dpdk[1] for all the dpdk related
> changes including puppet. We considered both (puppet-vswitch and networking-
> ovs-dpdk).
> 
> And we had chat with Emilien about this. His suggestion is to have it as a 
> separate
> project to make the modules cleaner like 'puppet-dpdk'.
> 
> Regards,
> Saravanan KR
> 
> [1] https://github.com/openstack/networking-ovs-dpdk
> 
> On Fri, Jul 8, 2016 at 2:36 AM, Russell Bryant  wrote:
> >
> >
> > On Thu, Jul 7, 2016 at 5:12 AM, Saravanan KR  wrote:
> >>
> >> Hello,
> >>
> >> We are working on blueprint [1] to integrate DPDK with tripleo. In
> >> the process, we are planning to add a new puppet module "puppet-dpdk"
> >> for the required puppet changes.
> >>
> >> The initial version of the repository is at github [2]. Note that the
> >> changes are not complete yet. It is in progress.
> >>
> >> Please let us know your views on including this new module.
> >>
> >> Regards,
> >> Saravanan KR
> >>
> >> [1] https://blueprints.launchpad.net/tripleo/+spec/tripleo-ovs-dpdk
> >> [2] https://github.com/krsacme/puppet-dpdk
> >
> >
> > I took a quick look at Emilien's request.  In general, including this
> > functionality in the puppet openstack project makes sense to me.
> >
> > It looks like this is installing and configuring openvswitch-dpdk.
> > Have you considered integrating DPDK awareness into the existing
> > puppet-vswitch that configures openvswitch?  Why is a separate puppet-dpdk
> needed?
> >
> > --
> > Russell Bryant
> >
> >



Re: [openstack-dev] [neutron][ovs] The way we deal with MTU

2016-07-08 Thread Mooney, Sean K


From: Armando M. [mailto:arma...@gmail.com]
Sent: Tuesday, June 14, 2016 12:50 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron][ovs] The way we deal with MTU



On 13 June 2016 at 22:22, Terry Wilson <twil...@redhat.com> wrote:
> So basically, as long as we try to plug ports with different MTUs into the 
> same bridge, we are utilizing a bug in Open vSwitch, that may break us any 
> time.
>
> I guess our alternatives are:
> - either redesign bridge setup for openvswitch to e.g. maintain a bridge per 
> network;
> - or talk to ovs folks on whether they may support that for us.
>
> I understand the former option is too scary. It opens lots of questions, 
> including upgrade impact since it will obviously introduce a dataplane 
> downtime. That would be a huge shift in paradigm, probably too huge to 
> swallow. The latter option may not fly with vswitch folks. Any better ideas?

I know I've heard from people who'd like to be able to support both
DPDK and non-DPDK workloads on the same node. The current
implementation with a single br-int (and thus datapath) makes that
impossible to pull of with good performance. So there may be other
reasons to consider introducing multiple isolated bridges: MTUs,
datapath_types, etc.

[Mooney, Sean K]
I just noticed this now, but I wanted to share some of the rationale as to
why we explicitly do not support running both datapaths on the same host
today. We experimented with using both datapaths during the juno cycle when
we were first upstreaming support for ovs-dpdk. To enable both datapaths
efficiently, we determined that you would have to duplicate all bridges for
each datapath; otherwise there is a significant penalty that degrades the
performance of both datapaths.

The only way to interconnect bridges of different datapaths in ovs is to use
veth pairs. Even in the case of the kernel datapath, the use of veth pairs is
a significant performance hit compared to patch ports. Adding a veth
interface to the dpdk datapath is very costly from a dpdk perspective, as it
takes significantly more cpu cycles to rx/tx packets on veth interfaces than
on dpdk interfaces.

What we determined at the time was that, to make this configuration work
effectively, you would have to have two copies of every bridge and either
modify the existing agent significantly or run two copies of the ovs agent on
the same host. If you use two agents on the same host with two config files
specifying different bridge names, e.g. br-int and br-int-dpdk, br-tun and
br-tun-dpdk, and br-ex and br-ex-dpdk, it should be possible to
support today.
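To make that concrete, here is a rough sketch of what the second agent
instance could look like; the file name, bridge names and exact option set
are illustrative assumptions, not a tested configuration:

# second ovs agent dedicated to the dpdk datapath (sketch)
cat > /etc/neutron/plugins/ml2/openvswitch_agent_dpdk.ini <<'EOF'
[ovs]
integration_bridge = br-int-dpdk
tunnel_bridge = br-tun-dpdk
datapath_type = netdev
EOF
neutron-openvswitch-agent --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/openvswitch_agent_dpdk.ini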

You might need to make minor changes to the agent and server to ensure the
agents are both reported separately in the db, and you would need to provide
some mechanism to request the use of kernel vhost or vhost-user.
Unfortunately there is no construct currently in neutron that can be used
directly for that, and the nova scheduler also has no idea of the vif-types
or networking backend supported on each compute host.

The scheduler side could be addressed by reusing the resource provider
framework that jay pipes is working on. In essence, each compute node would
be a provider of vif-types. When you boot a vm you would also pass a desired
vif-type, and when nova is scheduling it would filter to only hosts of that
type. When nova asks neutron to bind the port it would pass the requested
vif-type to neutron, which would then use it for the port binding. Ian Wells
and I proposed a mechanism for this over the last few cycles that should be
possible to integrate cleanly with os-vif once nova and neutron have both
adopted its use:
https://review.openstack.org/#/c/190917/7/specs/mitaka/approved/nova-neutron-binding-negotiation.rst

While requesting a vif-type is somewhat of a leaky abstraction, it does not
mean that you will know what the neutron backend is. A vhost-user interface,
for example, could be ovs-dpdk, vpp, snabb switch or ovs-fastpath. So while
it leaks the capability to provide a vhost-user interface, it does not leak
the implementation, which still maintains some level of abstraction and
flexibility for an operator. A tenant cannot detect, other than by
performance, whether they are using vhost-user or kernel vhost, since all
they see is a virtio-net interface in either case.

If there is interest in supporting both datapaths concurrently, and people
are open to having multiple copies of the ovs l2 (and possibly l3/dhcp)
agents on the same host, then I would be happy to help with that effort, but
the added complexity and operator overhead of managing two copies of the
neutron agents on each host is why we have not tried to enable this
configuration to date.


Incidentally this is something that Nova is a

Re: [openstack-dev] [kolla][ironic] My thoughts on Kolla + BiFrost integration

2016-06-30 Thread Mooney, Sean K


> -Original Message-
> From: Steven Dake (stdake) [mailto:std...@cisco.com]
> Sent: Monday, June 27, 2016 9:21 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [kolla][ironic] My thoughts on Kolla +
> BiFrost integration
> 
> 
> 
> On 6/27/16, 11:19 AM, "Devananda van der Veen"
> <devananda@gmail.com>
> wrote:
> 
> >At a quick glance, this sequence diagram matches what I
> >envisioned/expected.
> >
> >I'd like to suggest a few additional steps be called out, however I'm
> >not sure how to edit this so I'll write them here.
> >
> >
> >As part of the installation of Ironic, and assuming this is done
> >through Bifrost, the Actor should configure Bifrost for their
> >particular network environment. For instance: what eth device is
> >connected to the IPMI network; what IP ranges can Bifrost assign to
> >physical servers; and so on.
> >
> >There are a lot of other options during the install that can be
> >changed, but the network config is the most important. Full defaults
> >for this roles' config options are here:
> >
> >https://github.com/openstack/bifrost/blob/master/playbooks/roles/bifro
> s
> >t-i
> >ronic-install/defaults/main.yml
> >
> >and documentation is here:
> >
> >https://github.com/openstack/bifrost/tree/master/playbooks/roles/bifro
> s
> >t-i
> >ronic-install
> >
> >
> >
> >Immediately before "Ironic PXE boots..." step, the Actor must perform
> >an action to "enroll" hardware (the "deployment targets") in Ironic.
> >This could be done in several ways: passing a YAML file to Bifrost;
> >using the Ironic CLI; or something else.
> >
> >
> >"Ironic reports success to the bootstrap operation" is ambiguous.
> >Ironic does not currently support notifications, so, to learn the
> >status of the deployments, you will need to poll the Ironic API (eg,
> >"ironic node-list").
> >
> 
> Great,
> 
> Thanks for the feedback.  I'll integrate your changes into the sequence
> diagram when I have a free hour or so - whenever that is :)
> 
> Regards
> -steve
[Mooney, Sean K] I agree with most of devananda's points and had come to
similar conclusions.

At a high level I think the workflow from zero to cloud would be as follows,
assuming you have one linux system:
- clone http://github.com/openstack/kolla && cd kolla
- tools/kolla-host build-host-deploy
  This will install ansible if not installed, then invoke a playbook to
  install all build dependencies, generate kolla-build.conf, passwords.yml
  and globals.yml, and install the kolla python package.
- configure kolla-build.conf as required
- tools/build.py or kolla-build to build the images
- configure globals.yml and/or a bifrost-specific file
  This would involve specifying a file that can be used with the bifrost
  dynamic inventory, configuring the network interface for bifrost to use,
  enabling ssh-key generation or supplying one to use as the key when
  connecting to the servers post-deploy, and configuring diskimage-builder
  options or supplying a path to a file on the system to use as your os
  image.
- tools/kolla-host deploy-bifrost
  Deploys the bifrost container, copies images/keys, then bootstraps bifrost
  and starts the services.
- tools/kolla-host deploy-servers
  Invokes bifrost enroll and deploy dynamic, then polls until all servers
  are provisioned or a server fails.
- tools/kolla-host bootstrap-servers
  Installs all kolla deploy dependencies (docker etc.). This will also
  optionally do things such as configuring hugepages, cpu isolation,
  firewall settings, or any other platform-level config, for example
  applying labels to ceph disks. This role will reboot the remote server at
  the end if required, e.g. after installing the wily kernel on Ubuntu
  14.04.
- configure globals.yml as normal
- tools/kolla-ansible prechecks (this should now pass)
- tools/kolla-ansible deploy
- profit

I think this largely agrees with the diagram you proposed but has a couple of 
extra steps/details.

> 
> >
> >
> >Cheers,
> >--Devananda
> >
> >On 06/23/2016 06:54 PM, Steven Dake (stdake) wrote:
> >> Hey folks,
> >>
> >> I created the following sequence diagram to show my thinking on
> >>Ironic  integration.  I recognize some internals of the recently
> >>merged bifrost changes  are not represented in this diagram.  I would
> >>like to see a bootstrap action do  all of the necessary things to
> >>bring up BiFrost in a container using Sean's WIP  K

Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-15 Thread Mooney, Sean K


> -Original Message-
> From: Peters, Rawlin [mailto:rawlin.pet...@hpe.com]
> Sent: Wednesday, June 15, 2016 7:02 PM
> To: Kevin Benton <ke...@benton.pub>
> Cc: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability
> for wiring trunk ports
> 
> On Tuesday, June 14, 2016 6:27 PM, Kevin Benton (ke...@benton.pub)
> wrote:
> > >which generates an arbitrary name
> >
> > I'm not a fan of this approach because it requires coordinated
> assumptions.
> > With the OVS hybrid plug strategy we have to make guesses on the agent
> > side about the presence of bridges with specific names that we never
> > explicitly requested and that we were never explicitly told about. So
> > we end up with code like [1] that is looking for a particular end of a
> > veth pair it just hopes is there so the rules have an effect.
[Mooney, Sean K] I really would like to avoid encoding knowledge of how to
generate the names the same way in both neutron and os-vif/nova, or having
any other special casing to figure out the bridge or interface names.

> 
> I don't think this should be viewed as a downside of Strategy 1 because,
> at least when we use patch port pairs, we can easily get the peer name
> from the port on br-int, then use the equivalent of "ovs-vsctl iface-to-
> br "
> to get the name of the bridge. If we allow supporting veth pairs to
> implement the subports, then getting the arbitrary trunk bridge/veth
> names isn't as trivial.
> 
> This also brings up the question: do we even need to support veth pairs
> over patch port pairs anymore? Are there any distros out there that
> support openstack but not OVS patch ports?
[Mooney, Sean K] That is a separate discussion. In general I'm in favor of
deprecating support for the veth interconnect with ovs and removing it in
ocata. I believe it was originally added in juno for centos and suse, as they
did not support ovs 2.0 or their kernel ovs module did not support patch
ports. As far as I am aware, there is no major linux distribution that lacks
patch-port support in ovs while also meeting the minimum python version of
2.7 required by OpenStack, so this functionality could safely be removed.

> 
> >
> > >it seems that the LinuxBridge implementation can simply use an L2
> > >agent extension for creating the vlan interfaces for the subports
> >
> > LinuxBridge implementation is the same regardless of the strategy for
> > OVS. The whole reason we have to come up with these alternative
> > approaches for OVS is because we can't use the obvious architecture of
> > letting it plug into the integration bridge due to VLANs already being
> > used for network isolation. I'm not sure pushing complexity out to
> > os-vif to deal with this is a great long-term strategy.
> 
> The complexity we'd be pushing out to os-vif is not much worse than the
> current complexity of the hybrid_ovs strategy already in place today.
[Mooney, Sean K] I don't think strategy 1 is the correct course of action
long-term with the trunk bridge approach. I honestly think that patch port
creation should be the responsibility of the ovs agent alone.

I think the DRY principle applies in this respect also. The ovs agent will be
required to add or remove patch ports after the vm is booted if subports are
added/removed from the trunk port. I don't think it makes sense to write the
code to do that both in the ovs agent and separately in os-vif.

Having os-vif simply create the bridge if it does not exist and add the port
to it is a much simpler solution in that respect, as you can reuse the patch
port code that is already in neutron and not duplicate it in os-vif:
https://github.com/openstack/neutron/blob/master/neutron/agent/common/ovs_lib.py#L368-L371
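In ovs-vsctl terms, the division of labour I have in mind is roughly the
following; the bridge and port names are illustrative, not the names either
project would actually generate:

# os-vif/nova side: ensure the trunk bridge exists and plug the vm port
ovs-vsctl --may-exist add-br tbr-example
ovs-vsctl --may-exist add-port tbr-example tap-example

# ovs agent side: wire the trunk bridge to br-int with one patch-port
# pair per subport/network
ovs-vsctl --may-exist add-port tbr-example tpt-example \
    -- set interface tpt-example type=patch options:peer=tpi-example
ovs-vsctl --may-exist add-port br-int tpi-example \
    -- set interface tpi-example type=patch options:peer=tpt-example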


> 
> >
> > >Also, we didn’t make the OVS agent monitor for new linux bridges in
> > >the hybrid_ovs strategy so that Neutron could be responsible for
> > >creating the veth pair.
> >
> > Linux Bridges are outside of the domain of OVS and even its agent. The
> > L2 agent doesn't actually do anything with the bridge itself, it just
> > needs a veth device it can put iptables rules on. That's in contrast
> > to these new OVS bridges that we will be managing rules for, creating
> > additional patch ports, etc.
> 
> I wouldn't say linux bridges are totally outside of its domain because
> it relies on them for security groups. Rather than relying on an
> arbitrary naming convention between Neutron and Nova, we could've
> implemented monitoring for new linux bridges to create veth pairs and
> firewall rules on. I'm glad we didn't, because that logic is specific to
>

Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-14 Thread Mooney, Sean K
Well, in terms of the ovs plugin change for strategy 2, that requires a
single function call
here: https://github.com/openstack/os-vif/blob/master/vif_plug_ovs/ovs.py#L109
and here:
https://github.com/openstack/os-vif/blob/master/vif_plug_ovs/ovs.py#L84
and one new function in
https://github.com/openstack/os-vif/blob/master/vif_plug_ovs/linux_net.py
With unit tests it is probably <100 lines of code.

For strategy 1 we would need to do a little more work, as we would have to
pass two bridges, but as you said, creating the bridge if it does not exist
is needed in either case.


From: Kevin Benton [mailto:ke...@benton.pub]
Sent: Tuesday, June 14, 2016 10:49 AM
To: Daniel P. Berrange 
Cc: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for 
wiring trunk ports


Yep, and both strategies depend on that "create if not exists" logic so it 
makes sense to at least get that implemented while we continue to argue about 
which strategy to use.
On Jun 14, 2016 02:43, "Daniel P. Berrange" wrote:
On Tue, Jun 14, 2016 at 02:35:57AM -0700, Kevin Benton wrote:
> In strategy 2 we just pass 1 bridge name to Nova. That's the one that is
> ensures is created and plumbs the VM to. Since it's not responsible for
> patch ports it doesn't need to know anything about the other bridge.

Ok, so we're already passing that bridge name - all we need change is
make sure it is actuall created if it doesn't already exist ? If so
that sounds simple enough to add to os-vif - we already have exactly
the same logic for the linux_bridge plugin


Regards,
Daniel
--
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|


Re: [openstack-dev] [Neutron][os-vif] Expanding vif capability for wiring trunk ports

2016-06-13 Thread Mooney, Sean K


> -Original Message-
> From: Daniel P. Berrange [mailto:berra...@redhat.com]
> Sent: Monday, June 13, 2016 1:12 PM
> To: Armando M. <arma...@gmail.com>
> Cc: Carl Baldwin <c...@ecbaldwin.net>; OpenStack Development Mailing
> List <openstack-dev@lists.openstack.org>; Jay Pipes
> <jaypi...@gmail.com>; Maxime Leroy <maxime.le...@6wind.com>; Moshe Levi
> <mosh...@mellanox.com>; Russell Bryant <rbry...@redhat.com>; sahid
> <sahid.ferdja...@redhat.com>; Mooney, Sean K <sean.k.moo...@intel.com>
> Subject: Re: [Neutron][os-vif] Expanding vif capability for wiring trunk
> ports
> 
> On Mon, Jun 13, 2016 at 02:08:30PM +0200, Armando M. wrote:
> > On 13 June 2016 at 10:35, Daniel P. Berrange <berra...@redhat.com>
> wrote:
> >
> > > On Thu, Jun 09, 2016 at 05:31:13PM -0600, Carl Baldwin wrote:
> > > > Hi,
> > > >
> > > > You may or may not be aware of the vlan-aware-vms effort [1] in
> > > > Neutron.  If not, there is a spec and a fair number of patches in
> > > > progress for this.  Essentially, the goal is to allow a VM to
> > > > connect to multiple Neutron networks by tagging traffic on a
> > > > single port with VLAN tags.
> > > >
> > > > This effort will have some effect on vif plugging because the
> > > > datapath will include some changes that will effect how vif
> > > > plugging is done today.
> > > >
> > > > The design proposal for trunk ports with OVS adds a new bridge for
> > > > each trunk port.  This bridge will demux the traffic and then
> > > > connect to br-int with patch ports for each of the networks.
> > > > Rawlin Peters has some ideas for expanding the vif capability to
> > > > include this wiring.
> > > >
> > > > There is also a proposal for connecting to linux bridges by using
> > > > kernel vlan interfaces.
> > > >
> > > > This effort is pretty important to Neutron in the Newton
> > > > timeframe.  I wanted to send this out to start rounding up the
> > > > reviewers and other participants we need to see how we can start
> > > > putting together a plan for nova integration of this feature (via
> os-vif?).
> > >
> > > I've not taken a look at the proposal, but on the timing side of
> > > things it is really way to late to start this email thread asking
> > > for design input from os-vif or nova. We're way past the spec
> > > proposal deadline for Nova in the Newton cycle, so nothing is going
> > > to happen until the Ocata cycle no matter what Neutron want  in
> Newton.
> >
> >
> > For sake of clarity, does this mean that the management of the os-vif
> > project matches exactly Nova's, e.g. same deadlines and processes
> > apply, even though the core team and its release model are different
> from Nova's?
> > I may have erroneously implied that it wasn't, also from past talks I
> > had with johnthetubaguy.
> 
> No, we don't intend to force ourselves to only release at milestones
> like nova does. We'll release the os-vif library whenever there is new
> functionality in its code that we need to make available to
> nova/neutron.
> This could be as frequently as once every few weeks.
[Mooney, Sean K]
I have been tracking and contributing to the vlan-aware-vms work in neutron
since the Vancouver summit, so I am quite familiar with what would have to be
modified to support the vlan trunking. Provided the modifications do not
delay the conversion to os-vif in nova this cycle, I would be happy to review
and help develop the code to support this use case.

In the ovs case at least, which we have been discussing here:
https://review.openstack.org/#/c/318317/4/doc/source/devref/openvswitch_agent.rst
no changes should be required for nova, and all changes would be confined to
the ovs plugin. In essence: check if the bridge exists, if not create it with
the port id, then plug as normal.

Again, though, I do agree that we should focus on completing the initial nova
integration, but I don't think that means we have to exclude other feature
enhancements as long as they do not prevent us achieving that goal.


> 
> Regards,
> Daniel
> --
> |: http://berrange.com  -o-
> http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org  -o- http://virt-
> manager.org :|
> |: http://autobuild.org   -o-
> http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-
> vnc :|


Re: [openstack-dev] [nova] [scheduler] New filter: AggregateInstanceAffinityFilter

2016-06-02 Thread Mooney, Sean K


> -Original Message-
> From: Alonso Hernandez, Rodolfo
> [mailto:rodolfo.alonso.hernan...@intel.com]
> Sent: Thursday, June 2, 2016 6:00 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [nova] [scheduler] New filter:
> AggregateInstanceAffinityFilter
> 
> Hello:
> 
> For the last two cycles we have tried to introduce a new filter to be
> able to interact better with the aggregates, using the metadata to
> accept or reject an instance depending on the flavor:
>   https://review.openstack.org/#/c/189279/
> 
> This filter was reverted and we agreed to present a new one, being
> backwards compatible with AggregateInstanceExtraSpecsFilter and adding
> more flexibility to the original filter. We have this proposal and we
> ask you to review it:
>   https://review.openstack.org/#/c/314097/
> 
> Regards.
> 
> PD: I know the non-priority feature spec freeze is today and that's why
> I'm asking you to take a look at it.
> 

[Mooney, Sean K] Looks like you forgot to disable the automatic footer
now that you have your new laptop.
For the rest of the list, please ignore the footer.
But reviews would be welcome.

> --
> Intel Research and Development Ireland Limited Registered in Ireland
> Registered Office: Collinstown Industrial Park, Leixlip, County Kildare
> Registered Number: 308263
> 
> 
> This e-mail and any attachments may contain confidential material for
> the sole use of the intended recipient(s). Any review or distribution by
> others is strictly prohibited. If you are not the intended recipient,
> please contact the sender and delete all copies.
> 


[openstack-dev] [kolla] [bifrost] bifrost container poc update

2016-05-13 Thread Mooney, Sean K
Hi, this is an update on where I am with my PoC of creating a bifrost
container for kolla. As of Wednesday evening I have reached my v0 PoC goal.

That goal was to demonstrate that it is easy to run bifrost in a container
with little to no changes. This is further broken down as follows:

- Create a PoC patch to split the bifrost ironic install role into install,
  bootstrap and start phases
- Use kolla's build.py to build a container with all bifrost/ironic
  dependencies installed, by running the bifrost install phase as part of
  the docker build
- Spawn an instance of the resulting container, then bootstrap and start
  ironic by running the bifrost install playbook with only the bootstrap
  and start phases enabled
- Enroll a physical node using the enroll-dynamic playbook
- Deploy the default OS image to the physical node using the deploy-dynamic
  playbook

The attached file assumes basic knowledge of how to build images with kolla
and documents the commands I ran to perform this PoC.

Limitations of v0:

- Only tested with the centos source build
- Fat container: uses systemd as the init system in the container (yes, this
  works fine)
- Requires --privileged (needed for networking and mounting loopback devices)
- Requires --net=host (as an infrastructure container on the undercloud this
  should be ok)
- Requires /sys/fs/cgroup to be bind mounted for systemd
- Requires /dev to be bind mounted to allow building the baremetal image
- No integration with the kolla deploy playbooks or a script to automate
  deployment (fix in v1)
- No support for external config (fix in v1)
- I wrote it in 12-18 hours, so no comments or docs except the attachment
- Ironic services do not restart automatically on restarting the container
  (rerun the install playbook with skip_bootstrap=true and skip_install=true
  to fix)

Next steps:

Define the scope of v1:

- Should I open a blueprint/spec/bug in kolla and/or bifrost?
- Kolla spec to cover container and ansible integration
  (https://github.com/SeanMooney/kolla/commit/bbbfc573dcd8e20ad912dedeecc0b3994832925f)
  - Support builds of the bifrost container, both source and binary, with
    all 4 base OSes
  - Support baremetal image customization via external config (os, packages)
    and a bring-your-own model to supply your own image
  - Integrate with the kolla deploy playbooks
  - Add bifrost.rst to the kolla docs
  - Thin containers or supervisord as a v2
- Bifrost spec to cover the ironic install decomposition
  (https://github.com/SeanMooney/bifrost/commit/e223f4fe73871b76ce87999470a1efc43862671e)
  - Split the ironic install playbook into 3 phases (install, bootstrap,
    start)
    - Possible solutions: skip_* as in the PoC, separate roles, or tags
  - Replace the use of sed for fixing the hostname, as it fails in a
    container
    (https://github.com/openstack/bifrost/blob/master/playbooks/roles/bifrost-ironic-install/tasks/main.yml#L117-L123)
  - Introduce install_dib to control whether disk image builder is
    installed, instead of checking whether the image should be built
- Testing: a kolla ci job? Test in bifrost ci?

So this is the point where I pause for feedback:
Does this sound like a reasonable next step?
Am I going in the wrong direction with this?
If I create a spec, or when I submit the code for review, would anyone in
particular like me to add them as reviewers?

Regards
Seán


# clone the kolla bifrost poc
git clone https://github.com/SeanMooney/kolla.git
cd kolla && git checkout bifrost
# set up kolla dependencies as normal
# generate the kolla build config
tox -e genconfig

# modify kolla-build.conf as follows:
# - set install_type to source
# - update the bifrost-base section as follows:

[bifrost-base]

#
# From kolla
#

# Source location type (string value)
# Allowed values: local, git, url
type = git

# The location for source install (string value)
location = https://github.com/SeanMooney/bifrost.git

# Git reference to pull, commit sha, tag or branch name (string value)
reference = kolla


# build the container
tools/build.py bifrost-systemd

# manually run the bifrost container
docker run -it --net=host -v /dev:/dev -d --privileged --name bifrost \
    192.168.1.51:5000/kollaglue/centos-source-bifrost-systemd:2.0.0

# fix the hosts file: add the hostname to the 127.0.0.1 line
nano /etc/hosts

# generate an ssh key
ssh-keygen

# source env variables
cd /bifrost
. env-vars
. /opt/stack/ansible/hacking/env-setup
cd playbooks/

# bootstrap and start services (can be split using skip_bootstrap and
# skip_start)
ansible-playbook -i /bifrost/playbooks/inventory/localhost \
    /bifrost/playbooks/install.yaml -e skip_install=true \
    -e mysql_service_name=mysql \
    -e "ansible_python_interpreter=/var/lib/kolla/venv/bin/python" \
    -e network_interface=enp2s0

# at this point ironic is deployed and running; "ironic node-list"
# should return with no nodes

# create a yml file describing your physical nodes' ipmi credentials,
# e.g. /tmp/servers.yml

---
cloud1:
 uuid: "31303735-3934-4247-3830-333132535336"
   

Re: [openstack-dev] [kolla] [bifrost] bifrost container.

2016-05-10 Thread Mooney, Sean K
eady, and 
reliable.

I'd love to hear from other folks about their journey with bare metal 
deployment with Kolla.

Thx,
britt

From: Mark Casey <markca...@pointofrental.com>
Reply-To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Date: Monday, May 9, 2016 at 6:48 PM
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [kolla] [bifrost] bifrost container.

I'm not sure if it is necessary to write up or provide support on how to use 
more than one deployment tool, but I think any work that inadvertently makes it 
harder for an operator to use their own existing deployment infrastructure 
could run some people off.

Regarding "deploy a VM to deploy bifrost to deploy bare metal", I suspect that 
situation will not be unique to bifrost. At the moment I'm using MAAS and it 
has a hard dependency on Upstart for init up until around Ubuntu Trusty and 
then was ported to systemd in Wily. I do not think you can just switch to 
another init daemon or run it under supervisord without significant work. I was 
not even able to get the maas package to install during a docker build because 
it couldn't communicate with the init system it wanted. In addition, for any 
deployment tool that enrolls/deploys via PXE the tool may also require 
accommodations when being containerized simply because this whole topic is 
fairly low in the stack of abstractions. For example I'm not sure whether any 
of these tools running in a container would respond to a new bare metal host's 
initial DHCP broadcast without --net=host or similar consideration.

As long as the most common deployment option in Kolla is Ansible, making 
deployment tools pluggable is fairly easy to solve. MAAS and bifrost both have 
inventory scripts that can provide dynamic inventory to kolla-ansible while 
still pulling Kolla's child groups from the multinode inventory file. Another 
common pattern could be for a given deployment tool to template out a new 
(static) multinode inventory and then we just append Kolla's groups to the file 
before calling kolla-ansible. The problem, to me, becomes in getting every 
other option (k8s, puppet, etc.) to work similarly. Perhaps you just state that 
each implementation must be pluggable to various deployment tools and let 
people that know their respective tool handle the how.(?)
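As a rough illustration on the bifrost side (the script path and playbook
name here are assumptions, not tested commands), feeding its dynamic
inventory straight to kolla-ansible's playbooks could look like:

# use bifrost's dynamic inventory script as the ansible inventory (sketch)
ansible-playbook -i /bifrost/playbooks/inventory/bifrost_inventory.py \
    ansible/site.yml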

Currently I am running MAAS inside a Vagrant box to retain some of the 
immutability and easy "create/destroy" workflow that having it containerized 
would offer. It works very well and, assuming nothing else was running on the 
underlying deployment host, I'd have no issue running it in prod that way even 
with the Vagrant layer.

Thank you,
Mark
On 5/9/2016 4:52 PM, Britt Houser (bhouser) wrote:
Are we (as the Kolla community) open to other bare metal provisioners?  The 
austin discussion was titled generic bare metal, but very quickly turned into 
bifrost-only discourse.  The initial survey showed cobbler/maas/OoO as  
alternatives people use today. So if the bifrost strategy is "deploy a VM to
deploy bifrost to deploy bare metal" and will be cleaned up later, then maybe
it's
time to take a deeper look at the other deployment tools and see if they are a 
better fit?

Thx,
britt

From: "Steven Dake (stdake)" <std...@cisco.com<mailto:std...@cisco.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Monday, May 9, 2016 at 5:41 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [kolla] [bifrost] bifrost container.



From: Devananda van der Veen <devananda@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: Monday, May 9, 2016 at 1:12 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [kolla] [bifrost] bifrost container.



On Fri, May 6, 2016 at 10:56 AM, Steven Dake (stdake) <std...@cisco.com> wrote:
Sean,

Thanks for taking this on :)  I didn't know you had such an AR :)

From: "Mooney, Sean K" <sean.k.moo...@intel.com<mailto:sean.k.moo...@intel.com>>
Reply-To: "OpenStack Dev

Re: [openstack-dev] [kolla] [bifrost] bifrost container.

2016-05-09 Thread Mooney, Sean K
Hi,

If we choose to use bifrost to deploy ironic standalone, I think combining
Kevin's previous suggestion of modifying the bifrost install playbook with
Steve Dake's suggestion of creating a series of supervisord configs for
running each of the services is a reasonable approach.

I am currently looking to scope how much effort would be required to split
the main task in the bifrost-ironic-install role
https://github.com/openstack/bifrost/blob/master/playbooks/roles/bifrost-ironic-install/tasks/main.yml
into 3 files which would be included in the main.yml:
Install_components.yml (executed when skip_install is not defined)
Bootstrap_components.yml (executed when skip_bootstrap is not defined)
Start_components.yml (executed when skip_start is not defined)
By default all three would be executed, maintaining the current behavior of
bifrost today.

During the kolla build of the bifrost image,
https://github.com/openstack/bifrost/blob/master/playbooks/install.yaml would
be run with skip_bootstrap and skip_start defined as true, so only
Install_components.yml would be executed by the main task. This would install
all software components of bifrost/ironic without performing configuration or
starting the services.
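For illustration, the build-time invocation could look roughly like this (a
sketch based on the skip_* proposal above; paths and extra-vars are
illustrative):

# install-only run at image build time (sketch)
ansible-playbook -i /bifrost/playbooks/inventory/localhost \
    /bifrost/playbooks/install.yaml \
    -e skip_bootstrap=true -e skip_start=true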

At deployment time, during the bootstrap phase, we would spawn an instance of
the bifrost-base container and invoke
https://github.com/openstack/bifrost/blob/master/playbooks/install.yaml with
skip_install and skip_start defined, executing Bootstrap_components.yml.

Bootstrap_components.yml would encapsulate all logic related to creating the
ironic db (running migration scripts) and generating the configuration files
for the bifrost components.

Finally, in the start phase we have 3 options:

a) Spawn an instance of the bifrost-supervisor container and use supervisord
   to run the bifrost/ironic services (fat container)

b) Spawn an instance of the bifrost-base container and invoke
   https://github.com/openstack/bifrost/blob/master/playbooks/install.yaml
   with skip_install and skip_bootstrap, and allow bifrost to start the
   services (fat container)

c) Spawn a series of containers, each running a single service, sharing the
   required volumes to allow them to communicate (app containers)

I would welcome any input from the bifrost community on this, especially
related to the decomposition of the main.yml into 3 phases. I'm hoping to do
a quick PoC this week to see how easy this decomposition is.

I would also like to call out up front that, depending on the scope of this
item, I may have to withdraw from contributing to it. I work in intel's
network platforms group, so enabling baremetal installation is somewhat
outside the standard work our division undertakes. If we can reuse bifrost to
do most of the heavy lifting of creating the bifrost container and deploying
ironic, then the scope of creating the bifrost container is small enough that
I can justify spending some of my time working on it. If it requires
significant changes to bifrost or a rework of kolla's ironic support, then I
will have to step back and focus more on features that are closer aligned to
our team's core networking and orchestration focus, such as enhancing kolla
to be able to deploy ovs with dpdk and/or opendaylight, which are also items
I would like to contribute to this cycle. I don't want to commit to
delivering this feature unless I know I will have the time to work on it, but
I am happy to help where I can.

@kevin some replies to your questions inline.

Regards
Sean.


From: Fox, Kevin M [mailto:kevin@pnnl.gov]
Sent: Friday, May 6, 2016 9:17 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [kolla] [bifrost] bifrost container.

I was under the impression bifrost was 2 things, one, an installer/configurator 
of ironic in a stand alone mode, and two, a management tool for getting 
machines deployed without needing nova using ironic.
[Mooney, Sean K] Yes, this is correct. Bifrost provides both install
playbooks for deploying ironic in standalone mode and a series of playbooks
for dynamically enrolling nodes in ironic and dynamically deploying images to
hosts without requiring nova. Bifrost also provides integration with
diskimage-builder to generate machine images if desired.
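For anyone unfamiliar with that flow, the enroll/deploy side looks roughly
like this (a sketch based on bifrost's dynamic playbooks; the servers.yml
schema follows bifrost's inventory format):

# enroll hardware and deploy an image using bifrost's dynamic playbooks
export BIFROST_INVENTORY_SOURCE=/tmp/servers.yml
ansible-playbook -i inventory/bifrost_inventory.py enroll-dynamic.yaml
ansible-playbook -i inventory/bifrost_inventory.py deploy-dynamic.yaml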


The first use case seems like it should just be handled by enhancing kolla's
ironic container stuff to directly handle the use case, doing things the
kolla way. This seems much cleaner to me. Doing it at runtime loses most of
the benefits of doing it in a container at all.
[Mooney, Sean K] I was not suggesting doing the installation at runtime.
Options 2 and 3 suggested spawning a container as part of the build, in which
the install playbook would be run. That container would then be stopped and
exported to form the base image for the bifrost container(s). The base image
(bifrost-postinstall) would either be used to create a fat cont

Re: [openstack-dev] [kolla] [bifrost] bifrost container.

2016-05-06 Thread Mooney, Sean K


From: Steven Dake (stdake) [mailto:std...@cisco.com]
Sent: Friday, May 6, 2016 6:56 PM
To: OpenStack Development Mailing List (not for usage questions) 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [kolla] [bifrost] bifrost container.

Sean,

Thanks for taking this on :)  I didn't know you had such an AR :)
[Mooney, Sean K] Well, if others want to do the work that is ok with me too,
but I was planning on deploying bifrost at home again anyway, so I thought I
might as well try to automate the process while I'm at it.

From: "Mooney, Sean K" <sean.k.moo...@intel.com<mailto:sean.k.moo...@intel.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Friday, May 6, 2016 at 10:14 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [kolla] [bifrost] bifrost container.

Hi everyone.

Following up on my AR from the kolla host repository session
https://etherpad.openstack.org/p/kolla-newton-summit-kolla-kolla-host-repo
I started working on creating a kolla bifrost container.

After some initial success I have hit a roadblock with the current install
playbook provided by bifrost. In particular, the install playbook both
installs the ironic dependencies and configures and runs the services.


What I'd do here is ignore the install playbook and duplicate what it
installs.  We don't want to install at run time, we want to install at build
time.  You weren't clear if that is what you're doing.
[Mooney, Sean K] That is certainly an option, but bifrost is an installer for
ironic and its supporting services. Not using its installation scripts
significantly reduces the value of integrating with bifrost vs fixing the
existing ironic support in kolla and using that to provision the undercloud.

The reason we would ignore the install playbook is because it runs the
services.  We need to run the services in a different way.  This will (as we
discussed at ODS) be a fat container on the undercloud - which I guess is
ok.  I'd recommend not using systemd, as that will break systemd systems
badly.  Instead use a different init system, such as supervisord.
[Mooney, Sean K] If we don't use the bifrost install playbook then yes,
supervisord would be a good choice for the init system. Looking at the
official centos docker image https://hub.docker.com/_/centos/, they do
provide instructions for running systemd containers, though I have had issues
with this in the past.
The installation of ironic and its dependencies would not be a problem, but
the ansible service module is not capable of starting the infrastructure
services (mysql, rabbit, ...) without a running init system, which is not
present during the docker build.

When I created a bifrost container in the past, I spawned an Ubuntu upstart
container, then docker exec'd into the container and ran the bifrost install
script. This works because the init system is running and the service module
could test and start the relevant services.


This leaves me with 3 paths forward:


1. I can continue to try to make the bifrost install script work with the
   kolla build system, by using sed to modify the install playbook or by
   trying to start systemd during the docker build.

2. I can use the kolla build system to build only part of the image:

   a. The bifrost-base image would be built with the kolla build system
      without running the bifrost playbook. This would allow the existing
      features of the build system, such as adding headers/footers, to be
      used.

   b. After the base image is built by kolla, I can spawn an instance of
      bifrost-base with systemd running.

   c. I can then connect to this running container and run the bifrost
      install script unmodified.

   d. Once it is finished I can stop the container and export it to an image
      "bifrost-postinstall".

   e. This can either be used directly (fat container) or as the base image
      for other containers that run each of the ironic services (thin
      containers).

3. I can skip the kolla build system entirely and create a script/playbook
   that will build the bifrost container similar to 2.

4.
Make a supervisord set of init scripts and make the docker file do what it was 
intended - install the files.  This is kind of a mashup of your 1-3 ideas.  
Good thinking :)


While option 1 would fully use the kolla build system, it is my least
favorite, as it is both hacky and complicated to make work. Docker really was
not designed to run systemd as part of a docker build.

For options 2 and 3 I can provide a single playbook/script that will fully
automate the build, but the real question I have is whether I should use the
kolla build system to make the base image or not.

If anyone else has sugges

[openstack-dev] [kolla] [bifrost] bifrost container.

2016-05-06 Thread Mooney, Sean K
Hi everyone.

Following up on my AR from the kolla host repository session
https://etherpad.openstack.org/p/kolla-newton-summit-kolla-kolla-host-repo
I started working on creating a kolla bifrost container.

After some initial success I have hit a roadblock with the current install
playbook provided by bifrost. In particular, the install playbook both
installs the ironic dependencies and configures and runs the services.

The installation of ironic and its dependencies would not be a problem, but
the ansible service module is not capable of starting the infrastructure
services (mysql, rabbit, ...) without a running init system, which is not
present during the docker build.

When I created a bifrost container in the past, I spawned an Ubuntu upstart
container, then docker exec'd into the container and ran the bifrost install
script. This works because the init system is running and the service module
could test and start the relevant services.


This leaves me with 3 paths forward:


1. I can continue to try to make the bifrost install script work with the
   kolla build system, by using sed to modify the install playbook or by
   trying to start systemd during the docker build.

2. I can use the kolla build system to build only part of the image:

   a. The bifrost-base image would be built with the kolla build system
      without running the bifrost playbook. This would allow the existing
      features of the build system, such as adding headers/footers, to be
      used.

   b. After the base image is built by kolla, I can spawn an instance of
      bifrost-base with systemd running.

   c. I can then connect to this running container and run the bifrost
      install script unmodified.

   d. Once it is finished I can stop the container and export it to an image
      "bifrost-postinstall".

   e. This can either be used directly (fat container) or as the base image
      for other containers that run each of the ironic services (thin
      containers).

3. I can skip the kolla build system entirely and create a script/playbook
   that will build the bifrost container similar to 2.


While option 1 would fully use the kolla build system, it is my least
favorite, as it is both hacky and complicated to make work. Docker really was
not designed to run systemd as part of a docker build.

For options 2 and 3 I can provide a single playbook/script that will fully
automate the build, but the real question I have is whether I should use the
kolla build system to make the base image or not.

If anyone else has suggestions on how I can progress, please let me know, but
currently I am leaning towards option 2.

The only other option I see would be to not use a container and either
install bifrost on the host or in a vm. These would essentially be a no-op
for kolla, as we would simply have to document how to install bifrost, which
is covered quite well as part of the bifrost project.

Regards
Sean.



[openstack-dev] [neutron] dvr with ovs-dpdk.

2016-04-29 Thread Mooney, Sean K
Hi,
If any of the dvr team are around, I would love to meet with you to discuss
how to make dvr work efficiently when using ovs with the dpdk datapath.
I'll be around the Hilton all day and am currently in room 400, but if anyone
wants to meet up to discuss this, let me know.

Regards
Seán



Re: [openstack-dev] [devstack] openstack client slowness / client-as-a-service

2016-04-19 Thread Mooney, Sean K

> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Tuesday, April 19, 2016 8:10 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [devstack] openstack client slowness /
> client-as-a-service
> 
> On 04/19/2016 02:17 PM, Monty Taylor wrote:
> > Rather than ditching python for something like go, I'd rather put
> > together a CLI with no plugins and that only depended on keystoneauth
> > and os-client-config as libraries. No?
[Mooney, Sean K] That is similar to how shade works, correct? There are no
plugins; all services are part of the core project.

It might be a little out there, but I was playing with http://nuitka.net/ at
the weekend. It's a python compiler that uses libpython to generate an AST
that gets converted to C++ and compiled with the compiler of your choice to
produce a binary.

I used clang and was able to compile the nova compute agent, the nova
scheduler and their oslo dependencies without any code modification. I
haven't had a chance to use rally to see if it actually improved the
performance, but it was perfectly functional.

As a follow-up I was thinking of creating a tox env that would read the
console_scripts entries in the setup.cfg and generate a binary for each by
compiling it with nuitka. If you still want to code in python but improve the
speed, then compiling it would also be worth considering before moving to
something like go, especially since nuitka allows you to mix compiled and
uncompiled code.
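For reference, the kind of invocation I used looks roughly like this (exact
flags depend on the nuitka version, so treat this as illustrative):

# compile a console script to a native binary with nuitka, forcing clang
pip install nuitka
python -m nuitka --clang --recurse-all /usr/local/bin/nova-scheduler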

> 
> Bingo.
> 
> -jay
> 
> 


Re: [openstack-dev] [neutron][nova][stable][sr-iov] Status of physical_device_mappings

2016-03-24 Thread Mooney, Sean K


> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Wednesday, March 23, 2016 8:01 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [neutron][nova][stable][sr-iov] Status of
> physical_device_mappings
> 
> +tags for stable and nova
> 
> Hi Vladimir, comments inline. :)
> 
> On 03/21/2016 05:16 AM, Vladimir Eremin wrote:
> > Hey OpenStackers,
> >
> > I've recently found out, that changing of use neutron sriov-agent in
> Mitaka from optional to required[1] makes a kind of regression.
> 
> While I understand that it is important for you to be able to associate
> more than one NIC to a physical network, I see no evidence that there
> was a *regression* in Mitaka. I don't see any ability to specify more
> than one NIC for a physical network in the Liberty Neutron SR-IOV ML2
> agent:
> 
> https://github.com/openstack/neutron/blob/stable/liberty/neutron/common/
> utils.py#L223-L225
> 
> > Before Mitaka, there was possible to use any number of NICs with one
> Neutron physnet just by specifying pci_passthrough_whitelist in nova:
> >
> >  [default]
> >  pci_passthrough_whitelist = { "devname": "eth3",
> > "physical_network": "physnet2"},{ "devname": "eth4",
> > "physical_network": "physnet2"},
> >
> > which means, that eth3 and eth4 will be used for physnet2 in some
> manner.
> 
> Yes, *in Nova*, however from what I can tell, this functionality never
> existed in the parse_mappings() function in neutron.common.utils module.
> 
> > In Mitaka, there also required to setup neutron sriov-agent as well:
> >
> >  [sriov_nic]
> >  physical_device_mappings = physnet2:eth3
> >
> > The problem actually is to unable to specify this mapping as
> "physnet2:eth3,physnet2:eth4" due to implementation details, so it is
> clearly a regression.
> 
> A regression means that a change broke some previously-working
> functionality. This is not a regression, since there apparently was
> never such functionality in Neutron.
This may have worked in the past if you did not use the neutron sriov-nic
agent.
In liberty the agent was optional and not used with intel nics, but in mitaka
it is now required.
I do not have a liberty system to hand to test, but perhaps that is how it
worked (assuming it did work)
in liberty but not in mitaka?
> 
> > I've filed bug[2] for it and proposed a patch[3]. Originally
> physical_device_mappings is converted to dict, where physnet name goes
> to key, and interface name to value:
> >
> >  >>> parse_mappings('physnet2:eth3')
> >  {'physnet2': 'eth3'}
> >  >>> parse_mappings('physnet2:eth3,physnet2:eth4')
> >  ValueError: Key physnet2 in mapping: 'physnet2:eth4' not unique
> >
> > I've changed it a bit, so interface name is stored in list, so now
> this case is working:
> >
> >  >>> parse_mappings_multi('physnet2:eth3,physnet2:eth4')
> >  {'physnet2': ['eth3', 'eth4']}
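[Mooney, Sean K] For illustration, a minimal parser with those semantics (just
a sketch, not the actual code in [3]) could look like:

    def parse_mappings_multi(mapping_string):
        # collect duplicate keys into lists instead of raising ValueError
        mappings = {}
        for mapping in mapping_string.split(','):
            key, value = mapping.split(':', 1)
            mappings.setdefault(key.strip(), []).append(value.strip())
        return mappings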
> >
> > I'd like to see this fix[3] in master and Mitaka branch.
> 
> I understand you really want this functionality in Mitaka. And I will
> leave it up to the stable team to determine whether this code should be
> backported to stable/mitaka. However, I will point out that this is a
> new feature, not a bug fix for a regression. There is no regression
> because the ability for Neutron to use more than one NIC with a physnet
> was never supported as far as I can tell.
> 
> Best,
> -jay
> 
> > Moshe Levi also proposed to refactor this part of code to remove
> physical_device_mappings and reuse data that nova provides somehow. I'll
> file the RFE as soon as I figure out how it should work.
> >
> > [1]:
> > http://docs.openstack.org/liberty/networking-guide/adv_config_sriov.ht
> > ml
> > [2]: https://bugs.launchpad.net/neutron/+bug/1558626
> > [3]: https://review.openstack.org/294188
> >
> > --
> > With best regards,
> > Vladimir Eremin,
> > Fuel Deployment Engineer,
> > Mirantis, Inc.
> >
> >
> >
> >
> >
> > __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] stuck review, mtu incorrectly set on vhost-user port prevents vm booting.

2016-03-11 Thread Mooney, Sean K
Hi everyone,
I opened a bug back in January to correct an issue with vhost-user
related to setting the mtu on the interface.

The recent changes in nova and neutron to change the default mtu value from 0
to 1500
are a direct trigger for this bug, resulting in an inability to boot vms that
use vhost-user.

If anyone on the nova core/stable teams has time to look at this bug and the
fix, I would appreciate it.

I would really like to get this merged before rc1 is tagged, if possible.
We have been running it in our ci for two weeks to keep our cis working, but I
want to get them back to running against the head of master nova as soon as
possible.

Bug : https://bugs.launchpad.net/nova/+bug/1533876
Review: https://review.openstack.org/#/c/271444/

Liberty backport: https://review.openstack.org/#/c/289370/1
Kilo backport: https://review.openstack.org/#/c/289374/1

Regards
Seán

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] adding ovs dpdk agent into neutron

2016-03-03 Thread Mooney, Sean K


> -Original Message-
> From: Vladimir Eremin [mailto:vere...@mirantis.com]
> Sent: Wednesday, March 2, 2016 8:49 PM
> To: OpenStack Development Mailing List (not for usage questions)
> <openstack-dev@lists.openstack.org>
> Cc: Emilien Macchi <emil...@redhat.com>; m...@mattfischer.com; Mooney,
> Sean K <sean.k.moo...@intel.com>
> Subject: Re: [openstack-dev] [puppet] adding ovs dpdk agent into neutron
> 
> Hi MichalX, Sean,
> 
> Building from sources is possible, but it will be more stable, if you
> will use packaging system from the OS. Also, it will be really good if
> your module make changes to OpenStack configuration files using puppet-
> nova and puppet-neutron, and it could be split for compute/agent and
> scheduler changes.

So just to clarify: we do intend to add support for installing from distro
packages.

When that is added, it will be our default, but we want source support
because (1) distros have not
released a stable version of the ovs-with-dpdk package yet, and (2) they may
be missing an upstream
feature in ovs that requires a newer commit. A source build gives that
flexibility, but not
the stability of distro-certified packages.

I can certainly see splitting the logic. For example, configuration of
hugepages could be
extracted from this puppet module and added to puppet-nova, as libvirt should
be configured
to enable hugepages. Similarly, the modification of the vcpu pinset in nova to
remove
the cores dedicated to ovs with dpdk could reside there, but I was not sure if
puppet-nova
would like to have logic specific to a particular network backend.

Really the core of what I saw our module doing was:
- Installing ovs (from source or a distro package)
- Creating and starting ovsdb
- Creating the ovs bridges with the netdev datapath
- Binding the physical interfaces to the dpdk driver
- Attaching the dpdk interfaces to the correct ovs bridge
- Starting the ovs-vswitchd process

Ancillary to that, it currently makes minor modifications to
/etc/neutron/plugins/ml2_conf.ini
to set the datapath for the ovs neutron agent, and, as I said before, it
modifies the vcpu_pinset
in /etc/nova/nova.conf to remove the cpus dedicated to ovs with dpdk from
those available to vms.
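For illustration, the end state on a compute node is roughly the following
(the option names are from memory and may differ per release):

    # /etc/neutron/plugins/ml2_conf.ini
    [ovs]
    datapath_type = netdev

    # /etc/nova/nova.conf (e.g. with cores 0-3 dedicated to ovs-dpdk)
    [DEFAULT]
    vcpu_pin_set = 4-15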


> 
> I will really glad to see modular, reusable solution that could be
> integrated with our implementation in fuel-library[1].
> 
> [1]: https://review.openstack.org/#/q/topic:bp/support-
> dpdk+project:openstack/fuel-library,n,z
> 
> --
> With best regards,
> Vladimir Eremin,
> Fuel Deployment Engineer,
> Mirantis, Inc.
> 
> 
> 
> > On Mar 2, 2016, at 10:48 PM, Ptacek, MichalX
> <michalx.pta...@intel.com> wrote:
> >
> > Thanks Emilien,
> > It's becoming more clear to me what has to be done.
> > Did I get it correctly that using bash code inside puppet module is
> "nish nish" and will NOT be accepted by the community ?
> > (even if we move the logic into own module like openstack/ovs-dpdk)
> > Additionally building from the src or using own packages from such
> builds is also not possible in such modules even despite its performance
> or other functional benefits ?
> >
> > best regards,
> > Michal
> >
> > -Original Message-
> > From: Emilien Macchi [mailto:emil...@redhat.com]
> > Sent: Wednesday, March 02, 2016 6:51 PM
> > To: Ptacek, MichalX <michalx.pta...@intel.com>; 'OpenStack Development
> > Mailing List (not for usage questions)' <openstack-dev@lists.openstack.org>; m...@mattfischer.com
> > Cc: Mooney, Sean K <sean.k.moo...@intel.com>; Czesnowicz, Przemyslaw
> <przemyslaw.czesnow...@intel.com>
> > Subject: Re: [openstack-dev] [puppet] adding ovs dpdk agent into
> neutron
> >
> >
> >
> > On 03/02/2016 03:07 AM, Ptacek, MichalX wrote:
> >> Hi all,
> >>
> >>
> >>
> >> we have puppet module for ovs deployments with dpdk support
> >>
> >> https://github.com/openstack/networking-ovs-dpdk/tree/master/puppet
> >
> > IMHO that's a bad idea to use networking-ovs-dpdk for the puppet
> module.
> > You should initiate the work to create openstack/puppet-dpdk (not sure
> about the name) or try to patch openstack/puppet-vswitch.
> >
> > How puppet-vswitch would be different from puppet-dpdk?
> >
> > I've looked at the code and you run bash scripts from Puppet.
> > Really ? :-)
> >
> >> and we would like to adapt it in a way that it can be used within
> >> upstream neutron module
> >>
> >> e.g. to introduce class like this
> >>
> >> neutron::agents::ml2::ovsdpdk
> >>
> >>
> >>
> >> Current code works as f

Re: [openstack-dev] [puppet] adding ovs dpdk agent into neutron

2016-03-03 Thread Mooney, Sean K


> -Original Message-
> From: Emilien Macchi [mailto:emil...@redhat.com]
> Sent: Wednesday, March 2, 2016 8:33 PM
> To: Ptacek, MichalX <michalx.pta...@intel.com>; 'OpenStack Development
> Mailing List (not for usage questions)' <openstack-dev@lists.openstack.org>; m...@mattfischer.com
> Cc: Mooney, Sean K <sean.k.moo...@intel.com>; Czesnowicz, Przemyslaw
> <przemyslaw.czesnow...@intel.com>
> Subject: Re: [openstack-dev] [puppet] adding ovs dpdk agent into neutron
> 
> 
> 
> On 03/02/2016 02:48 PM, Ptacek, MichalX wrote:
> > Thanks Emilien,
> > It's becoming more clear to me what has to be done.
> > Did I get it correctly that using bash code inside puppet module is
> "nish nish" and will NOT be accepted by the community ?
> 
> It's really bad practice in my opinion.

We use bash as the puppet module has evolved from our existing devstack
plugin.
We are not a ruby house, so we have resisted converting it until we had
a valid business or customer usecase. If it is required for upstreaming,
then that's a good business/customer requirement and it can be ported, but
our background is in c/python/bash, so neither ruby nor puppet are
technologies we use frequently; hence why we are reaching out now to
understand how this should work if it were to be upstreamed.
 
> > (even if we move the logic into own module like openstack/ovs-dpdk)
> > Additionally building from the src or using own packages from such
> builds is also not possible in such modules even despite its performance
> or other functional benefits ?
> 
> We like things done upstream, if networking-ovs-dpdk is part of
> OpenStack, let's package it (and its dependencies) upstream too.
> 
> Do we have any blocker on that?

networking-ovs-dpdk is packaged on pypi.

Debian and Ubuntu ship our kilo release in the distro.
I have reached out to redhat in the past to package it,
but currently there has been no traction there.

I have not made a liberty release yet (I was waiting for a neutron patch to
be backported after
another backport broke our security group driver),
but I hope to tag and release the 2.0.0 version to pip/pypi this month.

The 3.0.0 release for mitaka should be made in April, around the time of the
summit.

It would be nice to see osp 9 or the Ubuntu cloud archive for mitaka pick up
the 3.0.0 release, but I have little control over distro packaging; it will,
however, be available on pypi.


> 
> 
> > best regards,
> > Michal
> >
> > -Original Message-
> > From: Emilien Macchi [mailto:emil...@redhat.com]
> > Sent: Wednesday, March 02, 2016 6:51 PM
> > To: Ptacek, MichalX <michalx.pta...@intel.com>; 'OpenStack Development
> > Mailing List (not for usage questions)'
> > <openstack-dev@lists.openstack.org>; m...@mattfischer.com
> > Cc: Mooney, Sean K <sean.k.moo...@intel.com>; Czesnowicz, Przemyslaw
> > <przemyslaw.czesnow...@intel.com>
> > Subject: Re: [openstack-dev] [puppet] adding ovs dpdk agent into
> > neutron
> >
> >
> >
> > On 03/02/2016 03:07 AM, Ptacek, MichalX wrote:
> >> Hi all,
> >>
> >>
> >>
> >> we have puppet module for ovs deployments with dpdk support
> >>
> >> https://github.com/openstack/networking-ovs-dpdk/tree/master/puppet
> >
> > IMHO that's a bad idea to use networking-ovs-dpdk for the puppet
> module.
> > You should initiate the work to create openstack/puppet-dpdk (not sure
> about the name) or try to patch openstack/puppet-vswitch.
> >
> > How puppet-vswitch would be different from puppet-dpdk?
> >
> > I've looked at the code and you run bash scripts from Puppet.
> > Really ? :-)
> >
> >> and we would like to adapt it in a way that it can be used within
> >> upstream neutron module
> >>
> >> e.g. to introduce class like this
> >>
> >> neutron::agents::ml2::ovsdpdk
> >>
> >>
> >>
> >> Current code works as follows:
> >>
> >> -  Openstack with installed vanilla ovs is a kind of
> precondition
> >>
> >> -  Ovsdpdk puppet module installation is triggered afterwards
> >> and it replace vanilla ovs by ovsdpdk
> >>
> >> (in order to have some flexibility and mostly due to performance
> >> reasons we are building ovs from src code)
> >>
> >> https://github.com/openstack/networking-ovs-dpdk/blob/master/puppet/o
> >> v
> >> sdpdk/files/build_ovs_dpdk.erb
> >>
> >> -  As a part of deployments we have several shell scripts,
> which
> >> are taking care of build and configuration stuff
> >>

Re: [openstack-dev] [nova] solving API case sensitivity issues

2016-02-24 Thread Mooney, Sean K


> -Original Message-
> From: James Bottomley [mailto:james.bottom...@hansenpartnership.com]
> Sent: Wednesday, February 24, 2016 5:46 PM
> To: Sean Dague ; openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] solving API case sensitivity issues
> 
> On Wed, 2016-02-24 at 11:40 -0500, Sean Dague wrote:
> > On 02/24/2016 11:28 AM, James Bottomley wrote:
> > > On Wed, 2016-02-24 at 07:48 -0500, Sean Dague wrote:
> > > > We have a specific bug around aggregrate metadata setting in Nova
> > > > which exposes a larger issue with our mysql schema.
> > > > https://bugs.launchpad.net/nova/+bug/1538011
> > > >
> > > > On mysql the following will explode with a 500:
> > > >
> > > > > nova aggregate-create agg1
> > > > > nova aggregate-set-metadata agg1 abc=1 nova
> > > > > aggregate-set-metadata agg1 ABC=2
> > > >
> > > > mysql (by default) treats abc == ABC. However the python code does
> > > > not.
Personally I would argue that the python code is correct
and that they should not be considered the same. ABC and abc are two different
keys in the aggregate metadata, and I do not think it is correct to treat them
the same. Given the above commands, I would expect the metadata to contain two
key pairs: [abc=1, ABC=2].
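For example, from the python side both keys coexist happily:

    >>> metadata = {}
    >>> metadata['abc'] = 1
    >>> metadata['ABC'] = 2
    >>> sorted(metadata.items())
    [('ABC', 2), ('abc', 1)]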

> > > >
> > > > We have a couple of options:
> > > >
> > > > 1) make the API explicitly case fold
> > > >
> > > > 2) update the mysql DB to use latin_bin collation for these
> > > > columns
This should not be latin_bin: as Unicode is allowed in URLs, this should
really be utf8_bin.
> > > >
> > > > 3) make this a 400 error because duplicates were found
> > > >
> > > >
> > > > Options 1 & 2 make all OpenStack environments consistent
> > > > regardless of backend.
> > > >
> > > > Option 2 is potentially expensive TABLE alter.
> > > >
> > > > Option 3 gets rid of the 500 error, however at the risk that the
> > > > behavior for this API is different depending on DB backend. Which
> > > > is less than ideal.
> > > >
> > > >
> > > > My preference is slightly towards #1. It's taken a long time for
> > > > someone to report this issue, so I think it's an edge case, and
> > > > people weren't think about this being case sensitive. It has the
> > > > risk of impacting someone on an odd db platform that has been
> > > > using that feature.
> > > >
> > > > There are going to be a few other APIs to clean up in a similar
> > > > way.
> > > > I don't think this comes in under a microversion because of how
> > > > deep in the db api layer this is, and it's just not viable to keep
> > > > both paths.
> > >
> > > This is actually one of the curses wished on us by REST.  Since the
> > > intent is to use web requests for the API, the API name must follow
> > > the case sensitivity rules for URL matching (case insensitive).
> >
> > Um... since when are URLs generically case insensitive? The host
> > portion is - https://tools.ietf.org/html/rfc3986#section-3.2.2
> > and the scheme
> > portion is - https://tools.ietf.org/html/rfc3986#section-3.1 however
> > nothing about the PATH specifies that it should or must be (in section
> > 3.3)
I would agree with this: we should not assume that URLs are case insensitive,
nor should we assume they are ascii. I would be in favor of option 2 with
utf8_bin, as this supports both Unicode and case sensitivity.
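To illustrate both points in python:

    >>> u'ABC' == u'abc'                 # binary comparison, as utf8_bin would do
    False
    >>> u'ABC'.lower() == u'abc'         # option-1-style folding merges the keys
    True
    >>> u'STRASSE'.lower() == u'straße'  # naive folding misses non-ascii case pairs
    False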
> 
> Heh, OK, I'm out of date.  When we first argued over this, Microsoft
> required case insensitive matching for the path component because IIS
> was doing lookups on vfat filesystems which are naturally case
> insensitive.  If that's been excised from the standard, I'm happy to
> keep it in the dustbin of history.
> 
> > While it's a little off topic, this is the 2nd time in a month it came
> > up, so I'd like to know if there is a reference for the case
> > insensitive pov.
> 
> I checked; it looks to be implementation specific.  So php, for
> instance, does case sensitive
> 
> /index.php != /Index.php
> 
> But drupal does case insensitive
> 
> /node/6 == /Node/6 == /NoDe/6
> 
> So all in all, a bit of a mess.
> 
> James
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Shovel (RackHD/OpenStack)

2016-01-13 Thread Mooney, Sean K


> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: Wednesday, January 13, 2016 8:53 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] Shovel (RackHD/OpenStack)
> 
> On 01/13/2016 03:28 PM, Keedy, Andre wrote:
> > Hi All, I'm pleased to announce a new application called 'Shovel 'that
> > is now available in a public repository on GitHub
> > (https://github.com/keedya/Shovel).  Shovel is a server with a set of
> > APIs that wraps around RackHD/Ironic's existing APIs allowing users to
> > find Baremetal Compute nodes that are dynamically discovered by RackHD
> > and register them with Ironic. Shovel also uses the SEL pollers
> > service in RackHD to monitor compute nodes and logs errors from SEL
> > into the Ironic Database.  Shovel includes a graphical interface using
> Swagger UI.
> >
> > Also provided is a Shovel Horizon plugin to interface with the Shovel
> > service that is available in a public repository on GitHub
> > (https://github.com/keedya/shovel-horizon-plugin). The Plugin adds a
> > new Panel to the admin Dashboard called rackhd that displays a table
> > of all the Baremetal systems discovered by RackHD. It also allows the
> > user to see the node catalog in a nice table view, register/unregister
> > node in Ironic, display node SEL and enable/register a failover node.
> >
> > I invite you to take a look at Shovel and Shovel horizon plugin that
> > is available to the public on GitHub.
> 
> Would EMC be interested in contributing to the OpenStack Ironic project
> around hardware discovery and automated registration of hardware? It
> would be nice to have a single community pulling in the same direction.
> It looks to me that RackHD is only a few months old. Was there a
> particular reason that EMC decided to start a new open source project
> for doing hardware management instead of contributing to the OpenStack
> Ironic project?
> 
> It was a bit surprising to me actually, to see Joe Heck, who used to be
> a very active contributor in OpenStack, started the RackHD project.
> 
> Also, just FYI, "Shovel" is a RabbitMQ thing:
> 
> https://www.rabbitmq.com/shovel.html
> 
> Might be worth looking into a rename of your project to avoid confusion,
> but that's just a suggestion.
It's also a python library for converting functions into tasks invokable
from the commandline; however, it has not had a release in the past year, so
development may not be ongoing.

https://github.com/seomoz/shovel
https://pypi.python.org/pypi/shovel

> 
> Best,
> -jay
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron][os-vif] os-vif core review team membership

2016-01-12 Thread Mooney, Sean K
> -Original Message-
> From: Moshe Levi [mailto:mosh...@mellanox.com]
> Sent: Tuesday, January 12, 2016 4:23 PM
> To: Russell Bryant; Daniel P. Berrange; openstack-
> d...@lists.openstack.org
> Cc: Jay Pipes; Mooney, Sean K; Sahid Orentino Ferdjaoui; Maxime Leroy
> Subject: RE: [nova][neutron][os-vif] os-vif core review team membership
> 
> 
> 
> > -Original Message-
> > From: Russell Bryant [mailto:rbry...@redhat.com]
> > Sent: Tuesday, January 12, 2016 5:24 PM
> > To: Daniel P. Berrange <berra...@redhat.com>; openstack-
> > d...@lists.openstack.org
> > Cc: Jay Pipes <jaypi...@gmail.com>; Sean Mooney
> > <sean.k.moo...@intel.com>; Moshe Levi <mosh...@mellanox.com>; Sahid
> > Orentino Ferdjaoui <sahid.ferdja...@redhat.com>; Maxime Leroy
> > <maxime.le...@6wind.com>
> > Subject: Re: [nova][neutron][os-vif] os-vif core review team
> > membership
> >
> > On 01/12/2016 10:15 AM, Daniel P. Berrange wrote:
> > > So far myself & Jay Pipes have been working on the initial os-vif
> > > prototype and setting up infrastructure for the project. Obviously
> > > we need more then just 2 people on a core team, and after looking at
> > > those who've expressed interest in os-vif, we came up with a
> > > cross-section of contributors across the Nova, Neutron and NFV
> > > spaces to be the initial core team:
> > >
> > >   Jay Pipes
> > >   Daniel Berrange
> > >   Sean Mooney
> > >   Moshe Levi
> > >   Russell Bryant
> > >   Sahid Ferdjaoui
> > >   Maxime Leroy
> > >
> > > So unless anyone wishes to decline the offer, once infra actually
> > > add me to the os-vif-core team I'll be making these people os-vif
> > > core, so we can move forward with the work on the library...
> >
> > Thanks, I'm happy to help.
> Same here.
I would be happy to help work on moving os-vif forward in whatever way I can.
Thank you for the invitation. I will not be able to travel to the nova
midcycle to discuss os-vif; however, one of the engineers I work with
(sfinucan) will be there.
Will the status of the oslo.privsep work be discussed?
Regards,
sean
> >
> > --
> > Russell Bryant


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovs-dpdk] availability

2016-01-03 Thread Mooney, Sean K
Hi,
It is possible to install and configure networking-ovs-dpdk without
using the devstack plugin; however, we currently do not have documentation
for this process.
The openstack configuration required to enable the ovs agent and ml2 driver
provided by networking-ovs-dpdk
is minimal. The compilation, installation, configuration and daemonisation of
ovs with dpdk is significantly more involved,
which was the motivation for creating the devstack plugin originally.

If a manual install guide is something you would like, the best way to provide
and track
the feature request is simply to open a bug at
https://bugs.launchpad.net/networking-ovs-dpdk

A manual install guide is something we are considering adding, but we are
currently prioritizing adding support for deploying
via puppet. We are also working with others in opnfv to integrate our
puppet-module-based install with fuel via a new fuel plugin.
This work is aiming to support a liberty-based install.

Regarding packaging, one of the items I had hoped to do before the holiday
break was
to release the current tags of the networking-ovs-dpdk repo to pypi.

Before publishing to pypi I will be changing our tagging scheme from
openstack's older date-based versions to the newer semantic scheme.
I hope to publish the packages to pypi later this week and will also be
tagging new releases of both the kilo and liberty stable branches in the next
2-4 weeks.

In the new format the 1.X releases will correspond to kilo (2015.1 -> 1.0,
2015.1.1 -> 1.1),
2.X will correspond to liberty, and
3.X will correspond to mitaka.
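Once those are published, deployers could pin to a release series with pip,
e.g. (assuming the pypi name matches the repo name):

    pip install 'networking-ovs-dpdk>=2.0,<3.0'   # liberty-compatible series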

It should be noted that currently, with mitaka, networking-ovs-dpdk is only
required to provide
security group support. Base support for booting vms with vhost-user and ovs
with dpdk has been merged into
upstream neutron as part of the standard neutron ovs agent and ml2 driver.

We hope to contribute the openflow security group driver to neutron before
mitaka-2 for review.
If that is accepted in the mitaka cycle, networking-ovs-dpdk will only contain
deployment code,
as all runtime features will be upstream.

Regards
sean

-Original Message-
From: Martinx - ジェームズ [mailto:thiagocmarti...@gmail.com] 
Sent: Monday, January 4, 2016 2:00 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [networking-ovs-dpdk] availability

On 3 January 2016 at 23:41, Palanisamy, Prabhakaran (Contractor) 
 wrote:
> Hi,
>
>
>
> Is networking-ovs-dpdk package available non devstack installation ??
>
>
>
> Thanks,
>
> PP

Hi,

Don't know if it will help you but, Ubuntu Xenial (development branch), have 
interesting packages:

root@xenial-1:~# apt-cache search ovs dpdk
openvswitch-switch-dpdk - DPDK enabled Open vSwitch switch implementation
python-networking-ovs-dpdk - OpenStack virtual network service - Open vSwitch DPDK ML2 mechanism driver

Also, Xenial will support DPDK 2.2 natively, for at least, 5 years.

I'm trying to experiment Mitaka beta release on Xenial these days...

Best,
Thiago

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with Unexpected vif_type=binding_failed

2015-11-27 Thread Mooney, Sean K
For kilo we provided a single-node all-in-one example config here:
https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/_downloads/local.conf_example

I have modified that to be a controller with the interfaces and ips from your
controller local.conf.
I do not have any kilo compute local.conf to hand, but I modified an old
compute local.conf so that it should work
using the ip and interface settings from your compute local.conf.


Regards
Sean.

From: Praveen MANKARA RADHAKRISHNAN [mailto:praveen.mank...@6wind.com]
Sent: Friday, November 27, 2015 9:21 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with 
Unexpected vif_type=binding_failed

Hi Sean,

I have changed the hostname on both machines
and tried again; I still have the same error.

I am trying to configure ovs-dpdk with vlan now.
For the kilo version the getting started guide was missing in the repository,
but I have changed the repositories everywhere to kilo.

Please find attached the local.conf for compute and controller.

One change I have made is that I have added the ml2 plugin as vlan for the
compute config also,
because if I use the local.confs exactly as in the example, the controller was
vlan and the compute was taking vxlan for the ml2 config.

And please find all the errors present in the compute and controller.

Thanks
Praveen

On Thu, Nov 26, 2015 at 5:58 PM, Mooney, Sean K 
<sean.k.moo...@intel.com<mailto:sean.k.moo...@intel.com>> wrote:
Openstack uses the hostname as a primary key in many of the projects.
Nova and neutron both do this.
If you had two nodes with the same host name, it would cause undefined
behavior.

Based on the error Andreas highlighted, are you currently trying to configure
ovs-dpdk with vxlan/gre?

I also noticed that the getting started guide you linked to earlier was for
the master branch (mitaka), but
you mentioned you were deploying kilo.
The local.conf settings will be different in the two cases.





-Original Message-
From: Andreas Scheuring 
[mailto:scheu...@linux.vnet.ibm.com<mailto:scheu...@linux.vnet.ibm.com>]
Sent: Thursday, November 26, 2015 1:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with 
Unexpected vif_type=binding_failed

Praveen,
there are many error in your q-svc log.
It says:

InvalidInput: Invalid input for operation: (u'Tunnel IP %(ip)s in use with host 
%(host)s', {'ip': u'10.81.1.150', 'host':
u'localhost.localdomain'}).\n"]


Did you maybe specify duplicated ips in your controllers and compute nodes 
neutron tunnel config?

Or did you change the hostname after installation

Or maybe the code has trouble with duplicated host names?

--
Andreas
(IRC: scheuran)



On Di, 2015-11-24 at 15:28 +0100, Praveen MANKARA RADHAKRISHNAN wrote:
> Hi Sean,
>
>
> Thanks for the reply.
>
>
> Please find the logs attached.
> ovs-dpdk is correctly running in compute.
>
>
> Thanks
> Praveen
>
> On Tue, Nov 24, 2015 at 3:04 PM, Mooney, Sean K
> <sean.k.moo...@intel.com<mailto:sean.k.moo...@intel.com>> wrote:
> Hi would you be able to attach the
>
> n-cpu log form the computenode  and  the
>
> n-sch and q-svc logs for the controller so we can see if there
> is a stack trace relating to the
>
> vm boot.
>
>
>
> Also can you confirm ovs-dpdk is running correctly on the
> compute node by running
>
> sudo service ovs-dpdk status
>
>
>
> the neutron and networking-ovs-dpdk commits are from their
> respective stable/kilo branches so they should be compatible
>
> provided no breaking changes have been merged to either
> branch.
>
>
>
> regards
>
> sean.
>
>
>
> From: Praveen MANKARA RADHAKRISHNAN
> [mailto:praveen.mank...@6wind.com<mailto:praveen.mank...@6wind.com>]
> Sent: Tuesday, November 24, 2015 1:39 PM
> To: OpenStack Development Mailing List (not for usage
> questions)
> Subject: Re: [openstack-dev] [networking-ovs-dpdk] VM creation
> fails with Unexpected vif_type=binding_failed
>
>
>
> Hi Przemek,
>
>
>
>
> Thanks For the response,
>
>
>
>
>
> Here are the commit ids for Neutron and networking-ovs-dpdk
>
>
>
>
>
> [stack@localhost neutron]$ git log --format="%H" -n 1
>
>
> 026bfc6421da796075f71a9ad4378674f619193d
>
>
> [stack@localhost neutron]$ cd ..
>
>
> [stack@localhost ~]$ cd networking-ovs-dpdk/
>
>
> [stack@localhost networking-ovs-dpdk]$  git log --format=

Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with Unexpected vif_type=binding_failed

2015-11-26 Thread Mooney, Sean K
Hmm, when you say dpdk interface, do you mean that the dpdk physical interface
is not receiving any packets, or a vhost-user interface?

Can you provide the output of ovs-vsctl show
and sudo /opt/stack/DPDK-v2.1.0/tools/dpdk_nic_bind.py --status?

You should see an output similar to this.
Network devices using DPDK-compatible driver
============================================
0000:02:00.0 'Ethernet Controller XL710 for 40GbE QSFP+' drv=igb_uio unused=i40e

Network devices using kernel driver
===================================
0000:02:00.1 'Ethernet Controller XL710 for 40GbE QSFP+' if=ens785f1 drv=i40e unused=igb_uio
0000:02:00.2 'Ethernet Controller XL710 for 40GbE QSFP+' if=ens785f2 drv=i40e unused=igb_uio
0000:02:00.3 'Ethernet Controller XL710 for 40GbE QSFP+' if=ens785f3 drv=i40e unused=igb_uio
0000:06:00.0 'I350 Gigabit Network Connection' if=enp6s0f0 drv=igb unused=igb_uio
0000:06:00.1 'I350 Gigabit Network Connection' if=enp6s0f1 drv=igb unused=igb_uio

Other network devices
=====================



From: Praveen MANKARA RADHAKRISHNAN [mailto:praveen.mank...@6wind.com]
Sent: Thursday, November 26, 2015 9:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with 
Unexpected vif_type=binding_failed

Hi,

One thing I observed is that there are no packets coming to the dpdk interface
(data network).
I have verified it with tcpdump using a mirror interface,
and if I assign ip addresses to the data network bridges and ping between
them, that also does not work.
Could this be a possible cause for the nova exception? (NovaException:
Unexpected vif_type=binding_failed)

Thanks
Praveen

On Tue, Nov 24, 2015 at 3:28 PM, Praveen MANKARA RADHAKRISHNAN 
<praveen.mank...@6wind.com<mailto:praveen.mank...@6wind.com>> wrote:
Hi Sean,

Thanks for the reply.

Please find the logs attached.
ovs-dpdk is correctly running in compute.

Thanks
Praveen

On Tue, Nov 24, 2015 at 3:04 PM, Mooney, Sean K 
<sean.k.moo...@intel.com<mailto:sean.k.moo...@intel.com>> wrote:
Hi would you be able to attach the
n-cpu log form the computenode  and  the
n-sch and q-svc logs for the controller so we can see if there is a stack trace 
relating to the
vm boot.

Also can you confirm ovs-dpdk is running correctly on the compute node by 
running
sudo service ovs-dpdk status

the neutron and networking-ovs-dpdk commits are from their respective 
stable/kilo branches so they should be compatible
provided no breaking changes have been merged to either branch.

regards
sean.

From: Praveen MANKARA RADHAKRISHNAN 
[mailto:praveen.mank...@6wind.com<mailto:praveen.mank...@6wind.com>]
Sent: Tuesday, November 24, 2015 1:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with 
Unexpected vif_type=binding_failed

Hi Przemek,

Thanks For the response,

Here are the commit ids for Neutron and networking-ovs-dpdk

[stack@localhost neutron]$ git log --format="%H" -n 1
026bfc6421da796075f71a9ad4378674f619193d
[stack@localhost neutron]$ cd ..
[stack@localhost ~]$ cd networking-ovs-dpdk/
[stack@localhost networking-ovs-dpdk]$  git log --format="%H" -n 1
90dd03a76a7e30cf76ecc657f23be8371b1181d2

The Neutron agents are up and running in compute node.

Thanks
Praveen


On Tue, Nov 24, 2015 at 12:57 PM, Czesnowicz, Przemyslaw 
<przemyslaw.czesnow...@intel.com<mailto:przemyslaw.czesnow...@intel.com>> wrote:
Hi Praveen,

There’s been some changes recently to networking-ovs-dpdk, it no longer host’s 
a mech driver as the openviswitch mech driver in Neutron supports vhost-user 
ports.
I guess something went wrong and the version of Neutron is not matching 
networking-ovs-dpdk. Can you post commit ids of Neutron and networking-ovs-dpdk.

The other possibility is that the Neutron agent is not running/died on the 
compute node.
Check with:
neutron agent-list

Przemek

From: Praveen MANKARA RADHAKRISHNAN 
[mailto:praveen.mank...@6wind.com<mailto:praveen.mank...@6wind.com>]
Sent: Tuesday, November 24, 2015 12:18 PM
To: openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
Subject: [openstack-dev] [networking-ovs-dpdk] VM creation fails with 
Unexpected vif_type=binding_failed

Hi,

Am trying to set up an open stack (kilo) installation using ovs-dpdk through 
devstack installation.

I have followed the " 
https://github.com/openstack/networking-ovs-dpdk/blob/master/doc/source/getstarted.rst
 " documentation.

I used the same versions as in documentation (fedora21, with right kernel).

My openstack installation is successful in both controller and compute.
I have used example local.conf given in the documentation.
But if i try to spawn the VM. I am having the following error.

"NovaException: Unexpected vif_type=binding_failed"

It

Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with Unexpected vif_type=binding_failed

2015-11-26 Thread Mooney, Sean K
Openstack uses the hostname as a primary key in many of the projects.
Nova and neutron both do this.
If you had two nodes with the same host name, it would cause undefined
behavior.

Based on the error Andreas highlighted, are you currently trying to configure
ovs-dpdk with vxlan/gre?

I also noticed that the getting started guide you linked to earlier was for
the master branch (mitaka), but
you mentioned you were deploying kilo.
The local.conf settings will be different in the two cases.





-Original Message-
From: Andreas Scheuring [mailto:scheu...@linux.vnet.ibm.com] 
Sent: Thursday, November 26, 2015 1:43 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with 
Unexpected vif_type=binding_failed

Praveen,
there are many error in your q-svc log.
It says:

InvalidInput: Invalid input for operation: (u'Tunnel IP %(ip)s in use with host 
%(host)s', {'ip': u'10.81.1.150', 'host':
u'localhost.localdomain'}).\n"]


Did you maybe specify duplicated ips in your controllers and compute nodes 
neutron tunnel config?

Or did you change the hostname after installation

Or maybe the code has trouble with duplicated host names?

--
Andreas
(IRC: scheuran)



On Di, 2015-11-24 at 15:28 +0100, Praveen MANKARA RADHAKRISHNAN wrote:
> Hi Sean, 
> 
> 
> Thanks for the reply. 
> 
> 
> Please find the logs attached. 
> ovs-dpdk is correctly running in compute.
> 
> 
> Thanks
> Praveen 
> 
> On Tue, Nov 24, 2015 at 3:04 PM, Mooney, Sean K
> <sean.k.moo...@intel.com> wrote:
> Hi would you be able to attach the
> 
> n-cpu log form the computenode  and  the
> 
> n-sch and q-svc logs for the controller so we can see if there
> is a stack trace relating to the
> 
> vm boot.
> 
>  
> 
> Also can you confirm ovs-dpdk is running correctly on the
> compute node by running 
> 
> sudo service ovs-dpdk status
> 
>  
> 
> the neutron and networking-ovs-dpdk commits are from their
> respective stable/kilo branches so they should be compatible
> 
> provided no breaking changes have been merged to either
> branch.
> 
>  
> 
> regards
> 
> sean.
> 
>  
> 
> From: Praveen MANKARA RADHAKRISHNAN
> [mailto:praveen.mank...@6wind.com] 
> Sent: Tuesday, November 24, 2015 1:39 PM
> To: OpenStack Development Mailing List (not for usage
> questions)
> Subject: Re: [openstack-dev] [networking-ovs-dpdk] VM creation
> fails with Unexpected vif_type=binding_failed
> 
>  
> 
> Hi Przemek,
> 
>  
> 
> 
> Thanks For the response, 
> 
> 
>  
> 
> 
> Here are the commit ids for Neutron and networking-ovs-dpdk 
> 
> 
>  
> 
> 
> [stack@localhost neutron]$ git log --format="%H" -n 1
> 
> 
> 026bfc6421da796075f71a9ad4378674f619193d
> 
> 
> [stack@localhost neutron]$ cd ..
> 
> 
> [stack@localhost ~]$ cd networking-ovs-dpdk/
> 
> 
> [stack@localhost networking-ovs-dpdk]$  git log --format="%H"
> -n 1
> 
> 
> 90dd03a76a7e30cf76ecc657f23be8371b1181d2
> 
> 
>  
> 
> 
> The Neutron agents are up and running in compute node. 
> 
> 
>  
> 
> 
> Thanks 
> 
> 
> Praveen
> 
> 
>  
> 
> 
>  
> 
> On Tue, Nov 24, 2015 at 12:57 PM, Czesnowicz, Przemyslaw
> <przemyslaw.czesnow...@intel.com> wrote:
> 
> Hi Praveen,
> 
>  
> 
> There’s been some changes recently to
> networking-ovs-dpdk, it no longer host’s a mech driver
> as the openviswitch mech driver in Neutron supports
> vhost-user ports.
> 
> I guess something went wrong and the version of
> Neutron is not matching networking-ovs-dpdk. Can you
> post commit ids of Neutron and networking-ovs-dpdk.
> 
> 

Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with Unexpected vif_type=binding_failed

2015-11-24 Thread Mooney, Sean K
Hi, would you be able to attach the
n-cpu log from the compute node and the
n-sch and q-svc logs from the controller, so we can see if there is a stack
trace relating to the vm boot?

Also can you confirm ovs-dpdk is running correctly on the compute node by 
running
sudo service ovs-dpdk status

the neutron and networking-ovs-dpdk commits are from their respective 
stable/kilo branches so they should be compatible
provided no breaking changes have been merged to either branch.

regards
sean.

From: Praveen MANKARA RADHAKRISHNAN [mailto:praveen.mank...@6wind.com]
Sent: Tuesday, November 24, 2015 1:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-ovs-dpdk] VM creation fails with 
Unexpected vif_type=binding_failed

Hi Przemek,

Thanks For the response,

Here are the commit ids for Neutron and networking-ovs-dpdk

[stack@localhost neutron]$ git log --format="%H" -n 1
026bfc6421da796075f71a9ad4378674f619193d
[stack@localhost neutron]$ cd ..
[stack@localhost ~]$ cd networking-ovs-dpdk/
[stack@localhost networking-ovs-dpdk]$  git log --format="%H" -n 1
90dd03a76a7e30cf76ecc657f23be8371b1181d2

The Neutron agents are up and running in compute node.

Thanks
Praveen


On Tue, Nov 24, 2015 at 12:57 PM, Czesnowicz, Przemyslaw 
> wrote:
Hi Praveen,

There’s been some changes recently to networking-ovs-dpdk, it no longer host’s 
a mech driver as the openviswitch mech driver in Neutron supports vhost-user 
ports.
I guess something went wrong and the version of Neutron is not matching 
networking-ovs-dpdk. Can you post commit ids of Neutron and networking-ovs-dpdk.

The other possibility is that the Neutron agent is not running/died on the 
compute node.
Check with:
neutron agent-list

Przemek

From: Praveen MANKARA RADHAKRISHNAN 
[mailto:praveen.mank...@6wind.com]
Sent: Tuesday, November 24, 2015 12:18 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [networking-ovs-dpdk] VM creation fails with 
Unexpected vif_type=binding_failed

Hi,

Am trying to set up an open stack (kilo) installation using ovs-dpdk through 
devstack installation.

I have followed the " 
https://github.com/openstack/networking-ovs-dpdk/blob/master/doc/source/getstarted.rst
 " documentation.

I used the same versions as in documentation (fedora21, with right kernel).

My openstack installation is successful in both controller and compute.
I have used example local.conf given in the documentation.
But if i try to spawn the VM. I am having the following error.

"NovaException: Unexpected vif_type=binding_failed"

It would be really helpful if you can point out how to debug and fix this error.

Thanks
Praveen


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovs-dpdk]

2015-11-24 Thread Mooney, Sean K
Out of interest,
have you removed apparmor or placed all the libvirt apparmor profiles into
complain mode?

If not, you will get permission-denied errors.

You can confirm by checking dmesg to see if you have any permission-denied
messages from apparmor,
or run aa-status and see if the libvirt profile is in enforce/complain mode.
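For example:

    $ dmesg | grep -i apparmor   # look for DENIED entries mentioning qemu/libvirt
    $ sudo aa-status             # lists profiles in enforce vs complain mode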

The /tmp/qemu.orig file is just a file we write the original qemu command to
for debugging. It is not needed,
but all users should be able to read/write to /tmp.

We wrap the qemu/kvm binary with a script that on Ubuntu can be found at
/usr/bin/kvm.

If you comment out echo "qemu ${args[@]}" > /tmp/qemu.orig in this script, it
will silence that warning.

https://github.com/openstack/networking-ovs-dpdk/blob/master/devstack/libs/ovs-dpdk#L104

I may remove this from our wrapper script as we almost never use it for
debugging anymore; however, in the past it was
useful to compare the original qemu command line and the updated qemu command
line.

I don't know if I have mentioned this before, but we also have an Ubuntu
version of our getting started guide that should merge shortly:

https://review.openstack.org/#/c/243190/6/doc/source/getstarted/ubuntu.rst

Regards
Sean.

From: Prathyusha Guduri [mailto:prathyushaconne...@gmail.com]
Sent: Tuesday, November 24, 2015 12:42 PM
To: Mooney, Sean K
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-ovs-dpdk]

Hi All,

I also found another error while launching an instance.

libvirtError: internal error: process exited while connecting to monitor: 
/usr/bin/kvm-spice: line 42: /tmp/qemu.orig: Permission denied
I dont want to change any permissions manually and again face the dependency 
issues. So kindly help
Thanks,
Prathyusha



On Tue, Nov 24, 2015 at 4:02 PM, Prathyusha Guduri 
<prathyushaconne...@gmail.com<mailto:prathyushaconne...@gmail.com>> wrote:
Hi Sean,
Thanks for you kind help.
I did the following.

# apt-get install ubuntu-cloud-keyring
# echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu" \
"trusty-updates/kilo main" > /etc/apt/sources.list.d/cloudarchive-kilo.list
# apt-get update && apt-get dist-upgrade
and then uninstalled the libvirt and qemu that were installed manually and then 
ran stack.sh after cleaning and unstacking.
Now fortunately libvirt and qemu satisfy minimum requirements.

$ virsh --version
1.2.12

$ kvm --version
/usr/bin/kvm: line 42: /tmp/qemu.orig: Permission denied
QEMU emulator version 2.2.0 (Debian 1:2.2+dfsg-5expubuntu9.3~cloud0), Copyright 
(c) 2003-2008 Fabrice Bellard

Am using an ubuntu 14.04 system
$ uname -a
Linux ubuntu-Precision-Tower-5810 3.13.0-24-generic #46-Ubuntu SMP Thu Apr 10 
19:11:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
After stack.sh which was successful, tried creating a new instance - which gave 
an ERROR again.

$ nova list
+--------------------------------------+----------------+--------+------------+-------------+-------------------------------------------------------+
| ID                                   | Name           | Status | Task State | Power State | Networks                                              |
+--------------------------------------+----------------+--------+------------+-------------+-------------------------------------------------------+
| 31a7e160-d04c-4216-91cf-30ce86c2b1fa | demo-instance1 | ERROR  | -          | NOSTATE     | private=10.0.0.3, fd34:f4c5:412:0:f816:3eff:fea4:b9fe |
+--------------------------------------+----------------+--------+------------+-------------+-------------------------------------------------------+

$ sudo service ovs-dpdk status
sourcing config
ovs alive
VHOST_CONFIG: bind to /var/run/openvswitch/vhufb8052e5-d3
2015-11-24T10:23:25Z|00126|dpdk|INFO|Socket /var/run/openvswitch/vhufb8052e5-d3 
created for vhost-user port vhufb8052e5-d3
2015-11-24T10:23:25Z|4|dpif_netdev(pmd18)|INFO|Core 2 processing port 
'vhufb8052e5-d3'
2015-11-24T10:23:25Z|2|dpif_netdev(pmd19)|INFO|Core 8 processing port 
'dpdk0'
2015-11-24T10:23:25Z|00127|bridge|INFO|bridge br-int: added interface 
vhufb8052e5-d3 on port 6
2015-11-24T10:23:25Z|5|dpif_netdev(pmd18)|INFO|Core 2 processing port 
'dpdk0'
2015-11-24T10:23:26Z|00128|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 
0 s (1 deletes)
2015-11-24T10:23:26Z|00129|ofp_util|INFO|normalization changed ofp_match, 
details:
2015-11-24T10:23:26Z|00130|ofp_util|INFO| pre: in_port=5,nw_proto=58,tp_src=136
2015-11-24T10:23:26Z|00131|ofp_util|INFO|post: in_port=5
2015-11-24T10:23:26Z|00132|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 
0 s (1 deletes)
2015-11-24T10:23:26Z|00133|connmgr|INFO|br-int<->unix: 1 flow_mods in the last 
0 s (1 deletes)
2015-11-24T10:23:29Z|00134|bridge|WARN|could not open network device 
vhufb8052e5-d3 (No such device)
VHOST_CONFIG: socket created, fd:52
VHOST_CONFIG: bind to /var/run/openvswitch/vhufb8052e5-d3
2015-11-24T10:23:29Z|00135|d

Re: [openstack-dev] [networking-ovs-dpdk]

2015-11-23 Thread Mooney, Sean K
2015-11-23 13:19:59.654 TRACE nova.virt.libvirt.host   File 
"/usr/local/lib/python2.7/dist-packages/eventlet/tpool.py", line 80, in tworker
2015-11-23 13:19:59.654 TRACE nova.virt.libvirt.host rv = meth(*args, 
**kwargs)
2015-11-23 13:19:59.654 TRACE nova.virt.libvirt.host   File 
"/usr/local/lib/python2.7/dist-packages/libvirt.py", line 105, in openAuth
2015-11-23 13:19:59.654 TRACE nova.virt.libvirt.host if ret is None:raise 
libvirtError('virConnectOpenAuth() failed')
2015-11-23 13:19:59.654 TRACE nova.virt.libvirt.host libvirtError: error from 
service: CheckAuthorization: Did not receive a reply. Possible causes include: 
the remote application did not send a reply, the message bus security policy 
blocked the reply, the reply timeout expired, or the network connection was 
broken.
2015-11-23 13:19:59.654 TRACE nova.virt.libvirt.host
Traceback (most recent call last):
I suspect this is because I manually installed libvirt and qemu. My doubt is
why devstack is not installing a correct version when it is supposed to. Why
is a version less than the minimum requirement being installed?
Because I am installing manually, there might be a problem with groups:
devstack creates some groups when it installs, but a manual installation
doesn't bother with those groups.
Can you please suggest a way to avoid that?

Also, I just want to make sure that the agent running is neutron-openvswitch 
only. No ovsdpdk agent running.
$ ps -Al | grep neutron
0 S  1000  8882  8859  3  80   0 - 49946 ep_pol pts/34   00:02:24 
neutron-openvsw
But
$ neutron agent-list
3385a430-5738-43cb-b853-059add5ab602 | DPDK OVS Agent | 
ubuntu-Precision-Tower-5810 | :-)   | True   | neutron-openvswitch-agent
So this implies that the dpdk agent is running, right? I remember reading in
launchpad bugs that the ovsdpdk agent was removed and that now openvswitch
takes care of everything. I just wanted to confirm that my setup has ovs-dpdk
running.
Regards,
Prathyusha


On Wed, Nov 18, 2015 at 7:23 PM, James Page 
<james.p...@ubuntu.com<mailto:james.p...@ubuntu.com>> wrote:
Hi Sean

On Wed, Nov 18, 2015 at 12:30 PM, Mooney, Sean K 
<sean.k.moo...@intel.com<mailto:sean.k.moo...@intel.com>> wrote:
Hi james
Yes we are planning on testing the packaged release to see if it is compatible 
with our ml2 driver and the
Changes we are submitting upstream. If it is we will add a use binary flag to 
our devstack plugin to skip the
Compilation step and use that instead on 15.10 or 14.04 cloud-archive:liberty

Excellent.

As part of your packaging did ye fix pciutils to correctly report the unused 
drivers when an interface is bound
The dpdk driver? Also does it support both igb_uio and/or vfio-pci drivers for 
dpdk interface?

Re pcituils, we've not done any work in that area - can you give an example of 
what you would expect?

The dpdk package supports both driver types in /etc/dpdk/interfaces - when you 
declare an adapter for use, you get to specify the module you want to use as 
well; we're relying the in-tree kernel drivers (uio-pci-generic and vfio-pci) 
right now.


Anyway yes I hope to check it out and seeing what ye have done. When ovs-dpdk 
starts getting packaged in more operating systems
We will probably swap our default to the binary install though we will keep the 
source install option as it allows us to work on new features
Before they are packaged and to have better performance.

That sounds sensible; re 'better performance' - yeah we do have to baseline the 
optimizations at compile time right now (ssse3 only right now) , but I really 
hope that does change so that we can move to a runtime CPU feature detection 
model, allowing the best possible performance through the packages we have in 
Ubuntu (or any other distribution for that matter).


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-ovs-dpdk]

2015-11-18 Thread Mooney, Sean K
Hi, that is great to know.
I will internally report this behavior to our dpdk team,
but I have already got a patch to change our default target to native-linuxapp:
https://review.openstack.org/#/c/246375/ which should merge shortly.
I'm glad it is now working for you.

From: Prathyusha Guduri [mailto:prathyushaconne...@gmail.com]
Sent: Wednesday, November 18, 2015 6:13 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-ovs-dpdk]

Thanks a lot Sean, that was helpful.
Changing the target from ivshmem to native-linuxapp removed the error and it 
doesn't hang at creating external bridge anymore.
All processes(nova-api, neutron, ovs-vswitchd, etc) did start.
Thanks,
Prathyusha

On Tue, Nov 17, 2015 at 7:57 PM, Mooney, Sean K 
<sean.k.moo...@intel.com<mailto:sean.k.moo...@intel.com>> wrote:
We mainly test with 2M hugepages, not 1G; however, our ci does use 1G pages.
We recently noticed a different but unrelated issue with using the ivshmem
target when building dpdk
(https://bugs.launchpad.net/networking-ovs-dpdk/+bug/1517032).

Instead of modifying dpdk, can you try
changing the default dpdk build target to x86_64-native-linuxapp-gcc?

This can be done by adding
RTE_TARGET=x86_64-native-linuxapp-gcc to the local.conf
and removing the file "/opt/stack/ovs/BUILD_COMPLETE" to force a rebuild.
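That is, something like:

    # in local.conf
    RTE_TARGET=x86_64-native-linuxapp-gcc

    # then force the devstack plugin to rebuild ovs/dpdk
    $ rm /opt/stack/ovs/BUILD_COMPLETE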

I agree with your assessment, though: this appears to be a timing issue in
dpdk 2.0.



From: Prathyusha Guduri 
[mailto:prathyushaconne...@gmail.com<mailto:prathyushaconne...@gmail.com>]
Sent: Tuesday, November 17, 2015 1:42 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-ovs-dpdk]

Here is stack.sh log -

2015-11-17 13:38:50.010 | Loading uio module
2015-11-17 13:38:50.028 | Loading DPDK UIO module
2015-11-17 13:38:50.038 | starting ovs db
2015-11-17 13:38:50.038 | binding nics
2015-11-17 13:38:50.039 | starting vswitchd
2015-11-17 13:38:50.190 | sudo RTE_SDK=/opt/stack/DPDK-v2.0.0 RTE_TARGET=build 
/opt/stack/DPDK-v2.0.0/tools/dpdk_nic_bind.py -b igb_uio :07:00.0
2015-11-17 13:38:50.527 | sudo ovs-vsctl --no-wait --may-exist add-port br-eth1 
dpdk0 -- set Interface dpdk0 type=dpdk
2015-11-17 13:38:51.671 | Waiting for ovs-vswitchd to start...
2015-11-17 13:38:52.685 | Waiting for ovs-vswitchd to start...
2015-11-17 13:38:53.702 | Waiting for ovs-vswitchd to start...
2015-11-17 13:38:54.720 | Waiting for ovs-vswitchd to start...
2015-11-17 13:38:55.733 | Waiting for ovs-vswitchd to start...
2015-11-17 13:38:56.749 | Waiting for ovs-vswitchd to start...
2015-11-17 13:38:57.768 | Waiting for ovs-vswitchd to start...
2015-11-17 13:38:58.787 | Waiting for ovs-vswitchd to start...
2015-11-17 13:38:59.802 | Waiting for ovs-vswitchd to start...
2015-11-17 13:39:00.818 | Waiting for ovs-vswitchd to start...
2015-11-17 13:39:01.836 | Waiting for ovs-vswitchd to start...
2015-11-17 13:39:02.849 | Waiting for ovs-vswitchd to start...
2015-11-17 13:39:03.866 | Waiting for ovs-vswitchd to start...
2015-11-17 13:39:04.884 | Waiting for ovs-vswitchd to start...
2015-11-17 13:39:05.905 | Waiting for ovs-vswitchd to start...
2015-11-17 13:39:06.923 | Waiting for ovs-vswitchd to start...
2015-11-17 13:39:07.937 | Waiting for ovs-vswitchd to start...
2015-11-17 13:39:08.956 | Waiting for ovs-vswitchd to start...
2015-11-17 13:39:09.973 | Waiting for ovs-vswitchd to start...
2015-11-17 13:39:10.988 | Waiting for ovs-vswitchd to start...
2015-11-17 13:39:12.004 | Waiting for ovs-vswitchd to start...
2015-11-17 13:39:13.022 | Waiting for ovs-vswitchd to start...
2015-11-17 13:39:14.040 | Waiting for ovs-vswitchd to start...
2015-11-17 13:39:15.060 | Waiting for ovs-vswitchd to start...
2015-11-17 13:39:16.073 | Waiting for ovs-vswitchd to start...
2015-11-17 13:39:17.089 | Waiting for ovs-vswitchd to start...
2015-11-17 13:39:18.108 | Waiting for ovs-vswitchd to start...
2015-11-17 13:39:19.121 | Waiting for ovs-vswitchd to start...
2015-11-17 13:39:20.138 | Waiting for ovs-vswitchd to start...
2015-11-17 13:39:21.156 | Waiting for ovs-vswitchd to start...
2015-11-17 13:39:22.169 | Waiting for ovs-vswitchd to start...
2015-11-17 13:39:23.185 | Waiting for ovs-vswitchd to start...

On Tue, Nov 17, 2015 at 6:50 PM, Prathyusha Guduri 
<prathyushaconne...@gmail.com<mailto:prathyushaconne...@gmail.com>> wrote:
Hi Sean,
Here is ovs-vswitchd.log

2015-11-13T12:48:01Z|1|dpdk|INFO|User-provided -vhost_sock_dir in use: 
/var/run/openvswitch
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Detected lcore 3 as core 3 on socket 0
EAL: Detected lcore 4 as core 4 on socket 0
EAL: Detected lcore 5 as core 5 on socket 0
EAL: Detected lcore 6 as core 0 on socket 0
EAL: Detected lcore 7 as core 1 on socket 0
EAL: Detected lcore 8 as core 2 on socket 0
EAL: Detected lcore 9 as core 3 on socket 0
EAL: Detected 

Re: [openstack-dev] [networking-ovs-dpdk]

2015-11-17 Thread Mooney, Sean K
Can you provide the ovs-vswitchd log from ${OVS_LOG_DIR}/ovs-vswitchd.log
(/tmp/ovs-vswitchd.log in your case)?

If the vswitch fails to start we clean up by unmounting the hugepages.



From: Prathyusha Guduri [mailto:prathyushaconne...@gmail.com]
Sent: Tuesday, November 17, 2015 7:37 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-ovs-dpdk]

Hi Sean,
While debugging the ovs-dpdk init script, I realised that the main issue is
with the following command:

$ screen -dms ovs-vswitchd sudo sg $qemu_group -c "umask 002; 
${OVS_INSTALL_DIR}/sbin/ovs-vswitchd --dpdk -vhost_sock_dir $OVS_DB_SOCKET_DIR 
-c $OVS_CORE_MASK -n $OVS_MEM_CHANNELS  --proc-type primary  --huge-dir 
$OVS_HUGEPAGE_MOUNT --socket-mem $OVS_SOCKET_MEM $pciAddressWhitelist -- 
unix:$OVS_DB_SOCKET 2>&1 | tee ${OVS_LOG_DIR}/ovs-vswitchd.log"

which I guess starts the ovs-vswitchd application. Before this command, huge
pages are mounted and port binding is also done, but the screen command still
fails.
I verified the db.sock and conf.db files.
Any help is highly appreciated.
Thanks,
Prathyusha


On Mon, Nov 16, 2015 at 5:12 PM, Prathyusha Guduri 
<prathyushaconne...@gmail.com> wrote:
Hi Sean,
Thanks for your response.
in your case though you are using 1GB hugepages so I don’t think this is 
related to memory fragmentation
or a lack of free hugepages.

to use preallocated 1GB page with ovs you should instead set the following in 
your local.conf

OVS_HUGEPAGE_MOUNT_PAGESIZE=1G
OVS_ALLOCATE_HUGEPAGES=False

I added the above two parameters to the local.conf. The same problem occurs
again; basically it throws this error:
2015-11-16 11:31:44.741 | starting vswitchd
2015-11-16 11:31:44.863 | sudo RTE_SDK=/opt/stack/DPDK-v2.0.0 RTE_TARGET=build 
/opt/stack/DPDK-v2.0.0/tools/dpdk_nic_bind.py -b igb_uio 0000:07:00.0
2015-11-16 11:31:45.169 | sudo ovs-vsctl --no-wait --may-exist add-port br-eth1 
dpdk0 -- set Interface dpdk0 type=dpdk
2015-11-16 11:31:46.314 | Waiting for ovs-vswitchd to start...
2015-11-16 11:31:47.442 | libvirt-bin stop/waiting
2015-11-16 11:31:49.473 | libvirt-bin start/running, process 2255
2015-11-16 11:31:49.477 | [ERROR] /etc/init.d/ovs-dpdk:563 ovs-vswitchd 
application failed to start

Manually mounting /mnt/huge and then commenting that part out of the
/etc/init.d/ovs-dpdk script also throws the same error.
Using a 1G hugepage size should not give any memory-related problem; I don't
understand why it is not mounting then.
Here is the /opt/stack/networking-ovs-dpdk/devstack/ovs-dpdk/ovs-dpdk.conf

RTE_SDK=${RTE_SDK:-/opt/stack/DPDK}
RTE_TARGET=${RTE_TARGET:-x86_64-ivshmem-linuxapp-gcc}

OVS_INSTALL_DIR=/usr
OVS_DB_CONF_DIR=/etc/openvswitch
OVS_DB_SOCKET_DIR=/var/run/openvswitch
OVS_DB_CONF=$OVS_DB_CONF_DIR/conf.db
OVS_DB_SOCKET=$OVS_DB_SOCKET_DIR/db.sock

OVS_SOCKET_MEM=2048,2048
OVS_MEM_CHANNELS=4
OVS_CORE_MASK=${OVS_CORE_MASK:-2}
OVS_PMD_CORE_MASK=${OVS_PMD_CORE_MASK:-4}
OVS_LOG_DIR=/tmp
OVS_LOCK_DIR=''
OVS_SRC_DIR=/opt/stack/ovs
OVS_DIR=${OVS_DIR:-${OVS_SRC_DIR}}
OVS_UTILS=${OVS_DIR}/utilities/
OVS_DB_UTILS=${OVS_DIR}/ovsdb/
OVS_DPDK_DIR=$RTE_SDK
OVS_NUM_HUGEPAGES=${OVS_NUM_HUGEPAGES:-5}
OVS_HUGEPAGE_MOUNT=${OVS_HUGEPAGE_MOUNT:-/mnt/huge}
OVS_HUGEPAGE_MOUNT_PAGESIZE=''
OVS_BOND_MODE=$OVS_BOND_MODE
OVS_BOND_PORTS=$OVS_BOND_PORTS
OVS_BRIDGE_MAPPINGS=eth1
OVS_PCI_MAPPINGS=0000:07:00.0#eth1
OVS_DPDK_PORT_MAPPINGS=''
OVS_TUNNEL_CIDR_MAPPING=''
OVS_ALLOCATE_HUGEPAGES=True
OVS_INTERFACE_DRIVER='igb_uio'
I verified OVS_DB_SOCKET_DIR and all the others; conf.db and db.sock exist. So
why is ovs-vswitchd failing to start? Am I missing something?


Thanks,
Prathyusha


On Mon, Nov 16, 2015 at 4:39 PM, Mooney, Sean K 
<sean.k.moo...@intel.com> wrote:

Hi

Yes sorry for the delay in responding to you and samta.

In your case, assuming you are using 2MB hugepages, it is easy to hit DPDK's
default maximum number of memory segments.

This can be changed by setting OVS_DPDK_MEM_SEGMENTS= in the local.conf and
recompiling. To do this, simply remove the build complete file in
/opt/stack/ovs:
rm -f /opt/stack/ovs/BUILD_COMPLETE
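For example, a minimal sketch of the rebuild flow (the OVS_DPDK_MEM_SEGMENTS
value below is purely illustrative; tune it to your hugepage layout):

# in local.conf (illustrative value):
OVS_DPDK_MEM_SEGMENTS=512
# force a rebuild of ovs on the next run:
rm -f /opt/stack/ovs/BUILD_COMPLETE
./stack.sh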

in your case though you are using 1GB hugepages so I don’t think this is 
related to memory fragmentation
or a lack of free hugepages.

to use preallocated 1GB page with ovs you should instead set the following in 
your local.conf

OVS_HUGEPAGE_MOUNT_PAGESIZE=1G
OVS_ALLOCATE_HUGEPAGES=False
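For reference, a minimal sketch of preallocating 1GB pages at boot (these are
the standard hugetlb kernel parameters, not settings from this repo; the page
count is illustrative):

# append to GRUB_CMDLINE_LINUX in /etc/default/grub:
#   default_hugepagesz=1G hugepagesz=1G hugepages=8
sudo update-grub && sudo reboot
# verify after the reboot:
grep Huge /proc/meminfo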

Regards
sean

From: Prathyusha Guduri [mailto:prathyushaconne...@gmail.com]
Sent: Monday, November 16, 2015 6:20 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-ovs-dpdk]

Hi all,

I have a similar problem to Samta's; I am also stuck at the same place. The
following command

$ sudo ovs-vsctl br-set-external-id br-ex bridge-id br-ex

hangs forever. As Sean said, it might be because of the ovs-vswitchd process.
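A quick way to confirm that (commands taken from Sean's earlier advice in this
thread; the log path assumes OVS_LOG_DIR=/tmp as in this setup):

sudo service ovs-dpdk status
ps aux | grep ovs
tail -n 50 /tmp/ovs-vswitchd.log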

Re: [openstack-dev] [networking-ovs-dpdk]

2015-11-17 Thread Mooney, Sean K
5534
2015-11-17 11:26:53.197 | 2015-11-17T11:26:53Z|00011|dpif_netlink|ERR|Generic 
Netlink family 'ovs_datapath' does not exist. The Open vSwitch kernel module is 
probably not loaded.
2015-11-17 11:26:53.287 | Zone 0: name:, phys:0x9b60, 
len:0xb0, virt:0x7fca8ea0, socket_id:0, flags:0
2015-11-17 11:26:53.287 | Zone 1: name:, phys:0x3660, 
len:0x2080, virt:0x7fcab360, socket_id:0, flags:0
2015-11-17 11:26:53.287 | Zone 2: name:, phys:0x9c10, 
len:0x28a0c0, virt:0x7fca8f50, socket_id:0, flags:0
2015-11-17 11:26:53.287 | Zone 3: name:, phys:0x36602080, 
len:0x1f400, virt:0x7fcab3602080, socket_id:0, flags:0
2015-11-17 11:26:53.287 | PMD: eth_em_tx_queue_setup(): sw_ring=0x7fca8f4efd40 
hw_ring=0x7fcab3621480 dma_addr=0x36621480
2015-11-17 11:26:53.287 | PMD: eth_em_rx_queue_setup(): sw_ring=0x7fca8f4ebc40 
hw_ring=0x7fcab3631480 dma_addr=0x36631480
2015-11-17 11:26:53.368 | PMD: eth_em_start(): <<
2015-11-17 11:26:53.368 | 2015-11-17T11:26:53Z|00012|dpdk|INFO|Port 0: 
68:05:ca:1b:ca:c9
2015-11-17 11:26:53.405 | PMD: eth_em_tx_queue_setup(): sw_ring=0x7fca8f4efe00 
hw_ring=0x7fcab3621480 dma_addr=0x36621480
2015-11-17 11:26:53.405 | PMD: eth_em_rx_queue_setup(): sw_ring=0x7fca8f4ebdc0 
hw_ring=0x7fcab3631480 dma_addr=0x36631480
2015-11-17 11:26:53.486 | PMD: eth_em_start(): <<
2015-11-17 11:26:53.486 | 2015-11-17T11:26:53Z|00013|dpdk|INFO|Port 0: 
68:05:ca:1b:ca:c9
2015-11-17 11:26:53.487 | 2015-11-17T11:26:53Z|00014|dpif_netdev|INFO|Created 1 
pmd threads on numa node 0
2015-11-17 11:26:53.487 | 
2015-11-17T11:26:53Z|1|dpif_netdev(pmd10)|INFO|Core 0 processing port 
'dpdk0'
2015-11-17 11:26:53.488 | 
2015-11-17T11:26:53Z|2|dpif_netdev(pmd10)|INFO|Core 0 processing port 
'dpdk0'
2015-11-17 11:26:53.488 | 2015-11-17T11:26:53Z|00015|bridge|INFO|bridge 
br-eth1: added interface dpdk0 on port 1
2015-11-17 11:26:53.488 | 2015-11-17T11:26:53Z|00016|bridge|INFO|bridge br-int: 
added interface br-int on port 65534
2015-11-17 11:26:53.488 | 2015-11-17T11:26:53Z|00017|bridge|INFO|bridge 
br-eth1: using datapath ID 6805ca1bcac9
2015-11-17 11:26:53.488 | 2015-11-17T11:26:53Z|00018|connmgr|INFO|br-eth1: 
added service controller "punix:/var/run/openvswitch/br-eth1.mgmt"
2015-11-17 11:26:53.489 | 2015-11-17T11:26:53Z|00019|bridge|INFO|bridge br-int: 
using datapath ID 2ef7b66a8742
2015-11-17 11:26:53.489 | 2015-11-17T11:26:53Z|00020|connmgr|INFO|br-int: added 
service controller "punix:/var/run/openvswitch/br-int.mgmt"
2015-11-17 11:26:53.490 | 2015-11-17T11:26:53Z|00021|dpif_netdev|INFO|Created 2 
pmd threads on numa node 0
2015-11-17 11:26:53.492 | 2015-11-17T11:26:53Z|00022|bridge|INFO|ovs-vswitchd 
(Open vSwitch) 2.4.90
2015-11-17 11:26:53.493 | 
2015-11-17T11:26:53Z|1|dpif_netdev(pmd23)|INFO|Core 2 processing port 
'dpdk0'
2015-11-17 11:27:03.494 | 2015-11-17T11:27:03Z|00023|memory|INFO|peak resident 
set size grew 93% in last 10.3 seconds, from 10680 kB to 20572 kB
2015-11-17 11:27:03.494 | 2015-11-17T11:27:03Z|00024|memory|INFO|handlers:4 
ports:3 revalidators:2 rules:10
ubuntu@ubuntu-Precision-Tower-5810:/opt/stack/DPDK-v2.0.0/lib/librte_eal/linuxapp/eal$ ps -Al | grep ovs
5 S 0  1681  2595  0  80   0 -  4433 poll_s ?00:00:00 ovsdb-server
4 S 0  1716  1715  0  80   0 -  4636 wait   pts/300:00:00 ovs-dpdk
4 S 0  2124  1716 99  80   0 - 870841 poll_s pts/3   03:42:31 ovs-vswitchd
So now ovs-vswitchd runs, unlike the last time.
I really don't understand what I am missing...

On Tue, Nov 17, 2015 at 5:14 PM, Mooney, Sean K 
<sean.k.moo...@intel.com> wrote:
Can you provide the ovs-vswitchd log from ${OVS_LOG_DIR}/ovs-vswitchd.log
(/tmp/ovs-vswitchd.log in your case)?

If the vswitch fails to start we clean up by unmounting the hugepages.


From: Prathyusha Guduri 
[mailto:prathyushaconne...@gmail.com]
Sent: Tuesday, November 17, 2015 7:37 AM

To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-ovs-dpdk]

Hi Sean,
While debugging the ovs-dpdk init script, I realised that the main issue is
with the following command:

$ screen -dms ovs-vswitchd sudo sg $qemu_group -c "umask 002; 
${OVS_INSTALL_DIR}/sbin/ovs-vswitchd --dpdk -vhost_sock_dir $OVS_DB_SOCKET_DIR 
-c $OVS_CORE_MASK -n $OVS_MEM_CHANNELS  --proc-type primary  --huge-dir 
$OVS_HUGEPAGE_MOUNT --socket-mem $OVS_SOCKET_MEM $pciAddressWhitelist -- 
unix:$OVS_DB_SOCKET 2>&1 | tee ${OVS_LOG_DIR}/ovs-vswitchd.log"
which I guess starts the ovs-vswitchd application. Before this command, huge
pages are mounted and port binding is also done, but the screen command still
fails.
I verified the db.sock and conf.db files.
Any help is highly appreciated.
Thanks,
Prathyusha


On Mon, Nov 16, 2015 at 5:12 PM, Prathyusha Gudur

Re: [openstack-dev] [networking-ovs-dpdk]

2015-11-16 Thread Mooney, Sean K
Network devices using DPDK-compatible driver
============================================
0000:07:00.0 '82574L Gigabit Network Connection' unused=igb_uio

Network devices using kernel driver
===
0000:00:19.0 'Ethernet Connection I217-LM' if=eth0 drv=e1000e unused=igb_uio 
*Active*
0000:06:02.0 '82540EM Gigabit Ethernet Controller' if=eth2 drv=e1000 
unused=igb_uio

Other network devices
=
None
I am using a 1G NIC for the port (eth1) that is bound to dpdk. Is that a
problem? Does the dpdk-bound port necessarily need a 10G NIC? I don't think it
is a problem anyway, because the binding is done. Please correct me if I am
going wrong...
Thanks,
Prathyusha



On Wed, Nov 11, 2015 at 3:52 PM, Samta Rangare 
<samtarang...@gmail.com> wrote:
Hi Sean,

Thanks for replying back, response inline.

On Mon, Nov 9, 2015 at 8:24 PM, Mooney, Sean K 
<sean.k.moo...@intel.com> wrote:
> Hi
> Can you provide some more information regarding your deployment?
>
> Can you check which kernel you are using.
>
> uname -a

Linux ubuntu 3.16.0-50-generic #67~14.04.1-Ubuntu SMP Fri Oct 2 22:07:51 UTC 
2015 x86_64 x86_64 x86_64 GNU/Linux

>
> If you are using a 3.19 kernel, changes to some locking code in the kernel
> broke synchronization in dpdk 2.0 and require dpdk 2.1 to be used instead.
> In general it is not advisable to use a 3.19 kernel with dpdk as it can lead 
> to non-deterministic behavior.
>
> When devstack hangs can you connect with a second ssh session and run
> sudo service ovs-dpdk status
> and
> ps aux | grep ovs
>
sudo service ovs-dpdk status
sourcing config
/opt/stack/logs/ovs-vswitchd.pid is not running
Not all processes are running restart!!!
1
ubuntu@ubuntu:~/samta/devstack$ ps -ef | grep ovs
root 13385 1  0 15:17 ?00:00:00 /usr/sbin/ovsdb-server --detach 
--pidfile=/opt/stack/logs/ovsdb-server.pid 
--remote=punix:/usr/local/var/run/openvswitch/db.sock 
--remote=db:Open_vSwitch,Open_vSwitch,manager_options
ubuntu   24451 12855  0 15:45 pts/000:00:00 grep --color=auto ovs

>
> When the deployment hangs at sudo ovs-vsctl br-set-external-id br-ex 
> bridge-id br-ex
> It usually means that the ovs-vswitchd process has exited.
>
The above result shows that ovs-vswitchd is not running.
> This can happen for a number of reasons.
> The vswitchd process may exit if it  failed to allocate memory (due to memory 
> fragmentation or lack of free hugepages)
> if the ovs-vswitchd.log is not available can you check the the hugepage mount 
> point was created in
> /mnt/huge And that Iis mounted
> Run
> ls -al /mnt/huge
> and
> mount
>
ls -al /mnt/huge
total 4
drwxr-xr-x 2 libvirt-qemu kvm 0 Nov 11 15:18 .
drwxr-xr-x 3 root root 4096 May 15 00:09 ..

ubuntu@ubuntu:~/samta/devstack$ mount
/dev/mapper/ubuntu--vg-root on / type ext4 (rw,errors=remount-ro)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/cgroup type tmpfs (rw)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
none on /run/user type tmpfs (rw,noexec,nosuid,nodev,size=104857600,mode=0755)
none on /sys/fs/pstore type pstore (rw)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu type cgroup (rw,relatime,cpu)
cgroup on /sys/fs/cgroup/cpuacct type cgroup (rw,relatime,cpuacct)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,relatime,freezer)
cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,relatime,net_cls)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,relatime,perf_event)
cgroup on /sys/fs/cgroup/net_prio type cgroup (rw,relatime,net_prio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,relatime,hugetlb)
/dev/sda1 on /boot type ext2 (rw)
systemd on /sys/fs/cgroup/systemd type cgroup 
(rw,noexec,nosuid,nodev,none,name=systemd)
hugetlbfs-kvm on /run/hugepages/kvm type hugetlbfs (rw,mode=775,gid=106)
nodev on /mnt/huge type hugetlbfs (rw,uid=106,gid=106)
nodev on /mnt/huge type hugetlbfs (rw,uid=106,gid=106)

> then checkout how many hugepages are mounted
>
> cat /proc/meminfo | grep huge
>

cat /proc/meminfo | grep Huge
AnonHugePages:292864 kB
HugePages_Total:   5
HugePages_Free:5
HugePages_Rsvd:0
HugePages_

Re: [openstack-dev] [networking-ovs-dpdk] ovs-appctl doesn't function correctly

2015-11-16 Thread Mooney, Sean K
To use the ovs-appctl application you have to specify the socket path to the
ovs-vswitchd process:
ovs-appctl -t /var/run/openvswitch/ovs-vswitchd.<pid>.ctl list-commands
e.g.
ovs-appctl -t /var/run/openvswitch/ovs-vswitchd.10110.ctl list-commands
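If you do not know the vswitchd pid, the control socket can be picked up
dynamically (a sketch assuming the default run directory):

CTL=$(ls /var/run/openvswitch/ovs-vswitchd.*.ctl | head -n 1)
sudo ovs-appctl -t "$CTL" list-commands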



From: Hui Xiang [mailto:hui.xi...@canonical.com]
Sent: Monday, November 16, 2015 10:01 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [networking-ovs-dpdk] ovs-appctl doesn't function 
correctly

Hi all,

  I have managed to set up ovs-dpdk on Ubuntu; the commands
'ovs-vsctl/ovs-ofctl' all work well except 'ovs-appctl'. Could anyone help me
figure it out? Thanks in advance.


xianghui@xianghui:~$ sudo ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x2): dpid:264fa4e1f24f
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src 
mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst
 1(int-br-eth1): addr:2e:23:93:7f:54:14
 config: 0
 state:  0
 speed: 0 Mbps now, 0 Mbps max
 2(tap7a2317aa-90): addr:0c:52:ff:7f:00:00
 config: PORT_DOWN
 state:  LINK_DOWN
 speed: 0 Mbps now, 0 Mbps max
 12(vhu5392206b-dc): addr:00:00:00:00:00:00
 config: PORT_DOWN
 state:  LINK_DOWN
 speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-int): addr:26:4f:a4:e1:f2:4f
 config: PORT_DOWN
 state:  LINK_DOWN
 current:10MB-FD COPPER
 speed: 10 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0

xianghui@xianghui:~$ sudo ovs-appctl vlog/set dbg
2015-11-16T09:50:28Z|1|daemon_unix|WARN|/var/run/openvswitch/ovs-vswitchd.pid:
 open: No such file or directory
ovs-appctl: cannot read pidfile "/var/run/openvswitch/ovs-vswitchd.pid" (No 
such file or directory)

Also tried as below, failed as well.

xianghui@xianghui:~$ sudo ovs-appctl -t /opt/stack/logs/ovs-vswitchd.pid 
vlog/set dbg
2015-11-16T09:58:15Z|1|unixctl|WARN|failed to connect to 
/opt/stack/logs/ovs-vswitchd.pid
ovs-appctl: cannot connect to "/opt/stack/logs/ovs-vswitchd.pid" (Connection 
refused)
xianghui@xianghui:~$ sudo ovs-appctl -t /var/run/openvswitch/db.sock vlog/set 
dbg
unknown methodovs-appctl: /var/run/openvswitch/db.sock: server returned an error



--
Best Regards.
Hui.

OpenStack Engineer



Re: [openstack-dev] [networking-ovs-dpdk]

2015-11-09 Thread Mooney, Sean K
Hi
Can you provide some more information regarding your deployment?

Can you check which kernel you are using.

uname -a

If you are using a 3.19 kernel, changes to some locking code in the kernel
broke synchronization in dpdk 2.0 and require dpdk 2.1 to be used instead.
In general it is not advisable to use a 3.19 kernel with dpdk as it can lead to
non-deterministic behavior.

When devstack hangs can you connect with a second ssh session and run 
sudo service ovs-dpdk status
and 
ps aux | grep ovs


When the deployment hangs at sudo ovs-vsctl br-set-external-id br-ex bridge-id 
br-ex
It usually means that the ovs-vswitchd process has exited.

This can happen for a number of reasons.
The vswitchd process may exit if it failed to allocate memory (due to memory
fragmentation or lack of free hugepages).
If the ovs-vswitchd.log is not available, can you check that the hugepage mount
point was created in
/mnt/huge and that it is mounted?
Run 
ls -al /mnt/huge 
and 
mount

then checkout how many hugepages are mounted

cat /proc/meminfo | grep huge


The vswitchd process may also exit if it failed to initialize dpdk interfaces.
This can happen if no interface is compatible with the igb-uio or vfio-pci
drivers.
(Note: in the vfio-pci case all interfaces in the same iommu group must be
bound to the vfio-pci driver, and the iommu must be enabled on the kernel
command line with VT-d enabled in the BIOS.)
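For example, a minimal vfio-pci binding sketch (the iommu settings are the
standard kernel options, and the PCI address is the one from this thread):

# the kernel command line must contain: intel_iommu=on iommu=pt
# (with VT-d enabled in the BIOS)
sudo modprobe vfio-pci
sudo /opt/stack/DPDK-v2.0.0/tools/dpdk_nic_bind.py -b vfio-pci 0000:07:00.0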

Can you check which interfaces are bound to the dpdk driver by running the
following command:

/opt/stack/DPDK-v2.0.0/tools/dpdk_nic_bind.py --status


Finally, can you confirm that ovs-dpdk compiled successfully by either checking
the xstack.log or
checking for the BUILD_COMPLETE file in /opt/stack/ovs.

Regards
sean




-Original Message-
From: Samta Rangare [mailto:samtarang...@gmail.com] 
Sent: Monday, November 9, 2015 2:31 PM
To: Czesnowicz, Przemyslaw
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-ovs-dpdk]

Thanks for replying Przemyslaw; there is no ovs-vswitchd.log in
/opt/stack/logs/. That is all it contains (ovsdb-server.pid, screen).

When I cancel stack.sh (Ctrl-C) and try to rerun $ sudo ovs-vsctl
br-set-external-id br-ex bridge-id br-ex, it didn't hang. That means the
vSwitch was running, isn't it?

But rerunning stack.sh after unstack hangs again.

Thanks,
Samta

On Mon, Nov 9, 2015 at 7:50 PM, Czesnowicz, Przemyslaw 
 wrote:
> Hi Samta,
>
> This usually means that the vSwitch is not running/has crashed.
> Can you check in /opt/stack/logs/ovs-vswitchd.log ? There should be an error 
> msg there.
>
> Regards
> Przemek
>
>> -Original Message-
>> From: Samta Rangare [mailto:samtarang...@gmail.com]
>> Sent: Monday, November 9, 2015 1:51 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: [openstack-dev] [networking-ovs-dpdk]
>>
>> Hello Everyone,
>>
>> I am installing devstack with networking-ovs-dpdk. The local.conf 
>> exactly looks like the one is available in /opt/stack/networking-ovs- 
>> dpdk/doc/source/_downloads/local.conf.single_node.
>> So I believe all the necessary configuration will be taken care.
>>
>> However I am stuck at place where devstack is trying to set 
>> external-id ($ sudo ovs-vsctl br-set-external-id br-ex bridge-id 
>> br-ex). As soon as it hits at this place it's just hangs forever. I 
>> tried commenting this line from
>> lib/neutron_plugin/ml2 (I know this is wrong) and then all services 
>> came up except ovs-dpdk agent and ovs agent.
>>
>> BTW I am deploying it in ubuntu 14.04. Any pointer will be really helpful.
>>
>> Thanks,
>> Samta
>>



Re: [openstack-dev] [Dpdk-ovs] [networking-ovs-dpdk]networking-ovs-dpdk configuration on devstack

2015-11-09 Thread Mooney, Sean K
Hi, this mailing list is for OVDK, the Intel fork of ovs with the dpdk dpif
datapath.

The networking-ovs-dpdk repo supports ovs from openvswitch.org with the dpdk
netdev datapath.

Issues and support requests should be directed to the openstack dev mailing
list (openstack-dev@lists.openstack.org) with the following in the subject:
"[networking-ovs-dpdk]"

That said, I occasionally check this list, though very rarely.


While I'm here, I'll try to answer your questions.


1.   Do we need to update any other files along with copying local.conf
to the devstack directory?

Answer: the only lines that should need to be changed are 
https://github.com/openstack/networking-ovs-dpdk/blob/master/doc/source/_downloads/local.conf.single_node#L47-L49

2.   In OVS_BRIDGE_MAPPINGS="default:br-eth0", do I need to change the
interface to some other interface?

Answer: The interface name chosen should be a dedicated interface, as it will
be unbound from the kernel networking stack and bound to the userspace dpdk
driver.
As a result, if you use your management interface you will lose connectivity
to the node when the interface is unbound from the kernel.
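A quick sanity check before picking the interface (a sketch using standard
iproute2):

# list interfaces and their addresses; do not hand your management NIC to dpdk
ip -o -4 addr show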


3.   I would like to build DPDK with "x86_64-native-linuxapp-gcc",
where should I update this?

Answer: you can update this by setting the RTE_TARGET variable to
"x86_64-native-linuxapp-gcc" in the local.conf.
This will override the default value set here:
https://github.com/openstack/networking-ovs-dpdk/blob/master/devstack/settings#L13
It should be noted that the default target "x86_64-ivshmem-linuxapp-gcc"
contains all the functionality of x86_64-native-linuxapp-gcc
plus support for ivshmem. Using "x86_64-native-linuxapp-gcc" is not tested but
should work.
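For example, the override in the local.conf would simply be:

RTE_TARGET=x86_64-native-linuxapp-gcc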


4.   I would like to create the ovs-dpdk vhostuser ports with the
other two eth ports, i.e. eth1 & eth2. Where should I update the config for
this?

Answer: Vhost-user is a northbound (between vm and vswitch) port type. It is
not related to physical interfaces.

If you want to have multiple physical interfaces added to ovs-dpdk
automatically, you can do this by
setting the following parameters in the local.conf:
https://github.com/openstack/networking-ovs-dpdk/blob/master/devstack/settings#L38-L44

see 
https://github.com/openstack/networking-ovs-dpdk/blob/master/doc/source/usage.rst
for documentation and examples of how to use these parameters to customize the 
deployment.

For example:
OVS_BRIDGE_MAPPINGS=default:br-01,default1:br-02
OVS_DPDK_PORT_MAPPINGS=eth1:br-01,eth2:br-01,eth3:br-02

will create 2 physical bridges connected to the br-int.

Eth1 and eth2 will be added to the first bridge, br-01, and eth3 will be added
to the second bridge, br-02.

If you wanted to bond eth1 and eth2 instead you could do so as follows

OVS_BRIDGE_MAPPINGS=default:br-01,default1:br-02
OVS_BOND_MODE=bond0:active-backup
OVS_BOND_PORTS=bond0:eth1,bond0:eth2
OVS_DPDK_PORT_MAPPINGS=bond0:br-01,eth3:br-02
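After stacking, the resulting bridge and bond layout can be verified with the
standard ovs tooling, e.g.:

sudo ovs-vsctl show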

The following review contains some updated Getting Started Guides for Ubuntu
which may help with your deployment:
https://review.openstack.org/#/c/243190/

I have added the openstack dev mailing list.
If you have any follow-up questions feel free to reach out to me there.

regards
sean.





-Original Message-
From: Dpdk-ovs [mailto:dpdk-ovs-boun...@lists.01.org] On Behalf Of Varun Rapelly
Sent: Monday, November 2, 2015 11:14 AM
To: dpdk-...@lists.01.org
Subject: [Dpdk-ovs] networking-ovs-dpdk configuration on devstack

Hi All,



I’m trying to install networking-ovs-dpdk with openstack (devstack) on Ubuntu 
14.04 (3.13 kernel) on a single node.



Used following steps:

1.   echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

2.   apt-get install sudo -y

3.   apt-get install sudo -y || yum install -y sudo

4.   sudo apt-get install git -y || sudo yum install -y git

5.   git clone https://git.openstack.org/openstack-dev/devstack

6.   cd devstack

7.   ./stack.sh



stack@ubuntu:~/devstack$ ifconfig

eth0  Link encap:Ethernet  HWaddr 52:55:00:00:00:01

  inet addr:10.54.218.56  Bcast:10.54.218.255  Mask:255.255.255.0

  inet6 addr: fe80::5055:ff:fe00:1/64 Scope:Link

  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

  RX packets:104 errors:0 dropped:0 overruns:0 frame:0

  TX packets:102 errors:0 dropped:0 overruns:0 carrier:0

  collisions:0 txqueuelen:1000

  RX bytes:10537 (10.5 KB)  TX bytes:14847 (14.8 KB)



I have two other eth1,2 ports on the machine.



I used
https://github.com/openstack/networking-ovs-dpdk/blob/master/doc/source/_downloads/local.conf.single_node
as my local.conf, copied it to the devstack directory, and updated it with the
following details.



I'm facing a problem with the following configuration.



stack@ubuntu:~/devstack$ cat local.conf

#All in one single node config


Re: [openstack-dev] [networking-ovs-dpdk]

2015-11-05 Thread Mooney, Sean K
Hello,
When you set OVS_DPDK_MODE=controller_ovs

you are disabling the install of ovs-dpdk on the controller node and only
installing the mechanism driver.

If you want to install ovs-dpdk on the controller node you should set this 
value as follows

OVS_DPDK_MODE=controller_ovs_dpdk

See 
https://github.com/openstack/networking-ovs-dpdk/blob/master/doc/source/_downloads/local.conf.single_node

ovs with dpdk will be installed in /usr/bin, not /usr/local/bin, as it does a
system-wide install, not a local install.

Installation documentation can be found here
https://github.com/openstack/networking-ovs-dpdk/tree/master/doc/source

The networking-ovs-dpdk repo has recently been moved from stackforge to the
openstack namespace following the
retirement of stackforge.

Some links in the git repo still need to be updated to reflect this change.

Regards
sean
-Original Message-
From: Prathyusha Guduri [mailto:prathyushaconne...@gmail.com] 
Sent: Thursday, November 5, 2015 11:02 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [networking-ovs-dpdk]

Hello all,

Trying to install openstack with ovs-dpdk driver from devstack.

Following is my localrc file

HOST_IP_IFACE=eth0
HOST_IP=10.0.2.15
HOST_NAME=$(hostname)

DATABASE_PASSWORD=open
RABBIT_PASSWORD=open
SERVICE_TOKEN=open
SERVICE_PASSWORD=open
ADMIN_PASSWORD=open
MYSQL_PASSWORD=open
HORIZON_PASSWORD=open


enable_plugin networking-ovs-dpdk
https://github.com/stackforge/networking-ovs-dpdk master 
OVS_DPDK_MODE=controller_ovs

disable_service n-net
disable_service n-cpu
enable_service neutron
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
enable_service n-novnc

DEST=/opt/stack
SCREEN_LOGDIR=$DEST/logs/screen
LOGFILE=${SCREEN_LOGDIR}/xstack.sh.log
LOGDAYS=1

Q_ML2_TENANT_NETWORK_TYPE=vlan
ENABLE_TENANT_VLANS=True
ENABLE_TENANT_TUNNELS=False

#Dual socket platform with 16GB RAM,3072*2048kB hugepages leaves ~4G for the 
system.
OVS_NUM_HUGEPAGES=2048
#Dual socket platform with 64GB RAM,14336*2048kB hugepages leaves ~6G for the 
system.
#OVS_NUM_HUGEPAGES=14336

OVS_DATAPATH_TYPE=netdev
OVS_LOG_DIR=/opt/stack/logs
OVS_BRIDGE_MAPPINGS=public:br-ex

ML2_VLAN_RANGES=public:100:200
MULTI_HOST=1

#[[post-config|$NOVA_CONF]]
#[DEFAULT]
firewall_driver=nova.virt.firewall.NoopFirewallDriver
novncproxy_host=0.0.0.0
novncproxy_port=6080
scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter,NUMATopologyFilter


After running ./stack.sh, which was successful, I could see that in the
ml2.conf.ini file ovsdpdk was added as the mechanism driver. But the agent
running was still openvswitch. I tried running ovsdpdk in the q-agt screen, but
it failed because ovsdpdk was not installed in /usr/local/bin, which I thought
devstack is supposed to do.
I tried running setup.py in the networking-ovs-dpdk folder, but that also did
not install ovs-dpdk in /usr/local/bin.

I am stuck here. Please guide me on how to proceed further. Also, the Readme
in the networking-ovs-dpdk folder says the instructions regarding installation
are available at the link below:
http://git.openstack.org/cgit/stackforge/networking-ovs-dpdk/tree/doc/source/installation.rst

But no repos are found there. Kindly guide me to a doc or something on how to
build ovs-dpdk from devstack.

Thank you,
Prathyusha



Re: [openstack-dev] [networking-ovs-dpdk]

2015-11-05 Thread Mooney, Sean K
Integration with Packstack is not currently supported.
We currently have the first step (a puppet module to install ovs-dpdk) under
review.
At present we are not directly targeting Packstack support, but if anyone wants
to add support
it would be welcomed. At present devstack is the only fully supported
deployment tool.

Support for Ubuntu 14.04 and CentOS 7.1 was recently added.
Automated testing is done by the intel-networking-ci using Fedora 21, but
we have manually tested Ubuntu and CentOS.

I currently have ovs-dpdk deployed on Ubuntu 14.04 on one of my dev systems
using devstack.

Our current getting started guide only describes Fedora 21 deployment, but we
should be adding Ubuntu and CentOS versions soon.

As far as I recall the main changes for Ubuntu are:

-  Instead of setting selinux to permissive, uninstall apparmor.

-  Instead of enabling the virt preview repo, enable the kilo Ubuntu cloud
archive.

Note that as devstack installs from source, the openstack packages from the
kilo cloud archive
are not used; it is enabled only to provide updated libvirt and qemu packages.
As such kilo, liberty and master openstack should be deployable by devstack.

It should also be noted that due to changes in upstream neutron, the stable/x
branch of the networking-ovs-dpdk repo
is only compatible with the stable/x release of OpenStack.

Regards
Sean.




From: Rapelly, Varun [mailto:vrape...@sonusnet.com]
Sent: Thursday, November 5, 2015 12:04 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [networking-ovs-dpdk]

Hi All,

Can we use https://github.com/openstack/networking-ovs-dpdk with Packstack?

I'm trying to configure devstack with ovs-dpdk on Ubuntu, but so far with no
success.

Could anybody tell me whether it is supported on Ubuntu or not, or is it only
tested on Fedora?


Regards,
Varun



Re: [openstack-dev] [Neutron] revert default review branch to master

2015-08-18 Thread Mooney, Sean K
Since Doug Wiegley's change has merged
(https://review.openstack.org/#/c/213843/),
I'll abandon mine and close/update the bug accordingly.

I was a little confused when I rebased and ended up pushing a review to 
feature/qos instead of master.

Regards
sean

-Original Message-
From: Miguel Angel Ajo [mailto:mangel...@redhat.com] 
Sent: Tuesday, August 18, 2015 9:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] revert default review branch to master

Thanks for handling this so quickly Sean!

Kyle Mestery wrote:
 On Mon, Aug 17, 2015 at 5:58 PM, Jeremy Stanley <fu...@yuggoth.org> wrote:

 On 2015-08-17 22:39:56 + (+), Mooney, Sean K wrote:
 [...]
 Assuming this was not intentional I have opened a bug and submitted 
 a patch to revert this change.
 [...]

 Fix https://review.openstack.org/213843 is already winding its way 
 through the sausage factory.


 Yes, this was missed during the merge back of feature/qos, of which I 
 approved the merge commit. Thanks to Doug for jumping on this with 
 213843 here, which is almost finished having its casing added.

 --
 Jeremy Stanley



[openstack-dev] [Neutron] revert default review branch to master

2015-08-17 Thread Mooney, Sean K
Hi I just noticed the default review branch for neutron has been updated to the 
feature/qos branch

Assuming this was not intentional I have opened a bug and submitted a patch to 
revert this change.

https://bugs.launchpad.net/neutron/+bug/1485788
https://review.openstack.org/#/c/213908/

regards
sean


Re: [openstack-dev] [Neutron] service chaining feature development meeting at 10am pacific time June 11

2015-06-11 Thread Mooney, Sean K
Hi, can anyone provide a link to today's irc meeting logs?

Looking at http://eavesdrop.openstack.org/meetings/ I cannot see a
Neutron_Service_Chaining_meeting directory.
Are the logs being stored in
http://eavesdrop.openstack.org/meetings/service_chaining/2015/ ?
If so, the logs for today's meeting seem to be missing.

Regards
sean
From: Cathy Zhang [mailto:cathy.h.zh...@huawei.com]
Sent: Thursday, June 11, 2015 1:45 AM
To: Cathy Zhang; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] service chaining feature development 
meeting at 10am pacific time June 11

Here is the updated agenda for tomorrow's meeting:


1. Update on last meeting's action items

2.   Neutron port chain API for SFC

3.   Unified API and data model for flow classifier that can be used for 
SFC, QoS, Packet forwarding etc.

4. Summary on the SFC Feature project scope: functional module breakdown
and their architecture relationship, and module development ownership sign-up

5. Deep dive into technical questions on etherpad if there is time.

Thanks,
Cathy

From: Cathy Zhang
Sent: Wednesday, June 10, 2015 3:23 PM
To: Cathy Zhang; OpenStack Development Mailing List (not for usage questions)
Subject: RE: [openstack-dev] [Neutron] service chaining feature development 
meeting at 10am pacific time June 11

Add the following item to the agenda:

unified data model for flow classifiers

Thanks,
Cathy

From: Cathy Zhang
Sent: Wednesday, June 10, 2015 3:12 PM
To: OpenStack Development Mailing List (not for usage questions); Cathy Zhang
Subject: [openstack-dev] [Neutron] service chaining feature development meeting 
at 10am pacific time June 11

Hello everyone,

Our next weekly IRC meeting for the OpenStack service chain feature
development is 10am pacific time June 11 (UTC 1700; hope I am doing the
correct time conversion this time). Following is the meeting info:

Weekly on Thursday at 1700 UTC
(http://www.timeanddate.com/worldclock/fixedtime.html?hour=18&min=00&sec=0)
in #openstack-meeting-4

You can also find the meeting info at 
http://eavesdrop.openstack.org/#Neutron_Service_Chaining_meeting

Agenda:

1. Update on last meeting's action items

2. Summary on the SFC Feature project scope: functional module breakdown
and their architecture relationship as well as their relationship to OpenStack
Neutron, and the types of service functions that will be chained in this
feature development

3. Module development ownership sign-up

4. Deep dive into technical questions on etherpad if there is time.


Anyone who would like to contribute to this feature development is welcome to 
join the meeting. Hope the time is good for most people.





Thanks,

Cathy





Re: [openstack-dev] [new][cloudpulse] Announcing a project to HealthCheck OpenStack deployments

2015-05-13 Thread Mooney, Sean K
Will cloudpulse be under the governance of the OpenStack Telemetry program,
or will this be an independent StackForge repository?

I think there would be great value in having cloud monitoring, or monitoring
as a service, in the telemetry program.
Regards
Sean.

-Original Message-
From: Steven Dake (stdake) [mailto:std...@cisco.com] 
Sent: Wednesday, May 13, 2015 3:39 PM
To: Julien Danjou
Cc: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [new][cloudpulse] Announcing a project to 
HealthCheck OpenStack deployments



On 5/12/15, 1:28 PM, Julien Danjou jul...@danjou.info wrote:

On Tue, May 12 2015, Steven Dake (stdake) wrote:

 This is a great idea that would make a solid extension to the software.
 If I read the wiki page correctly, the real goal is for operators and  
tenants to be able to be notified via querying the ReST API so they 
could  write their own email/pager-duty app.

Then leveraging Ceilometer polling and alarming systems could make you 
avoid reinventing a large portion of the wheel.

Julien,

Reading the wiki page, I don't expect there would be a need for an agent.
But who knows, atm, all the software is is a wiki page ;)  If there were a need 
for agents, the project would definitely use the ceilometer agents and extend 
there if needed via the normal development process.

Regards
-steve


--
Julien Danjou
// Free Software hacker
// http://julien.danjou.info




Re: [openstack-dev] [nova] [libvirt] enabling per node filtering of mempage sizes

2014-12-03 Thread Mooney, Sean K
Hi 

Unfortunately a flavor + aggregate is not enough for our use case as it is 
still possible for the tenant to misconfigure a vm.

The edge case not covered by flavor + aggregate that we are trying to prevent 
is as follows.

The operator creates an aggregate containing the  nodes that require all VMs to 
use large pages.
The operator creates flavors with and without memory backing specified.

The tenant selects the aggregate containing nodes that only support hugepages
and a flavor that requires small or any pages,
or
the tenant selects a flavor that requires small or any pages and does not
select an aggregate.

In both cases, because the nodes may have non-hugepage memory available, it is
possible to schedule
a vm that will not use large pages to a node that requires large pages to be
used.

If this happens the behavior is undefined.
The vm may boot and have not network connectivity in the case of vhost-user
The vm may fail to boot  or it may boot in some other error state.

It would be possible, however, to introduce a new filter
(AggregateMemoryBackingFilter).

The AggregateMemoryBackingFilter would work as follows:
it will compare the extra specifications associated with the instance and
enforce the constraints set in the aggregate metadata.

A new MemoryBacking attribute will be added to the aggregate metadata.
The MemoryBacking attribute can be set to 1 or more of the following:
small,large,4,2048,1048576
Syntax is SizeA,SizeB e.g. 2048,1048576

If small is set then the host will only be passed if the vm requests small or
4k pages.
If large is set then the host will only be passed if the vm requests 2MB or
1GB pages.
If the MemoryBacking element is not set for an aggregate, the
AggregateMemoryBackingFilter will pass all hosts.
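As a sketch, the operator workflow would then look like the following (the
MemoryBacking key and the AggregateMemoryBackingFilter are proposals from this
mail, not existing nova features; host and aggregate names are illustrative):

# create an aggregate of hosts that require hugepage-backed vms
nova aggregate-create hugepage-hosts
nova aggregate-add-host hugepage-hosts compute-1
nova aggregate-set-metadata hugepage-hosts MemoryBacking=large
# and append AggregateMemoryBackingFilter to scheduler_default_filters in
# nova.conf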

With this new filter, the (flavor or image properties) + aggregate approach
would work for all drivers, not just libvirt.

If this alternative is preferred I can resubmit as a new blueprint and mark the 
old blueprint as superseded.

Regards 
Sean.

-Original Message-
From: Daniel P. Berrange [mailto:berra...@redhat.com] 
Sent: Wednesday, December 3, 2014 10:13 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [libvirt] enabling per node filtering of 
mempage sizes

On Wed, Dec 03, 2014 at 10:03:06AM +0100, Sahid Orentino Ferdjaoui wrote:
 On Tue, Dec 02, 2014 at 07:44:23PM +, Mooney, Sean K wrote:
  Hi all
  
  I have submitted a small blueprint to allow filtering of available 
  memory pages Reported by libvirt.
 
 Can you address this with an aggregate? This would also avoid doing
 something specific in the libvirt driver, which would have to be
 extended to other drivers in the end.

Agreed, I think you can address this by setting up host aggregates and then
setting the desired page size on the flavour.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|



Re: [openstack-dev] [nova] [libvirt] enabling per node filtering of mempage sizes

2014-12-03 Thread Mooney, Sean K
Hi Daniel thanks for your feedback.

After reading up a little more
(http://docs.openstack.org/openstack-ops/content/scaling.html#segragation_methods)
I now understand your original suggestion.

I believe that if the operator associates the aggregate directly to the
flavor, as you suggested, then yes, this will cover my use case too, as the
tenant is selecting availability zones, not host aggregates.

Sorry, I had misconstrued the relationship between availability zones and host
aggregates.
I believed that there was a one-to-one mapping, so that when you selected an
availability zone you were
selecting the host aggregate directly.

Regards
Sean.

 -Original Message-
 From: Daniel P. Berrange [mailto:berra...@redhat.com]
 Sent: Wednesday, December 03, 2014 1:38 PM
 To: Mooney, Sean K
 Cc: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [nova] [libvirt] enabling per node 
 filtering of mempage sizes
 
 On Wed, Dec 03, 2014 at 01:28:36PM +, Mooney, Sean K wrote:
  Hi
 
  Unfortunately a flavor + aggregate is not enough for our use case as 
  it is still
 possible for the tenant to misconfigure a vm.
 
  The edge case not covered by flavor + aggregate that we are trying 
  to prevent
 is as follows.
 
  The operator creates an aggregate containing the  nodes that require 
  all VMs
 to use large pages.
  The operator creates flavors with and without memory backing specified.
 
  The tenant selects the aggregate containing nodes that only supports
 hugepages and a flavor that requires small or any.
  Or
  The tenant selects a flavor that requires small or any and does not 
  select an
 aggregate.
 
 The tenant isn't responsible for selecting the aggregate. The operator 
 should be associating the aggregate directly to the flavour. So the 
 tenant merely has to select the right flavour.
 
 Regards,
 Daniel
 --
 |: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
 |: http://libvirt.org  -o- http://virt-manager.org :|
 |: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
 |: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
 


[openstack-dev] [nova] [libvirt] enabling per node filtering of mempage sizes

2014-12-02 Thread Mooney, Sean K
Hi all

I have submitted a small blueprint to allow filtering of the available memory
pages reported by libvirt.

https://blueprints.launchpad.net/nova/+spec/libvirt-allowed-mempage-sizes

I believe that this change is small enough to not require a spec as per
http://docs.openstack.org/developer/nova/devref/kilo.blueprints.html

If a core (and others are welcome too :)) has time to review my blueprint and
confirm
that a spec is not required I would be grateful, as the SPD is rapidly
approaching.

I have wip code developed which I hope to make available for review once
I add unit tests.

All relevant details (copied below) are included in the whiteboard for the
blueprint.

Regards
Sean

Problem description
===

In the Kilo cycle, the virt driver large pages feature [1] was introduced
to allow guests to request the type of memory backing that they desire
via flavor or image metadata.

In certain configurations, it may be desired or required to filter the
memory pages available to vms booted on a node. At present no mechanism
exists to allow filtering of reported memory pages.

Use Cases
--

On a host that only supports vhost-user or ivshmem,
all VMs are required to use large-page memory.
If a vm is booted with standard pages on these interfaces,
network connectivity will not be available.

In this case it is desirable to filter out small/4k pages when reporting
available memory to the scheduler.

Proposed change
===

This blueprint proposes adding a new config variable (allowed_memory_pagesize)
to the libvirt section of the nova.conf.

cfg.ListOpt('allowed_memory_pagesize',
            default=['any'],
            help='List of allowed memory page sizes. '
                 'Syntax is SizeA,SizeB e.g. small,large. '
                 'Valid sizes are: small,large,any,4,2048,1048576')
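For illustration, a host that must only run hugepage-backed vms would then set
the following in nova.conf (under this proposal; the option does not exist in
upstream nova today):

[libvirt]
allowed_memory_pagesize = large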

The _get_host_capabilities function in nova/nova/virt/libvirt/driver.py
will be modified to filter the mempages reported for each cell based on the
value of CONF.libvirt.allowed_memory_pagesize

If small is set then only 4k pages will be reported.
If large is set 2MB and 1GB will be reported.
If any is set no filtering will be applied.

The default value of any was chosen to ensure that this change has no effect
on existing deployments.

References
==
[1] - https://blueprints.launchpad.net/nova/+spec/virt-driver-large-pages



Re: [openstack-dev] [all] The future of the integrated release

2014-08-22 Thread Mooney, Sean K
I would have to agree with Thomas.
Many organizations have already worked out strategies and have processes in
place to cover contributing to OpenStack which cover all official projects.
Contributing to additional non-OpenStack projects may introduce additional
barriers in large
organizations which require IP plan/legal approval on a per-project basis.

Regards
sean 
-Original Message-
From: Duncan Thomas [mailto:duncan.tho...@gmail.com] 
Sent: Friday, August 22, 2014 4:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] The future of the integrated release

On 21 August 2014 19:39, gordon chung g...@live.ca wrote:
 from the pov of a project that seems to be brought up constantly and 
 maybe it's my naivety, i don't really understand the fascination with 
 branding and the stigma people have placed on 
 non-'openstack'/stackforge projects. it can't be a legal thing because 
 i've gone through that potential mess. also, it's just as easy to contribute 
 to 'non-openstack' projects as 'openstack'
 projects (even easier if we're honest).

It may be easier for you, but it certainly isn't inside big companies; e.g. HP
has pretty broad approvals for contributing to (official) openstack projects,
whereas individual approval may be needed to contribute to non-openstack
projects.



Re: [openstack-dev] [nova][Spec Freeze Exception]Support dpdkvhost in ovs vif bindings

2014-07-23 Thread Mooney, Sean K
Hi,
The third iteration of the specs is now available for review at the links
below:

https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost

https://blueprints.launchpad.net/neutron/+spec/ml2-use-dpdkvhost

Thanks for the feedback given so far.
Hopefully the current iteration addresses the issues raised.

Regards
Sean.


From: Czesnowicz, Przemyslaw
Sent: Friday, July 18, 2014 1:03 PM
To: openstack-dev@lists.openstack.org
Cc: Mooney, Sean K; Hoban, Adrian
Subject: [openstack-dev][nova][Spec Freeze Exception]Support dpdkvhost in ovs 
vif bindings

Hi Nova Cores,

I would like to ask for spec approval deadline exception for:
https://review.openstack.org/#/c/95805/2

This feature allows using DPDK-enabled Open vSwitch with OpenStack.
This is an important feature for NFV workloads that require high-performance
network I/O.

If the spec is approved, implementation should be straight forward and should 
not disrupt any other work happening in Nova.


Thanks,
Przemek




Re: [openstack-dev] [Neutron] [Spec freeze exception] ml2-use-dpdkvhost

2014-07-23 Thread Mooney, Sean K
Hi Kyle,

Thanks for your provisional support.
I would agree that unless the nova spec is also granted an exception, both
specs should be moved to Kilo.

I have now uploaded the most recent version of the specs.
They are available to review here:
https://blueprints.launchpad.net/nova/+spec/libvirt-ovs-use-usvhost
https://blueprints.launchpad.net/neutron/+spec/ml2-use-dpdkvhost

regards
sean


-Original Message-
From: Kyle Mestery [mailto:mest...@mestery.com] 
Sent: Tuesday, July 22, 2014 2:47 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] [Spec freeze exception] ml2-use-dpdkvhost

On Mon, Jul 21, 2014 at 10:04 AM, Mooney, Sean K sean.k.moo...@intel.com 
wrote:
 Hi

 I would like to propose
 https://review.openstack.org/#/c/107797/1/specs/juno/ml2-use-dpdkvhost
 .rst
 for a spec freeze exception.



 https://blueprints.launchpad.net/neutron/+spec/ml2-use-dpdkvhost



 This blueprint adds support for the Intel(R) DPDK Userspace vHost

 port binding to the Open Vswitch and Open Daylight ML2 Mechanism Drivers.

In general, I'd be ok with approving an exception for this BP.
However, please see below.



 This blueprint enables nova changes tracked by the following spec:

 https://review.openstack.org/#/c/95805/1/specs/juno/libvirt-ovs-use-us
 vhost.rst

This BP appears to also require an exception from the Nova team. I think they
both require exceptions for this work to have a shot at landing in Juno. Given
this, I'm actually leaning toward moving this to Kilo. But if you can get a
Nova freeze exception, I'd consider the same for the Neutron BP.

Thanks,
Kyle



 regards

 sean



[openstack-dev] [Neutron] [Spec freeze exception] ml2-use-dpdkvhost

2014-07-21 Thread Mooney, Sean K
Hi
I would like to propose  
https://review.openstack.org/#/c/107797/1/specs/juno/ml2-use-dpdkvhost.rst for 
a spec freeze exception.

https://blueprints.launchpad.net/neutron/+spec/ml2-use-dpdkvhost

This blueprint adds support for the Intel(R) DPDK Userspace vHost
port binding to the Open Vswitch and Open Daylight ML2 Mechanism Drivers.

This blueprint enables nova changes tracked by the following spec:
https://review.openstack.org/#/c/95805/1/specs/juno/libvirt-ovs-use-usvhost.rst

regards
sean

