Re: [openstack-dev] [nova] Should we delete the (unexposed) os-pci API?

2017-03-17 Thread Ed Leafe
On Mar 17, 2017, at 3:23 PM, Sean Dague  wrote:

>> So I move that we delete the (dead) code. Are there good reasons not to?
> 
> Yes... with fire.
> 
> Realistically this was about the pinnacle of the extensions on
> extensions API changes, which is why we didn't even let it into v2 in
> the first place.

Fire does not seem strong enough.


-- Ed Leafe









Re: [openstack-dev] [tripleo] introducing python-tripleo-helper

2017-03-17 Thread Emilien Macchi
Gonéri, any update on this one?

On Mon, Mar 6, 2017 at 2:28 PM, Emilien Macchi  wrote:
> On Fri, Feb 24, 2017 at 3:02 PM, Gonéri Le Bouder  wrote:
>> Hi all,
>>
>> TL/DR:
>>
>> - use Python to drive TripleO deployment
>> - RPM based
>> - used to be a bit specific, closer to upstream now (RDO)
>> - unit-test
>> - maybe a good candidate to join the TripleO umbrella
>>
>> Following a discussion with Emilien, I would like to introduce
>> python-tripleo-helper.
>>
>> python-tripleo-helper is a Python library that wraps all the
>> operations required to get a working TripleO. The initial goal was to
>> have a solution to automate and validate the deployment of TripleO in
>> our lab environment.
>>
>> Since the full deployment flow is based on a modern programming
>> language, it's also possible to write more complex operations.
>>
>> For instance, this is a test that I did some months ago:
>> Once the Overcloud was running, we started a "chaos monkey" thread that
>> was randomly disconnecting controllers. We were running tempest in the
>> main thread to collect results. Python makes all these interactions
>> easy.
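
As a rough sketch of that pattern (the controller names, the SSH-based reboot
and the tempest invocation are placeholders, not the actual
python-tripleo-helper API):

    import random
    import subprocess
    import threading
    import time

    def chaos_monkey(controllers, stop):
        # Randomly reboot one controller every few minutes until asked to stop.
        while not stop.is_set():
            victim = random.choice(controllers)
            subprocess.call(["ssh", victim, "sudo", "reboot"])
            time.sleep(300)

    if __name__ == "__main__":
        stop = threading.Event()
        chaos = threading.Thread(target=chaos_monkey,
                                 args=(["controller-0", "controller-1"], stop))
        chaos.start()
        # Main thread: run tempest and collect results while the chaos
        # thread keeps disconnecting controllers.
        subprocess.call(["tempest", "run", "--smoke"])
        stop.set()
        chaos.join()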
>>
>> At this point, we support libvirt and OpenStack as the target
>> environments. We use a couple of hacks to be able to use a regular
>> OpenStack cloud (OSP7). Bare metal is not supported yet, even though it's
>> definitely a low-hanging fruit.
>>
>> python-tripleo-helper relies on RPM packages but we are open to
>> contributions to support other packaging solutions.
>>
>> Until a few weeks ago python-tripleo-helper was not really generic. This
>> is the reason why it's still maintained in the redhat-openstack
>> namespace. Nicolas Hicher pushed a couple of patches to be able to use
>> it with RDO and CentOS, I guess we can now consider a move to the
>> TripleO umbrella.
>
> Have you investigated what could (and/or couldn't, if applicable) reside
> in tripleoclient / tripleo-common instead of having a new tool?
>
> If we find out that we need this new tool in TripleO, let's move it
> upstream. Otherwise, I'm in favor of consolidating tools and re-using
> our existing workflow & client tools.
>
> Thanks,
>
>> --
>> Gonéri Le Bouder
>>
>>
>
>
>
> --
> Emilien Macchi



-- 
Emilien Macchi



Re: [openstack-dev] [QA][gate][all] dsvm gate stability and scenario tests

2017-03-17 Thread Clark Boylan
On Fri, Mar 17, 2017, at 08:23 AM, Jordan Pittier wrote:
> On Fri, Mar 17, 2017 at 3:11 PM, Sean Dague  wrote:
> 
> > On 03/17/2017 09:24 AM, Jordan Pittier wrote:
> > >
> > >
> > > On Fri, Mar 17, 2017 at 1:58 PM, Sean Dague wrote:
> > >
> > > On 03/17/2017 08:27 AM, Jordan Pittier wrote:
> > > > The patch that reduced the number of Tempest Scenarios we run in
> > every
> > > > job and also reduced the test run concurrency [0] was merged 13
> > days ago.
> > > > Since, the situation (i.e the high number of false negative job
> > results)
> > > > has not improved significantly. We need to keep looking
> > collectively at
> > > > this.
> > >
> > > While the situation hasn't completely cleared out -
> > > http://tinyurl.com/mdmdxlk - since we've merged this we've not seen
> > that
> > > job go over 25% failure rate in the gate, which it was regularly
> > > crossing in the prior 2 week period. That does feel like progress. In
> > > spot checking, we are also rarely failing in scenario tests now, but
> > > the fails tend to end up inside heavy API tests running in parallel.
> > >
> > >
> > > > There seems to be an agreement that we are hitting some memory
> > limit.
> > > > Several of our most frequent failures are memory related [1]. So we
> > > > should either reduce our memory usage or ask for bigger VMs, with
> > more
> > > > than 8GB of RAM.
> > > >
> > > There were/are several attempts to reduce our memory usage, by
> > reducing
> > > > the Mysql memory consumption ([2] but quickly reverted [3]),
> > reducing
> > > > the number of Apache workers ([4], [5]), more apache2 tuning [6].
> > If you
> > > > have any crazy idea to help in this regard, please help. This is
> > high
> > > > priority for the whole openstack project, because it's plaguing
> > many
> > > > projects.
> > >
> > > Interesting, I hadn't seen the revert. It is also curious that it was
> > > largely limited to the neutron-api test job. It's also notable that
> > the
> > > sort buffers seem to have been set to the minimum allowed limit of
> > mysql
> > > -
> > > https://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.html#sysvar_innodb_sort_buffer_size
> > > - and is over an order of magnitude decrease from the existing
> > default.
> > >
> > > I wonder about redoing the change with everything except it and
> > seeing
> > > how that impacts the neutron-api job.
> > >
> > > Yes, that would be great because mysql is by far our biggest memory
> > > consumer so we should target this first.
> >
> > While it is the single biggest process, weighing in at 500 MB, the
> > python services are really our biggest memory consumers. They are
> > collectively far outweighing either mysql or rabbit, and are the reason
> > that even with 64MB guests we're running out of memory. So we want to
> > keep that in perspective.
> >
> Absolutely. I have https://review.openstack.org/#/c/446986/ in that vein.
> And if someone wants to start the work of not running the several Swift
> *auditor*, *updater*, *reaper* and *replicator* services when the Swift
> replication factor is set to 1, that's also a good memory saving.

I've currently got https://review.openstack.org/#/c/447119/ up to enable
kernel samepage merging if devstack is configured to use libvirt. I've
got details in the change comments, but I think it may save about 150MB
of peak memory use.
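
For reference, a quick way to see how much KSM is actually merging on a host
is to read the standard sysfs counters. A rough sketch, assuming the usual
/sys/kernel/mm/ksm interface and the system page size:

    import os

    KSM = "/sys/kernel/mm/ksm"
    PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")  # typically 4096 bytes

    def read_ksm(name):
        with open(os.path.join(KSM, name)) as f:
            return int(f.read().strip())

    if __name__ == "__main__":
        if read_ksm("run") != 1:
            print("KSM is not running")
        else:
            # pages_sharing counts pages that have been deduplicated into a
            # single shared page, so it approximates the pages saved.
            saved_mib = read_ksm("pages_sharing") * PAGE_SIZE / 1048576.0
            print("KSM currently saving about %.1f MiB" % saved_mib)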

I've also got https://review.openstack.org/#/c/446741/ to tune back
apache's worker and connection counts, with details on the savings also
on the change. This is a much smaller saving, but it's a simple change so
probably worthwhile.

Feedback very welcome.

With that said I agree we need individuals familiar with specific
services focusing on trimming those back too. The python services are
the biggest memory users here. We are only going to be able to disable
so many services before we need to modify the services we actually need
running.

Clark



Re: [openstack-dev] [lbaas][neutron] Is possible the lbaas commands are missing completely from openstack cli ?

2017-03-17 Thread Michael Johnson
Yes, as previously announced, we deferred development of OpenStack Client
(OSC) support until Pike.
Work has started on the OSC plugin for Octavia and we expect it to be
available in Pike.

The neutron CLI is deprecated, which means it will go away in the future, but
it is still available for use and remains available on the stable branches for
previous releases.

Michael


-Original Message-
From: Saverio Proto [mailto:saverio.pr...@switch.ch] 
Sent: Friday, March 17, 2017 6:21 AM
To: OpenStack Development Mailing List (not for usage questions)

Subject: [openstack-dev] [lbaas][neutron] Is possible the lbaas commands are
missing completely from openstack cli ?

Client version: openstack 3.9.0

I can't find any lbaas commands. I have to use the 'neutron' client.

For every command I get:
neutron CLI is deprecated and will be removed in the future. Use openstack
CLI instead.

Is LBaaS even going to be implemented in the unified openstack client ?

thank you

Saverio

--
SWITCH
Saverio Proto, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland phone +41 44 268 15 15,
direct +41 44 268 1573 saverio.pr...@switch.ch, http://www.switch.ch

http://www.switch.ch/stories



Re: [openstack-dev] [nova] Should we delete the (unexposed) os-pci API?

2017-03-17 Thread Jay Pipes

On 03/17/2017 04:23 PM, Sean Dague wrote:

On 03/17/2017 04:19 PM, Matt Riedemann wrote:

I was working on writing a spec for a blueprint [1] that would have
touched on the os-pci API [2] and got to documenting how it's not
even documented [3] when Alex pointed out that the API is not even
enabled [4][5].

It turns out that the os-pci API was added in the Nova V3 API and pulled
back out, and [5] was a tracking bug to add it back in with a
microversion, and that never happened.

Given the ugliness described in [3], and that I think our views on
exposing this type of information have changed [6] since it was
originally added, I'm proposing that we just delete the API code.

The API code itself was added back in Icehouse [7].

I tend to think if someone cared about needing this information in the
REST API, they would have asked for it by now. As it stands, it's just
technical debt and even if we did expose it, there are existing issues
in the API, like the fact that the os-hypervisors extension just takes
the compute_nodes.pci_stats dict and dumps it to json out of the REST
API with no control over the keys in the response. That means if we ever
change the fields in the PciDevicePool object, we implicitly introduce a
backward incompatible change in the REST API.

So I move that we delete the (dead) code. Are there good reasons not to?

[1]
https://blueprints.launchpad.net/nova/+spec/service-hyper-pci-uuid-in-api
[2]
https://github.com/openstack/nova/blob/15.0.0/nova/api/openstack/compute/pci.py

[3] https://bugs.launchpad.net/nova/+bug/1673869
[4] https://github.com/openstack/nova/blob/15.0.0/setup.cfg#L132
[5] https://bugs.launchpad.net/nova/+bug/1426241
[6]
https://docs.openstack.org/developer/nova/policies.html?highlight=metrics#metrics-gathering

[7] https://review.openstack.org/#/c/51135/



Yes... with fire.

Realistically this was about the pinnacle of the extensions on
extensions API changes, which is why we didn't even let it into v2 in
the first place.


++

-jay



Re: [openstack-dev] [QA][blazar][ceilometer][congress][intel-nfv-ci-tests][ironic][manila][networking-bgpvpn][networking-fortinet][networking-sfc][neutron][neutron-fwaas][neutron-lbaas][nova-lxd][octa

2017-03-17 Thread Andrea Frittoli
On Fri, Mar 17, 2017 at 5:43 PM Sarabia, Solio 
wrote:

> Hi.
>
> We’ve completed integrating manager.py into openstack/ironic.
>
> https://review.openstack.org/#/c/439252/ (include local copy)
>
> https://review.openstack.org/#/c/446844/ (prune local copy)
>

Thank you, much appreciated!


>
>
> -Solio
>
>
>
> *From: *Andrea Frittoli 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Tuesday, March 7, 2017 at 3:28 PM
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Subject: *Re: [openstack-dev]
> [QA][blazar][ceilometer][congress][intel-nfv-ci-tests][ironic][manila][networking-bgpvpn][networking-fortinet][networking-sfc][neutron][neutron-fwaas][neutron-lbaas][nova-lxd][octavia][sahara][tap-as-a-service][horizon][vmware-nsx][...
>
>
>
> Hi,
>
>
>
> an update on this.
>
>
>
> It's been about 10 days since the original message, and the current status is:
>
> - 3 patches merged, 1 approved (recheck)
>
> - 5 patches submitted, pending approval
>
> - 2 patches with a -1 (need more work)
>
> - 7 patches submitted by me today (draft) - review needed
>
>
>
> Thank you for your work on this!
>
>
>
> I would recommend pruning the imported module as much as possible as well.
>
> It would make it easier for the QA team to identify which interfaces on
> the Tempest side should be migrated to stable.
>
>
>
> andrea
>
>
>
> On Wed, Mar 1, 2017 at 1:25 PM Andrea Frittoli 
> wrote:
>
> On Wed, Mar 1, 2017 at 2:21 AM Takashi Yamamoto 
> wrote:
>
> hi,
>
> On Mon, Feb 27, 2017 at 8:34 PM, Andrea Frittoli
>  wrote:
> > Hello folks,
> >
> > TL;DR: if today you import manager.py from tempest.scenario please
> maintain
> > a copy of [0] in tree until further notice.
> >
> > Full message:
> > --
> >
> > One of the priorities for the QA team in the Pike cycle is to refactor
> > scenario tests to a sane code base [1].
> >
> > As they are now, changes to scenario tests are difficult to develop and
> > review, and failures in those tests are hard to debug, which is in many
> > directions far away from where we need to be.
> >
> > The issue we face is that, even though tempest.scenario.manager is not
> > advertised as a stable interface in Tempest, many projects use it today
> for
> > convenience in writing their own tests. We don't know about dependencies
> > outside of the OpenStack ecosystem, but we want to try to make this
> refactor
> > a smooth experience for our users in OpenStack, and avoid painful gate
> > breakages as much as possible.
> >
> > The process we're proposing is as follows:
> > - hold a copy of [0] in tree - in most cases you won't even have to
> change
> > your imports as a lot of projects use tempest/scenario in their code
> base.
> > You may decide to include the bare minimum you need from that module
> instead
> > of all of it. It's a bit more work to make the patch, but less unused
> code
> > lying around afterwards.
>
> i submitted patches for a few repos.
>
> https://review.openstack.org/#/q/status:open++branch:master+topic:tempest-manager
> i'd suggest to use the same gerrit topic for relevant patches.
>
> Thank you for looking into this!
>
> Having a common gerrit topic is a nice idea: "tempest-manager"
>
>
>
> I'm also tracking patches in this etherpad:
> https://etherpad.openstack.org/p/tempest-manager-plugins
>
>
>
> andrea
>
>
>
> > - the QA team will refactor scenario tests, and make more interfaces
> stable
> > (test.py, credential providers). We won't advertise every single change
> in
> > this process, only when we start and once we're done.
> > - you may decide to discard your local copy of manager.py and consume
> > Tempest stable interfaces directly. We will help with any question you
> may
> > have on the process and on Tempest interfaces.
> >
> > Repositories affected by the refactor are (based on [2]):
> >
> >
> blazar,ceilometer,congress,intel-nfv-ci-tests,ironic,manila,networking-bgpvpn,networking-fortinet,networking-sfc,neutron-fwaas,neutron-lbaas,nova-lxd,octavia,sahara-tests,tap-as-a-service,tempest-horizon,vmware-nsx,watcher
> >
> > If we don't hear from a team at all in the next two weeks, we will assume
> > that the corresponding Tempest plugin / bunch of tests is not in use
> > anymore, and ignore it. If you use tempest.scenario.manager.py today and
> > your repo is not on the list, please let us know!
> >
> > I'm happy to propose an initial patch for any team that may require it -
> > just ping me on IRC (andreaf).
> > I won't have the bandwidth myself to babysit each patch through review
> and
> > gate though.
> >
> > Thank you for your cooperation and patience!
> >
> > Andrea
> >
> > [0]
> >
> http://git.openstack.org/cgit/openstack/tempest/tree/tempest/scenario/manager.py
> > [1] 

Re: [openstack-dev] [nova] Should we delete the (unexposed) os-pci API?

2017-03-17 Thread Sean Dague
On 03/17/2017 04:19 PM, Matt Riedemann wrote:
> I was working on writing a spec for a blueprint [1] that would have
> touched on the os-pci API [2] and got to documenting how it's not
> even documented [3] when Alex pointed out that the API is not even
> enabled [4][5].
> 
> It turns out that the os-pci API was added in the Nova V3 API and pulled
> back out, and [5] was a tracking bug to add it back in with a
> microversion, and that never happened.
> 
> Given the ugliness described in [3], and that I think our views on
> exposing this type of information have changed [6] since it was
> originally added, I'm proposing that we just delete the API code.
> 
> The API code itself was added back in Icehouse [7].
> 
> I tend to think if someone cared about needing this information in the
> REST API, they would have asked for it by now. As it stands, it's just
> technical debt and even if we did expose it, there are existing issues
> in the API, like the fact that the os-hypervisors extension just takes
> the compute_nodes.pci_stats dict and dumps it to json out of the REST
> API with no control over the keys in the response. That means if we ever
> change the fields in the PciDevicePool object, we implicitly introduce a
> backward incompatible change in the REST API.
> 
> So I move that we delete the (dead) code. Are there good reasons not to?
> 
> [1]
> https://blueprints.launchpad.net/nova/+spec/service-hyper-pci-uuid-in-api
> [2]
> https://github.com/openstack/nova/blob/15.0.0/nova/api/openstack/compute/pci.py
> 
> [3] https://bugs.launchpad.net/nova/+bug/1673869
> [4] https://github.com/openstack/nova/blob/15.0.0/setup.cfg#L132
> [5] https://bugs.launchpad.net/nova/+bug/1426241
> [6]
> https://docs.openstack.org/developer/nova/policies.html?highlight=metrics#metrics-gathering
> 
> [7] https://review.openstack.org/#/c/51135/
> 

Yes... with fire.

Realistically this was about the pinnacle of the extensions on
extensions API changes, which is why we didn't even let it into v2 in
the first place.

-Sean

-- 
Sean Dague
http://dague.net



Re: [openstack-dev] [dib] diskimage-builder v2 RC1 release; request for test

2017-03-17 Thread Andre Florath
Hello!

Thanks for the report. Can you please file this as a bug?

There is a very high probability that I introduced a change that
leads to the failure [1] - even if this is fixed now there is a
high probability that it will fail again when [2] is merged.

The reason is that there are no test cases, because there is no
nodepool CI job running on PPC. (Or am I missing something here?)

We are only a very few people at diskimage-builder with limited
resources and have had to concentrate on the 'main line' (i.e. what
we can test ourselves). A discussion about what to support
or test was already started some time ago [3].

It looks like you are from IBM: would it be possible to provide
PPC hardware for testing and the manpower to integrate
this into the CI?
This would really help us find such problems during the development
phase and would put me in a position to be able to fix your
problem.

Kind regards

Andre

[1] https://review.openstack.org/#/c/375261/
[2] https://review.openstack.org/#/c/444586/
[3] https://review.openstack.org/#/c/418204/




[openstack-dev] [nova] Should we delete the (unexposed) os-pci API?

2017-03-17 Thread Matt Riedemann
I was working on writing a spec for a blueprint [1] that would have 
touched on the os-pci API [2] and got to documenting how it's not
even documented [3] when Alex pointed out that the API is not even 
enabled [4][5].


It turns out that the os-pci API was added in the Nova V3 API and pulled 
back out, and [5] was a tracking bug to add it back in with a 
microversion, and that never happened.


Given the ugliness described in [3], and that I think our views on 
exposing this type of information have changed [6] since it was 
originally added, I'm proposing that we just delete the API code.


The API code itself was added back in Icehouse [7].

I tend to think if someone cared about needing this information in the 
REST API, they would have asked for it by now. As it stands, it's just 
technical debt and even if we did expose it, there are existing issues 
in the API, like the fact that the os-hypervisors extension just takes 
the compute_nodes.pci_stats dict and dumps it to json out of the REST 
API with no control over the keys in the response. That means if we ever 
change the fields in the PciDevicePool object, we implicitly introduce a 
backward incompatible change in the REST API.
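
To make that coupling concrete, here is a simplified, hypothetical
illustration (not the actual nova handler, and the key names are only
examples) of dumping the object's fields versus exposing an explicit
allowlist:

    # Hypothetical, simplified illustration -- not the actual nova code.
    # Assume each pool is a plain dict of PCI stats fields.

    # Dumping the object's fields straight into the response: the REST API
    # "schema" is whatever keys the object happens to have today.
    def pci_stats_loose(pci_pools):
        return [dict(pool) for pool in pci_pools]

    # What a properly microversioned API would need instead: an explicit
    # allowlist, so changing the object cannot silently change the response.
    EXPOSED_KEYS = ("vendor_id", "product_id", "numa_node", "count")

    def pci_stats_strict(pci_pools):
        return [{key: pool.get(key) for key in EXPOSED_KEYS}
                for pool in pci_pools]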


So I move that we delete the (dead) code. Are there good reasons not to?

[1] 
https://blueprints.launchpad.net/nova/+spec/service-hyper-pci-uuid-in-api
[2] 
https://github.com/openstack/nova/blob/15.0.0/nova/api/openstack/compute/pci.py

[3] https://bugs.launchpad.net/nova/+bug/1673869
[4] https://github.com/openstack/nova/blob/15.0.0/setup.cfg#L132
[5] https://bugs.launchpad.net/nova/+bug/1426241
[6] 
https://docs.openstack.org/developer/nova/policies.html?highlight=metrics#metrics-gathering

[7] https://review.openstack.org/#/c/51135/

--

Thanks,

Matt



Re: [openstack-dev] [all][ironic] Kubernetes-based long running processes

2017-03-17 Thread Fox, Kevin M
I've got a PS in progress for kolla-kubernetes to do keystone fernet token
rolling too. Having a cluster-aware, cron-job-like function is really nice.

From: Taryma, Joanna [joanna.tar...@intel.com]
Sent: Friday, March 17, 2017 11:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][ironic] Kubernetes-based long running 
processes

Thank you for this explanation, Clint.
Kubernetes is getting more and more popular and it would be great if we could
also take advantage of it. There are already projects in OpenStack whose
mission aligns with task scheduling, like Mistral, that could possibly support
Kubernetes as a backend, and this solution could be adopted by other projects.
I'd rather think about enriching an existing common project with k8s support
than starting from scratch.
I think it's a good idea to gather cross-project use cases and expectations to
come up with a solution that will be adoptable by all the projects that desire
to use it, while still being generic.

WRT the Swift use case – I don't see what was listed there as excluded from
Kubernetes usage, as K8s also supports one-time (run-to-completion) jobs [0].

Joanna

[0] https://kubernetes.io/docs/concepts/jobs/run-to-completion-finite-workloads/

On 3/16/17, 11:15 AM, "Clint Byrum"  wrote:

Excerpts from Dean Troyer's message of 2017-03-16 12:19:36 -0500:
> On Wed, Mar 15, 2017 at 5:28 PM, Taryma, Joanna  
wrote:
> > I’m reaching out to you to ask if you’re aware of any other use cases 
that
> > could leverage such solution. If there’s a need for it in other 
project, it
> > may be a good idea to implement this in some sort of a common place.
>
> Before implementing something new it would be a good exercise to have
> a look at the other existing ways to run VMs and containers already in
> the OpenStack ecosystem.  Service VMs are a thing, and projects like
> Octavia are built around running inside the existing infrastructure.
> There are a bunch of deployment projects that are also designed
> specifically to run services with minimal base requirements.

The console access bit Joanna mentioned is special in that it needs to be
able to reach things like IPMI controllers. So that's not going to really
be able to run on a service VM easily. It's totally doable (I think we
could have achieved it with VTEP switches and OVN when I was playing
with that), but I can understand why a container solution running on
the same host as the conductor might be more desirable than service VMs.



[openstack-dev] Third Bi-Annual Community Contributor Awards

2017-03-17 Thread Kendall Nelson
Hello All!

As we approach the Boston Summit and Forum, we also approach another round
of Community Contributor Awards (CCAs)! Nominations run from now through
April 23rd. So please nominate those you look to for guidance, those that
ask the challenging questions, those that don't get enough recognition for
their hard work or anyone else you think deserves a medal! You can nominate
more than one person as well :)

Winners will be announced publicly at the ceremony, but also notified
individually a week or so prior to the Summit.

Here is the nomination form:
https://openstackfoundation.formstack.com/forms/cca_nominations_boston

See you all in Boston!

-Kendall Nelson (diablo_rojo)


Re: [openstack-dev] [dib] diskimage-builder v2 RC1 release; request for test

2017-03-17 Thread Mikhail Medvedev
On Thu, Feb 9, 2017 at 12:22 AM, Ian Wienand  wrote:
> Hi
>
> diskimage-builder has been working on a feature/v2 branch
> incorporating some largely internal changes to the way it finds and
> calls elements, enhancements to partitioning and removal of some
> long-deprecated elements.  We have just tagged 2.0.0rc1 and are
> requesting testing by interested parties.
>

I am too late, but this is a good place to mention that 2.0.0 introduced
a regression that ends up breaking the image build on the ppc64 platform
somewhere within the bootloader element.

Error trace:

2017-03-17 15:54:37,317 INFO nodepool.image.build.devstack-xenial: +
[[ ppc64el =~ ppc ]]
2017-03-17 15:54:37,317 INFO nodepool.image.build.devstack-xenial: +
/usr/sbin/grub-install --modules=part_msdos --force /dev/loop0
--no-nvram
2017-03-17 15:54:37,369 INFO nodepool.image.build.devstack-xenial:
Installing for powerpc-ieee1275 platform.
2017-03-17 15:54:45,912 INFO nodepool.image.build.devstack-xenial:
/usr/sbin/grub-install: warning: unknown device type loop0p1
2017-03-17 15:54:45,913 INFO nodepool.image.build.devstack-xenial: .
2017-03-17 15:54:45,987 INFO nodepool.image.build.devstack-xenial:
/usr/sbin/grub-install: error: the chosen partition is not a PReP
partition.

On 1.2.8, the cmdline was ` /usr/sbin/grub-install
--modules=part_msdos --force /dev/loop0p1 --no-nvram`. I do not know
yet why grub-install thinks it should still use loop0p1 on 2.1.0.

I have not dug too deeply into it and ended up downgrading
diskimage-builder to 1.2.8. A simple downgrade would not work, because
1.2.8 would then fail with the dib-run-parts binary missing, so I also had
to copy dib-run-parts from 2.1.0. Maybe installing from source and
pinning 1.2.8 would have been a better strategy.

---
Mikhail Medvedev (mmedvede)
IBM



Re: [openstack-dev] [all][ironic] Kubernetes-based long running processes

2017-03-17 Thread Taryma, Joanna
Thank you for this explanation, Clint.
Kubernetes is getting more and more popular and it would be great if we could
also take advantage of it. There are already projects in OpenStack whose
mission aligns with task scheduling, like Mistral, that could possibly support
Kubernetes as a backend, and this solution could be adopted by other projects.
I'd rather think about enriching an existing common project with k8s support
than starting from scratch.
I think it's a good idea to gather cross-project use cases and expectations to
come up with a solution that will be adoptable by all the projects that desire
to use it, while still being generic.

WRT the Swift use case – I don't see what was listed there as excluded from
Kubernetes usage, as K8s also supports one-time (run-to-completion) jobs [0].

Joanna

[0] https://kubernetes.io/docs/concepts/jobs/run-to-completion-finite-workloads/
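
As a rough sketch of such a run-to-completion Job submitted through the
kubernetes Python client (the image, names and namespace below are only
placeholders):

    from kubernetes import client, config

    def run_once(name, image, command, namespace="default"):
        # Assumes a reachable cluster; inside a pod use load_incluster_config().
        config.load_kube_config()
        container = client.V1Container(name=name, image=image, command=command)
        template = client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"job": name}),
            spec=client.V1PodSpec(restart_policy="Never",
                                  containers=[container]))
        job = client.V1Job(
            metadata=client.V1ObjectMeta(name=name),
            spec=client.V1JobSpec(template=template))
        # The Job runs the pod to completion exactly once (restart_policy=Never).
        return client.BatchV1Api().create_namespaced_job(namespace=namespace,
                                                         body=job)

    if __name__ == "__main__":
        run_once("one-shot-task", "busybox", ["sh", "-c", "echo done"])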

On 3/16/17, 11:15 AM, "Clint Byrum"  wrote:

Excerpts from Dean Troyer's message of 2017-03-16 12:19:36 -0500:
> On Wed, Mar 15, 2017 at 5:28 PM, Taryma, Joanna  
wrote:
> > I’m reaching out to you to ask if you’re aware of any other use cases 
that
> > could leverage such solution. If there’s a need for it in other 
project, it
> > may be a good idea to implement this in some sort of a common place.
> 
> Before implementing something new it would be a good exercise to have
> a look at the other existing ways to run VMs and containers already in
> the OpenStack ecosystem.  Service VMs are a thing, and projects like
> Octavia are built around running inside the existing infrastructure.
> There are a bunch of deployment projects that are also designed
> specifically to run services with minimal base requirements.

The console access bit Joanna mentioned is special in that it needs to be
able to reach things like IPMI controllers. So that's not going to really
be able to run on a service VM easily. It's totally doable (I think we
could have achieved it with VTEP switches and OVN when I was playing
with that), but I can understand why a container solution running on
the same host as the conductor might be more desirable than service VMs.



Re: [openstack-dev] [QA][blazar][ceilometer][congress][intel-nfv-ci-tests][ironic][manila][networking-bgpvpn][networking-fortinet][networking-sfc][neutron][neutron-fwaas][neutron-lbaas][nova-lxd][octa

2017-03-17 Thread Sarabia, Solio
Hi.
We’ve completed integrating manager.py into openstack/ironic.
https://review.openstack.org/#/c/439252/ (include local copy)
https://review.openstack.org/#/c/446844/ (prune local copy)

-Solio

From: Andrea Frittoli 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, March 7, 2017 at 3:28 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] 
[QA][blazar][ceilometer][congress][intel-nfv-ci-tests][ironic][manila][networking-bgpvpn][networking-fortinet][networking-sfc][neutron][neutron-fwaas][neutron-lbaas][nova-lxd][octavia][sahara][tap-as-a-service][horizon][vmware-nsx][...

Hi,

an update on this.

It's been about 10 days since the original message, and the current status is:
- 3 patches merged, 1 approved (recheck)
- 5 patches submitted, pending approval
- 2 patches with a -1 (need more work)
- 7 patches submitted by me today (draft) - review needed

Thank you for your work on this!

I would recommend pruning the imported module as much as possible as well.
It would make it easier for the QA team to identify which interfaces on the
Tempest side should be migrated to stable.

andrea

On Wed, Mar 1, 2017 at 1:25 PM Andrea Frittoli 
> wrote:
On Wed, Mar 1, 2017 at 2:21 AM Takashi Yamamoto 
> wrote:
hi,

On Mon, Feb 27, 2017 at 8:34 PM, Andrea Frittoli
> wrote:
> Hello folks,
>
> TL;DR: if today you import manager.py from tempest.scenario please maintain
> a copy of [0] in tree until further notice.
>
> Full message:
> --
>
> One of the priorities for the QA team in the Pike cycle is to refactor
> scenario tests to a sane code base [1].
>
> As they are now, changes to scenario tests are difficult to develop and
> review, and failures in those tests are hard to debug, which is in many
> directions far away from where we need to be.
>
> The issue we face is that, even though tempest.scenario.manager is not
> advertised as a stable interface in Tempest, many projects use it today for
> convenience in writing their own tests. We don't know about dependencies
> outside of the OpenStack ecosystem, but we want to try to make this refactor
> a smooth experience for our users in OpenStack, and avoid painful gate
> breakages as much as possible.
>
> The process we're proposing is as follows:
> - hold a copy of [0] in tree - in most cases you won't even have to change
> your imports as a lot of projects use tempest/scenario in their code base.
> You may decide to include the bare minimum you need from that module instead
> of all of it. It's a bit more work to make the patch, but less unused code
> lying around afterwards.

i submitted patches for a few repos.
https://review.openstack.org/#/q/status:open++branch:master+topic:tempest-manager
i'd suggest to use the same gerrit topic for relevant patches.
Thank you for looking into this!
Having a common gerrit topic is a nice idea: "tempest-manager"

I'm also tracking patches in this etherpad: 
https://etherpad.openstack.org/p/tempest-manager-plugins

andrea

> - the QA team will refactor scenario tests, and make more interfaces stable
> (test.py, credential providers). We won't advertise every single change in
> this process, only when we start and once we're done.
> - you may decide to discard your local copy of manager.py and consume
> Tempest stable interfaces directly. We will help with any question you may
> have on the process and on Tempest interfaces.
>
> Repositories affected by the refactor are (based on [2]):
>
> blazar,ceilometer,congress,intel-nfv-ci-tests,ironic,manila,networking-bgpvpn,networking-fortinet,networking-sfc,neutron-fwaas,neutron-lbaas,nova-lxd,octavia,sahara-tests,tap-as-a-service,tempest-horizon,vmware-nsx,watcher
>
> If we don't hear from a team at all in the next two weeks, we will assume
> that the corresponding Tempest plugin / bunch of tests is not in use
> anymore, and ignore it. If you use 
> tempest.scenario.manager.py today and
> your repo is not on the list, please let us know!
>
> I'm happy to propose an initial patch for any team that may require it -
> just ping me on IRC (andreaf).
> I won't have the bandwidth myself to babysit each patch through review and
> gate though.
>
> Thank you for your cooperation and patience!
>
> Andrea
>
> [0]
> http://git.openstack.org/cgit/openstack/tempest/tree/tempest/scenario/manager.py
> [1] https://etherpad.openstack.org/p/pike-qa-priorities
> [2]
> https://github.com/andreafrittoli/tempest_stable_interfaces/blob/master/data/get_deps.sh
>

Re: [openstack-dev] [kolla][kolla-kubernetes][helm] global value inconsistent cause install result unpredicatable

2017-03-17 Thread Fox, Kevin M
Commented on the bug.

From: Hung Chung Chih [lyanc...@gmail.com]
Sent: Thursday, March 16, 2017 11:35 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [kolla][kolla-kubernetes][helm] global value 
inconsistent cause install result unpredicatable

Hi all,
This is my first time sending to the mailing list; if I have made a mistake
please tell me, thank you.

When I try to helm install the keystone service, I found it can install
successfully but sometimes will fail with an unexpected nil value for field
spec.externalIPs[0]. Then I reported a bug on Launchpad [1].

Then I found this issue occurred because there is one variable which exists
in different micro services. For example, the
global.kolla.keystone.all.port_external variable in keystone-internal-svc and
keystone-public-svc: one is true and the other is false. Before helm installs
a package, it gathers all the charts' values inside the package. However, a
later chart's value will overwrite a previous chart's value. This causes the
final value to become unpredictable. This issue does not happen in the normal
Helm use case, because usually a subchart has its own values and Helm's global
values don't support nesting yet [2].

In my view, this is a critical issue, because the install result can be
inconsistent between different installs. I suggest we could add one validation
test, which will check whether any variable conflicts between different micro
services in all_values.yaml.

[1] https://bugs.launchpad.net/kolla-kubernetes/+bug/1670979
[2] https://github.com/kubernetes/helm/blob/master/docs/charts.md
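
As a rough sketch of the kind of validation I mean (the flat top-level keys
and the file list are only an illustration; the real values are nested per
micro service):

    import sys
    import yaml

    def find_conflicts(paths):
        # Map each top-level key to the first (value, file) that defined it,
        # and report any later file that sets the same key to another value.
        seen = {}
        conflicts = []
        for path in paths:
            with open(path) as f:
                values = yaml.safe_load(f) or {}
            for key, value in values.items():
                if key in seen and seen[key][0] != value:
                    conflicts.append((key, seen[key], (value, path)))
                else:
                    seen.setdefault(key, (value, path))
        return conflicts

    if __name__ == "__main__":
        for key, first, second in find_conflicts(sys.argv[1:]):
            print("conflict on %s: %r (%s) vs %r (%s)"
                  % (key, first[0], first[1], second[0], second[1]))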


Re: [openstack-dev] [lbaas][neutron] Is possible the lbaas commands are missing completely from openstack cli ?

2017-03-17 Thread Dean Troyer
On Fri, Mar 17, 2017 at 8:20 AM, Saverio Proto  wrote:
> Is LBaaS even going to be implemented in the unified openstack client ?

No, it will be implemented as a separate OSC plugin.  This is true for
all of Neutron's "advanced services" projects.

dt

-- 

Dean Troyer
dtro...@gmail.com



[openstack-dev] [nova] notification update week 11

2017-03-17 Thread Balazs Gibizer

Hi,

Here is the status update / focus setting mail about notification work
for week 11.

Bugs

[Medium] https://bugs.launchpad.net/nova/+bug/1657428 The instance
notifications are sent with inconsistent timestamp format. Fix is ready
for the cores to review https://review.openstack.org/#/c/421981

[Undecided] https://bugs.launchpad.net/nova/+bug/1673375 ValueError: 
Circular reference detected" in send_notification. Fix proposed 
https://review.openstack.org/#/c/446948/ .
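
For reference, the generic failure mode here is the stdlib json encoder
refusing to serialize an object graph that references itself; a minimal
reproduction (illustration only, not the nova payload code):

    import json

    payload = {"name": "instance"}
    payload["self"] = payload  # the payload ends up referencing itself

    try:
        json.dumps(payload)
    except ValueError as exc:
        print(exc)  # "Circular reference detected"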


Versioned notification transformation
-
No new goal setting; let's try to focus on finishing the goals of the
last two weeks.


Patches that just need a second core:
* https://review.openstack.org/#/c/382959 Transform instance.reboot
notification
* https://review.openstack.org/#/c/411791/ Transform
instance.reboot.error notification
* https://review.openstack.org/#/c/401992/ Transform
instance.volume_attach notification
* https://review.openstack.org/#/c/408676/ Transform
instance.volume_detach notification


Searchlight integration
---
changing Searchlight to use versioned notifications
~~~
https://blueprints.launchpad.net/searchlight/+spec/nova-versioned-notifications
bp is a hard dependency for the integration work.
The Searchlight team promised to provide a list of notifications that need
to be transformed so that they can switch to the versioned interface. We
will prioritize the transformation work on the nova side accordingly.



bp additional-notification-fields-for-searchlight
~
Patches needs review:
https://review.openstack.org/#/q/label:Code-Review%253E%253D1+status:open+branch:master+topic:bp/additional-notification-fields-for-searchlight

The BlockDeviceMapping addition to the InstancePayload has been
discussed on the ML and at the weekly meeting. Implementation is ongoing.


Other items
---
Short circuit notification payload generation
~
A new oslo.messaging release with the needed addition is out, and
global requirements have been bumped.
Work needs to be continued on the nova patch 
https://review.openstack.org/#/c/428260/



Weekly meeting
--
The notification subteam holds its weekly meeting on Tuesday at 17:00 UTC
on openstack-meeting-4, so the next meeting will be held on the 21st of
March
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170321T17

Cheers,
gibi




Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-17 Thread Zane Bitter

On 13/03/17 17:20, Davanum Srinivas wrote:

Zane,

Sorry for the top post. Can you please submit a TC resolution? I can
help with it as well. Let's test the waters.


OK, here is a start:

https://review.openstack.org/#/c/447031/

- ZB


Thanks,
Dims

On Mon, Mar 13, 2017 at 5:10 PM, Zane Bitter  wrote:

On 12/03/17 11:30, Clint Byrum wrote:


Excerpts from Fox, Kevin M's message of 2017-03-11 21:31:40 +:


No, they are treated as second-class citizens. Take Trove again as an
example. The underlying OpenStack infrastructure does not provide a good
security solution for Trove's use case, as it's more than just IaaS. So they
have spent years trying to work around it in one way or another, each with
horrible trade-offs.

For example they could fix an issue by:
1. Run the service VM in the user's tenant where it belongs. Though,
currently the user has permission to reboot the VM, log in through the
console and swipe any secrets that are on the VM, and this makes it much
harder for the cloud admin to administer.
2. Run the VM in a "trove" tenant. This fixes the security issue but
breaks the quota model of OpenStack. Users with special host aggregate
access/flavors can't work with this model.

For our site, we can't use Trove at all at the moment, even though we
want to. Because option 2 doesn't work for us, and option 1 currently has a
glaring security flaw in it.

One of the ways I saw Trove try to fix it was to get a feature into Nova
called "Service VMs": VMs owned by the user but not fully controllable by
them, managed instead by some other OpenStack service on their behalf. This,
IMO, is the right way to solve it. There are a lot of advanced services that
need this functionality. But it seems to have been rejected, as "users don't
need that"... which is true only if you only consider the IaaS use case.



You're right. This type of rejection is not O-K IMO, because this is
consumers of Nova with a real use case, asking for real features that
simply cannot be implemented anywhere except inside Nova. Perhaps the
climate has changed, and this effort can be resurrected.



I don't believe the climate has changed; there's no reason for it to have. Nova
is still constrained by the size of the core reviewers team, and they've
been unwilling or unable to take steps (like splitting Nova up into smaller
chunks) that would increase capacity, so they have to reject as many feature
requests as possible. Given that the wider community has never had a debate
about what we're trying to build or for whom, it's perfectly easy to drift
along thinking that the current priorities are adequate without ever being
challenged.

Until we have a TC resolution - with the community consensus to back it up -
that says "the reason for having APIs to your infrastructure is so that
*applications* can use them and projects must make not being an obstacle to
this their highest purpose", or "we're building an open source AWS, not a
free VMWare", or https://www.youtube.com/watch?v=Vhh_GeBPOhs ... until it's
not possible to say with complete earnestness "OpenStack has VMs, so you can
run any application on it" then the climate will never change, and we'll
just keep hearing "I don't need this, so neither should you".


The problems of these other OpenStack services are being rejected as
second class problems, not primary ones.

I'm sure other sites are avoiding other OpenStack advanced services for
similar reasons. It's not just that operators don't want to deploy it, or
that users are not asking for it.

Let me try and explain Zane's post in a slightly different way... maybe
that would help...

So, say you had an operating system. It had the ability to run arbitrary
programs if the user started an executable via the keyboard/mouse, but it had
no ability for an executable to start another executable. How useful would
that OS be? There would be no shell scripts, no non-monolithic applications.
It would be sort of functional, but it would be hamstrung.

OpenStack is like that today. Like the DOS operating system. Programs are
expected to be pretty self-contained and not really talk back to the
operating system they're running on; there is no way to discover other
programs running on the same system, nor really for a script running on the
operating system to start other programs, chaining them together in a way
that's more useful than the sum of their parts. The current view is fine if
all you need is just a container to run a traditional OS in. It's not if you
are trying to build an application that spans things.

There have been several attempts at fixing this, in Heat, in Murano, in
the App Catalog, but the plumbing they rely on isn't really supportive of it,
as they believe the really important use case is just launching a VM with an
OS in it, and then the job's done.

For the Applications Catalog to be successful, it needs the underlying
cloud to have enough functionality among a common set of cloud provided
services to allow applications 

Re: [openstack-dev] [QA][gate][all] dsvm gate stability and scenario tests

2017-03-17 Thread Morales, Victor
Well, my crazy idea was the addition [10] of an extra argument (--mem-trace)
to the pbr binary creation. The idea is to be able to use it from any
openstack binary and print those methods that make a difference in memory
consumption [11].

Regards/Saludos
Victor Morales
irc: electrocucaracha

[10] https://review.openstack.org/#/c/433947/
[11] http://paste.openstack.org/show/599087/
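
For comparison, a minimal sketch of the same per-call-site memory diff idea
using only the stdlib tracemalloc module (illustration only, not the patch
in [10]):

    import tracemalloc

    def snapshot_diff(func, *args, **kwargs):
        """Run func and print the call sites whose memory use grew the most."""
        tracemalloc.start()
        before = tracemalloc.take_snapshot()
        result = func(*args, **kwargs)
        after = tracemalloc.take_snapshot()
        for stat in after.compare_to(before, "lineno")[:10]:
            print(stat)
        tracemalloc.stop()
        return result

    if __name__ == "__main__":
        snapshot_diff(lambda: [bytearray(1024) for _ in range(10000)])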



From:  Jordan Pittier 
Reply-To:  "OpenStack Development Mailing List (not for usage questions)" 

Date:  Friday, March 17, 2017 at 7:27 AM
To:  "OpenStack Development Mailing List (not for usage questions)" 

Subject:  Re: [openstack-dev] [QA][gate][all] dsvm gate stability and   
scenario tests


The patch that reduced the number of Tempest Scenarios we run in every job and 
also reduced the test run concurrency [0] was merged 13 days ago. Since, the 
situation (i.e the high number of false negative job results) has not improved 
significantly. We
 need to keep looking collectively at this.


There seems to be an agreement that we are hitting some memory limit. Several 
of our most frequent failures are memory related [1]. So we should either 
reduce our memory usage or ask for bigger VMs, with more than 8GB of RAM.


There were/are several attempts to reduce our memory usage, by reducing the Mysql 
memory consumption ([2] but quickly reverted [3]), reducing the number of 
Apache workers ([4], [5]), more apache2 tuning [6]. If you have any crazy idea 
to help in this regard,
 please help. This is high priority for the whole openstack project, because 
it's plaguing many projects.


We have some tools to investigate memory consumption, like some regular "dstat" 
output [7], a home-made memory tracker [8] and stackviz [9].


Best,

Jordan

[0]: https://review.openstack.org/#/c/439698/
[1]: http://status.openstack.org/elastic-recheck/gate.html
[2] : https://review.openstack.org/#/c/438668/
[3]: https://review.openstack.org/#/c/446196/
[4]: https://review.openstack.org/#/c/426264/
[5]: https://review.openstack.org/#/c/445910/
[6]: https://review.openstack.org/#/c/446741/
[7]: 
http://logs.openstack.org/96/446196/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/b5c362f/logs/dstat-csv_log.txt.gz
 

[8]: 
http://logs.openstack.org/96/446196/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/b5c362f/logs/screen-peakmem_tracker.txt.gz
 

[9] : 
http://logs.openstack.org/41/446741/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/fa4d2e6/logs/stackviz/#/stdin/timeline
 








On Sat, Mar 4, 2017 at 4:19 PM, Andrea Frittoli 
 wrote:

Quick update on this, the change is now merged, so we now have a smaller number 
of scenario tests running serially after the api test run.
We'll monitor gate stability for the next week or so and decide whether further 
actions are required.
Please keep categorizing failures via elastic recheck as usual.
thank you
andrea

On Fri, 3 Mar 2017, 8:02 a.m. Ghanshyam Mann,  wrote:


Thanks. +1. I added my list in the ethercalc.

Left-out scenario tests can be run in the periodic and experimental jobs. IMO in 
both (periodic and experimental), to monitor their status periodically as well 
as on a particular patch if we need to. 

-gmann






On Fri, Mar 3, 2017 at 4:28 PM, Andrea Frittoli
 wrote:




Hello folks,

we discussed a lot since the PTG about issues with gate stability; we need a 
stable and reliable gate to ensure smooth progress in Pike.

One of the issues that stands out is that most of the time during test runs 
our test VMs are under heavy load.
This can be the common cause behind several failures we've seen in the gate, so 
we agreed during the QA meeting yesterday [0] that we're going to try reducing 
the load and see whether that improves stability.


Next steps are:

- select a subset of scenario tests to be executed in the gate, based on [1], 
and run them serially only 
- the patch for this is [2] and we will approve this by the end of the day
- we will monitor stability for a week - if needed we may reduce concurrency a 
bit on API tests as well, and identify "heavy" test candidates for removal / 
refactor
- the QA team won't approve any new test (scenario or heavy resource consuming 
api) until gate stability is ensured 

Thanks for your patience and collaboration!

Andrea

---
irc: andreaf

[0] http://eavesdrop.openstack.org/meetings/qa/2017/qa.2017-03-02-17.00.txt
[1] https://ethercalc.openstack.org/nu56u2wrfb2b
[2] 

Re: [openstack-dev] [QA][gate][all] dsvm gate stability and scenario tests

2017-03-17 Thread Jordan Pittier
On Fri, Mar 17, 2017 at 3:11 PM, Sean Dague  wrote:

> On 03/17/2017 09:24 AM, Jordan Pittier wrote:
> >
> >
> > On Fri, Mar 17, 2017 at 1:58 PM, Sean Dague wrote:
> >
> > On 03/17/2017 08:27 AM, Jordan Pittier wrote:
> > > The patch that reduced the number of Tempest Scenarios we run in
> every
> > > job and also reduced the test run concurrency [0] was merged 13
> days ago.
> > > Since, the situation (i.e the high number of false negative job
> results)
> > > has not improved significantly. We need to keep looking
> collectively at
> > > this.
> >
> > While the situation hasn't completely cleared out -
> > http://tinyurl.com/mdmdxlk - since we've merged this we've not seen
> that
> > job go over 25% failure rate in the gate, which it was regularly
> > crossing in the prior 2 week period. That does feel like progress. In
> > spot checking, we are also rarely failing in scenario tests now, but
> > the fails tend to end up inside heavy API tests running in parallel.
> >
> >
> > > There seems to be an agreement that we are hitting some memory
> limit.
> > > Several of our most frequent failures are memory related [1]. So we
> > > should either reduce our memory usage or ask for bigger VMs, with
> more
> > > than 8GB of RAM.
> > >
> > > There were/are several attempts to reduce our memory usage, by
> reducing
> > > the Mysql memory consumption ([2] but quickly reverted [3]),
> reducing
> > > the number of Apache workers ([4], [5]), more apache2 tuning [6].
> If you
> > > have any crazy idea to help in this regard, please help. This is
> high
> > > priority for the whole openstack project, because it's plaguing
> many
> > > projects.
> >
> > Interesting, I hadn't seen the revert. It is also curious that it was
> > largely limited to the neutron-api test job. It's also notable that
> the
> > sort buffers seem to have been set to the minimum allowed limit of
> mysql
> > -
> > https://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.html#sysvar_innodb_sort_buffer_size
> > - and is over an order of magnitude decrease from the existing
> default.
> >
> > I wonder about redoing the change with everything except it and
> seeing
> > how that impacts the neutron-api job.
> >
> > Yes, that would be great because mysql is by far our biggest memory
> > consumer so we should target this first.
>
> While it is the single biggest process, weighing in at 500 MB, the
> python services are really our biggest memory consumers. They are
> collectively far outweighing either mysql or rabbit, and are the reason
> that even with 64MB guests we're running out of memory. So we want to
> keep that in perspective.
>
Absolutely. I have https://review.openstack.org/#/c/446986/ in that vein.
And if someone wants to start the work of not running the several Swift
*auditor*, *updater*, *reaper* and *replicator* services when the Swift
replication factor is set to 1, that's also a good memory saving.

>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>


Re: [openstack-dev] [Openstack-operators] [nova] [neutron] Hooks for instance actions like creation, deletion

2017-03-17 Thread Matt Riedemann

On 3/17/2017 9:13 AM, Matt Riedemann wrote:


There is also dynamic vendordata v2 which was added in Newton:

https://docs.openstack.org/developer/nova/vendordata.html

We got feedback during the Pike PTG from some people, using hooks during
instance create, that the dynamic vendordata serves their needs now.

If vendordata or notifications do not serve your use case, we suggest
you explain your use case in the open so the community can try to see if
it's something that has already been solved or is something worth
upstreaming because it's a common problem shared by multiple deployments.



Here is the nova spec for vendordata v2:

https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/vendordata-reboot.html
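
For anyone who has not looked at it yet, a dynamic vendordata target is just a
small REST service that nova POSTs instance details to, and whose JSON
response is exposed to the guest. A minimal sketch (the request field names
are from my reading of the docs above, and the Flask app is purely
illustrative, not part of nova):

    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/", methods=["POST"])
    def vendordata():
        # nova POSTs a JSON document describing the instance being booted.
        payload = request.get_json(force=True)
        # Whatever JSON we return shows up in the guest via the metadata
        # service / config drive (vendor_data2.json), keyed by provider name.
        return jsonify({
            "project": payload.get("project-id"),
            "hostname": payload.get("hostname"),
            "greeting": "hello from a vendordata v2 endpoint",
        })

    if __name__ == "__main__":
        app.run(port=8888)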

--

Thanks,

Matt



Re: [openstack-dev] [QA][gate][all] dsvm gate stability and scenario tests

2017-03-17 Thread Sean Dague
On 03/17/2017 09:24 AM, Jordan Pittier wrote:
> 
> 
> On Fri, Mar 17, 2017 at 1:58 PM, Sean Dague wrote:
> 
> On 03/17/2017 08:27 AM, Jordan Pittier wrote:
> > The patch that reduced the number of Tempest Scenarios we run in every
> > job and also reduced the test run concurrency [0] was merged 13 days ago.
> > Since, the situation (i.e the high number of false negative job results)
> > has not improved significantly. We need to keep looking collectively at
> > this.
> 
> While the situation hasn't completely cleared out -
> http://tinyurl.com/mdmdxlk - since we've merged this we've not seen that
> job go over 25% failure rate in the gate, which it was regularly
> crossing in the prior 2 week period. That does feel like progress. In
> spot checking, we are also rarely failing in scenario tests now, but
> the fails tend to end up inside heavy API tests running in parallel.
> 
> 
> > There seems to be an agreement that we are hitting some memory limit.
> > Several of our most frequent failures are memory related [1]. So we
> > should either reduce our memory usage or ask for bigger VMs, with more
> > than 8GB of RAM.
> >
> > There have been several attempts to reduce our memory usage, by reducing
> > the Mysql memory consumption ([2] but quickly reverted [3]), reducing
> > the number of Apache workers ([4], [5]), more apache2 tuning [6]. If you
> > have any crazy idea to help in this regard, please help. This is high
> > priority for the whole openstack project, because it's plaguing many
> > projects.
> 
> Interesting, I hadn't seen the revert. It is also curious that it was
> largely limited to the neutron-api test job. It's also notable that the
> sort buffers seem to have been set to the minimum allowed limit of mysql
> -
> 
> https://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.html#sysvar_innodb_sort_buffer_size
> 
> 
> - and is over an order of magnitude decrease from the existing default.
> 
> I wonder about redoing the change with everything except it and seeing
> how that impacts the neutron-api job.
> 
> Yes, that would be great because mysql is by far our biggest memory
> consumer so we should target this first.

While it is the single biggest process, weighing in at 500 MB, the
python services are really our biggest memory consumers. They are
collectively far outweighing either mysql or rabbit, and are the reason
that even with 64MB guests we're running out of memory. So we want to
keep that under perspective.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][gate][all] dsvm gate stability and scenario tests

2017-03-17 Thread Sean Dague
On 03/17/2017 09:39 AM, Andrea Frittoli wrote:
> 
> 
> On Fri, Mar 17, 2017 at 1:03 PM Sean Dague wrote:
> 
> On 03/17/2017 08:27 AM, Jordan Pittier wrote:
> > The patch that reduced the number of Tempest Scenarios we run in every
> > job and also reduced the test run concurrency [0] was merged 13
> days ago.
> > Since, the situation (i.e the high number of false negative job
> results)
> > has not improved significantly. We need to keep looking
> collectively at
> > this.
> 
> While the situation hasn't completely cleared out -
> http://tinyurl.com/mdmdxlk - since we've merged this we've not seen that
> job go over 25% failure rate in the gate, which it was regularly
> crossing in the prior 2 week period. That does feel like progress. 
> 
>  
> I agree the situation improved a bit, but there are still too many failures.
> There is a peak of failures on Mar 12th in the graph, I remember looking
> at it briefly, as it was on a Sunday - and then by Monday it was back to
> normal. It's not clear yet to me what caused / fixed that peak. The mysql
> revert was merged on March 15th, which is too late to explain the change.

Well, also realize that graph doesn't have error bars on it, and is a
12-hour rolling average. So if you end up with only 5 changes going
through in 12 hours, and 1 fails, it's going to hit 20%. It's actually a
totally non-ideal way to handle this data, but it's about as good as you
can get with graphite.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [deployment][tripleo] Next steps for configuration management in TripleO (and other tools?)

2017-03-17 Thread Emilien Macchi
[adding deployment tag because it might be interesting for other tools
since it would engage cross-project work]
If you're not working on TripleO, feel free to comment on how your
team plans to solve the problem (or don't), so we can maybe discuss
and work together.

Hey,

Things are moving in OpenStack deployment tools and we're working on
solving common challenges in a consistent way. We had different topics
and threads about key/value store for configuration management, and
I've been thinking how it could fit with TripleO roadmap:
https://etherpad.openstack.org/p/tripleo-etcd-transition

Feel free to bring any feedback in this thread or even better in the etherpad.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][gate][all] dsvm gate stability and scenario tests

2017-03-17 Thread Andrea Frittoli
On Fri, Mar 17, 2017 at 1:03 PM Sean Dague  wrote:

> On 03/17/2017 08:27 AM, Jordan Pittier wrote:
> > The patch that reduced the number of Tempest Scenarios we run in every
> > job and also reduced the test run concurrency [0] was merged 13 days ago.
> > Since, the situation (i.e the high number of false negative job results)
> > has not improved significantly. We need to keep looking collectively at
> > this.
>
> While the situation hasn't completely cleared out -
> http://tinyurl.com/mdmdxlk - since we've merged this we've not seen that
> job go over 25% failure rate in the gate, which it was regularly
> crossing in the prior 2 week period. That does feel like progress.


I agree the situation improved a bit, but there are still too many failures.
There is a peak of failures on Mar 12th in the graph, I remember looking
at it briefly, as it was on a Sunday - and then by Monday it was back to
normal. It's not clear yet to me what caused / fixed that peak. The mysql
revert was merged on March 15th, which is too late to explain the change.



> In
> spot checking I see we are also rarely failing in scenario tests now, but
> the fails tend to end up inside heavy API tests running in parallel.
>
>
An ssh failure in volume scenario tests is still at the top of the recheck
queue, but looking at logstash I see it's mostly happening in
gate-tempest-dsvm-networking-odl-* jobs. The integration jobs seem to
be behaving for scenario tests.

> There seems to be an agreement that we are hitting some memory limit.
> > Several of our most frequent failures are memory related [1]. So we
> > should either reduce our memory usage or ask for bigger VMs, with more
> > than 8GB of RAM.
> >
> > There have been several attempts to reduce our memory usage, by reducing
> > the Mysql memory consumption ([2] but quickly reverted [3]), reducing
> > the number of Apache workers ([4], [5]), more apache2 tuning [6]. If you
> > have any crazy idea to help in this regard, please help. This is high
> > priority for the whole openstack project, because it's plaguing many
> > projects.
>

I think it's very important to work on both sides: make sure our testing
does
not kill the SUT, but also keep the footprint of the SUT under control.
This may be a good topic of discussion for the forum in Boston.

On the testing side, I started working on using two jobs instead of one:
- one running all API tests, to a degree of parallelism that does not break
the SUT
- one running scenario tests, perhaps on a two-node test environment

That would give us more space in terms of testing, but it would also mean
more test nodes and more test jobs to track.
The scenario test job is defined, one d-g patch is missing to complete it
https://review.openstack.org/#/c/442565/.


> Interesting, I hadn't seen the revert. It is also curious that it was
> largely limited to the neutron-api test job. It's also notable that the
> sort buffers seem to have been set to the minimum allowed limit of mysql
> -
>
> https://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.html#sysvar_innodb_sort_buffer_size
> - and is over an order of magnitude decrease from the existing default.
>
> I wonder about redoing the change with everything except it and seeing
> how that impacts the neutron-api job.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [lbaas][neutron] Is possible the lbaas commands are missing completely from openstack cli ?

2017-03-17 Thread Dariusz Śmigiel
Hey Saverio,

2017-03-17 8:20 GMT-05:00 Saverio Proto :
> Client version: openstack 3.9.0
>
> I can't find any lbaas commands. I have to use the 'neutron' client.

It's highly possible that LBaaS commands are not in OSC.

>
> For every command I get:
> neutron CLI is deprecated and will be removed in the future. Use
> openstack CLI instead.

For now, nothing's changed. The neutron CLI emits this information to
encourage users to use OSC. It still supports existing commands, but
won't add new features.
If the CLI is changed in a backward-incompatible way, a new major
version will be released. Until then, you can use it.

>
> Is LBaaS even going to be implemented in the unified openstack client ?

Yes, Ankur Gupta is working on that [1].
It's an initial version, so it can take some time to achieve feature
parity between the neutron CLI and OSC, but the work is happening.
Please remember that even when the abovementioned change is merged,
it still needs to be released in a new OpenStack Client, so it won't be
immediately available.

[1] https://review.openstack.org/#/c/428414/

BR,
Dariusz

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] [horizon] Keystone-Horizon meeting on 2017-03-23 cancelled

2017-03-17 Thread Rob Cresswell
Hey everyone,

I'm away next week, so the Keystone-Horizon meeting on the 23rd is cancelled

Rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][deployment] Deployment Working Group (DWG)

2017-03-17 Thread Emilien Macchi
On Thu, Mar 16, 2017 at 12:59 PM, Jeremy Stanley  wrote:
> On 2017-03-16 11:42:25 -0500 (-0500), Devdatta Kulkarni wrote:
> [...]
>> Related to the work that would be produced by the group, what is
>> the thought around where would deployment artifacts live? Within
>> each individual project's repository? As part of a single
>> deployment project?
> [...]
>
> The proposal is to identify a higher-level working group with
> participants from the current (and future) multitude of different
> development project teams to focus on ways in which they can
> standardize and cooperate with one another, not to combine them into
> one single new team replacing the current set of independent teams.
> The latter would just be going back to what we had in the days
> before the project structure reform of 2014 (a.k.a. "big tent").

Exactly.
And right now, we're tracking our hot topics here:
https://etherpad.openstack.org/p/deployment-pike

We haven't talked about the artifacts yet; right now we're focusing on
solving the same technical challenges in a consistent way (see the
etherpad).
Don't hesitate to bring your input and use the mailing list for new topics.

Thanks,

> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][gate][all] dsvm gate stability and scenario tests

2017-03-17 Thread Jordan Pittier
On Fri, Mar 17, 2017 at 1:58 PM, Sean Dague  wrote:

> On 03/17/2017 08:27 AM, Jordan Pittier wrote:
> > The patch that reduced the number of Tempest Scenarios we run in every
> > job and also reduced the test run concurrency [0] was merged 13 days ago.
> > Since, the situation (i.e the high number of false negative job results)
> > has not improved significantly. We need to keep looking collectively at
> > this.
>
> While the situation hasn't completely cleared out -
> http://tinyurl.com/mdmdxlk - since we've merged this we've not seen that
> job go over 25% failure rate in the gate, which it was regularly
> crossing in the prior 2 week period. That does feel like progress. In
> spot checking I see we are also rarely failing in scenario tests now, but
> the fails tend to end up inside heavy API tests running in parallel.
>

> > There seems to be an agreement that we are hitting some memory limit.
> > Several of our most frequent failures are memory related [1]. So we
> > should either reduce our memory usage or ask for bigger VMs, with more
> > than 8GB of RAM.
> >
> > There have been several attempts to reduce our memory usage, by reducing
> > the Mysql memory consumption ([2] but quickly reverted [3]), reducing
> > the number of Apache workers ([4], [5]), more apache2 tuning [6]. If you
> > have any crazy idea to help in this regard, please help. This is high
> > priority for the whole openstack project, because it's plaguing many
> > projects.
>
> Interesting, I hadn't seen the revert. It is also curious that it was
> largely limited to the neutron-api test job. It's also notable that the
> sort buffers seem to have been set to the minimum allowed limit of mysql
> -
> https://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.html#sysvar_innodb_sort_buffer_size
> - and is over an order of magnitude decrease from the existing default.
>
> I wonder about redoing the change with everything except it and seeing
> how that impacts the neutron-api job.
>
Yes, that would be great because mysql is by far our biggest memory
consumer so we should target this first.

>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [lbaas][neutron] Is possible the lbaas commands are missing completely from openstack cli ?

2017-03-17 Thread Saverio Proto
Client version: openstack 3.9.0

I can't find any lbaas commands. I have to use the 'neutron' client.

For every command I get:
neutron CLI is deprecated and will be removed in the future. Use
openstack CLI instead.

Is LBaaS even going to be implemented in the unified openstack client ?

thank you

Saverio

-- 
SWITCH
Saverio Proto, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 1573
saverio.pr...@switch.ch, http://www.switch.ch

http://www.switch.ch/stories

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][gate][all] dsvm gate stability and scenario tests

2017-03-17 Thread Sean Dague
On 03/17/2017 08:45 AM, Ihar Hrachyshka wrote:
> I had some patches to collect more stats about mlocks
> here: https://review.openstack.org/#/q/topic:collect-mlock-stats-in-gate
> but they need reviews.

Done, sorry I hadn't seen those earlier.

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][gate][all] dsvm gate stability and scenario tests

2017-03-17 Thread Sean Dague
On 03/17/2017 08:27 AM, Jordan Pittier wrote:
> The patch that reduced the number of Tempest Scenarios we run in every
> job and also reduced the test run concurrency [0] was merged 13 days ago.
> Since, the situation (i.e the high number of false negative job results)
> has not improved significantly. We need to keep looking collectively at
> this.

While the situation hasn't completely cleared out -
http://tinyurl.com/mdmdxlk - since we've merged this we've not seen that
job go over 25% failure rate in the gate, which it was regularly
crossing in the prior 2 week period. That does feel like progress. In
spot checking I see we are also rarely failing in scenario tests now, but
the fails tend to end up inside heavy API tests running in parallel.

> There seems to be an agreement that we are hitting some memory limit.
> Several of our most frequent failures are memory related [1]. So we
> should either reduce our memory usage or ask for bigger VMs, with more
> than 8GB of RAM.
> 
> There have been several attempts to reduce our memory usage, by reducing
> the Mysql memory consumption ([2] but quickly reverted [3]), reducing
> the number of Apache workers ([4], [5]), more apache2 tuning [6]. If you
> have any crazy idea to help in this regard, please help. This is high
> priority for the whole openstack project, because it's plaguing many
> projects.

Interesting, I hadn't seen the revert. It is also curious that it was
largely limited to the neutron-api test job. It's also notable that the
sort buffers seem to have been set to the minimum allowed limit of mysql
-
https://dev.mysql.com/doc/refman/5.6/en/innodb-parameters.html#sysvar_innodb_sort_buffer_size
- and is over an order of magnitude decrease from the existing default.

I wonder about redoing the change with everything except it and seeing
how that impacts the neutron-api job.
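
(As an aside for anyone poking at this on a devstack node: the quickest
sanity check is just to ask mysqld which values it is actually running with.
A rough Python sketch is below; the credentials are placeholders for
whatever your local setup uses, and note that innodb_sort_buffer_size is not
a dynamic variable, so changing it means editing my.cnf and restarting.)

    # Dump the InnoDB buffer-related variables so you can confirm which
    # values the node is really running with. Credentials are placeholders.
    import pymysql

    conn = pymysql.connect(host='127.0.0.1', user='root',
                           password='secretdatabase')
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW VARIABLES LIKE 'innodb%buffer%'")
            for name, value in cur.fetchall():
                print('%s = %s' % (name, value))
    finally:
        conn.close()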

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] placement/resource providers update 15

2017-03-17 Thread Chris Dent


This week in resource providers and placement the main news is we've
made a change in priorities, see "What's Changed" below. If this
update seems more terse or more riddled with errors than usual, I
blame the great scheduler in the sky placing a virus upon me that is
messing with my savoir faire.

# What Matters Most

Traits remain the top of the priority stack. Links below in themes.

# What's Changed

A fair bit.

Currently the CachingScheduler does not make use of the placement
API, only the FilterScheduler does. Thus from the standpoint of a
CachingScheduler-using deployment the placement API isn't providing
much in the way of visible value yet. It was assumed that not many
people are using the CachingScheduler but apparently this is not the
case. We would like to get rid of the CachingScheduler for the sake
of both cellsv2 and placement, but we cannot do this until we get
claims in the scheduler (and against the placement api).

Therefore the priorities are being adjusted so that claims are above
shared resources and nested resource providers, but below traits and
ironic inventory. See this message and the surrounding thread for
more details:

http://lists.openstack.org/pipermail/openstack-dev/2017-March/113839.html

In that same thread we also decided that we will continue to forego
creating a full-blown python-placementclient and instead will make
an openstackclient plugin for CLI operations against the placement
API:

http://lists.openstack.org/pipermail/openstack-dev/2017-March/113932.html
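
To make that a bit more concrete, here is a very rough sketch of what one
such plugin command could look like. The plumbing shown (reusing osc's
keystoneauth session and filtering on the 'placement' service type) is an
assumption of mine, not a description of the actual plugin work.

    # Toy osc-lib command that lists resource providers straight from the
    # placement REST API. Endpoint discovery details are assumptions.
    from osc_lib.command import command

    class ListResourceProviders(command.Lister):
        """List resource providers known to the placement service."""

        def take_action(self, parsed_args):
            # Reuse the authenticated keystoneauth session osc already holds.
            session = self.app.client_manager.session
            resp = session.get(
                '/resource_providers',
                endpoint_filter={'service_type': 'placement'})
            providers = resp.json().get('resource_providers', [])
            columns = ('uuid', 'name', 'generation')
            return columns, [[p.get(c) for c in columns] for p in providers]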

And again, in the same thread, we declared that we should begin the
work to extract placement to its own repo shortly after claims in
the scheduler have stabilized (the hoped-for target is early Queens).
This provides a good platform on which to manage backports in case
we break stuff.

There are two new utility methods which make some placement
framework things a bit simpler:

* https://review.openstack.org/#/c/97/
  Adds a `raise_http_status_code_if_not_version` method which is
  handy when adding functionality in a new microversion.

* webob.dec.wsgify replaced by wsgi_wrapper.PlacementWsgify
  Done in https://review.openstack.org/#/c/395194/ a while back but
  perhaps not socialized enough. This makes it so the
  json_error_formatter keyword doesn't need to be used everywhere a
  webob.exc is raised.
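
Purely as illustration of the kind of check the first helper enables, here
is a self-contained toy version (my own sketch, not the placement code; the
environ key is made up):

    import webob
    import webob.exc

    def require_version(req, min_version):
        # Refuse the request with a 404 when the negotiated microversion,
        # assumed to be stashed on the environ, is below what we need.
        want = tuple(int(x) for x in min_version.split('.'))
        got = req.environ.get('placement.microversion', (1, 0))
        if got < want:
            raise webob.exc.HTTPNotFound(
                'this resource requires microversion %s' % min_version)

    # Example: a request that only negotiated 1.1 asking for a 1.5 feature.
    req = webob.Request.blank('/traits')
    req.environ['placement.microversion'] = (1, 1)
    try:
        require_version(req, '1.5')
    except webob.exc.HTTPNotFound as exc:
        print(exc)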

# Main Themes

## Traits

The work to implement the traits API in placement is happening at


https://review.openstack.org/#/q/status:open+topic:bp/resource-provider-traits

## Ironic/Custom Resource Classes

The work to support custom resource classes provided by the Ironic
virt driver has merged

https://review.openstack.org/#/c/441544/

There are test improvements on the same BP


https://review.openstack.org/#/q/status:open+topic:bp/custom-resource-classes-pike

They are part of a drive to improve the functional testing of the
compute side of the interactions with the placement API to make sure
we fully understand and have test coverage for the flow of events.

jroll has started a spec for "custom resource classes in flavors"
that describes the stuff that will actually make use of this
functionality:

https://review.openstack.org/#/c/446570/

Over in ironic some functional and integration tests have started:

https://review.openstack.org/#/c/443628/

## Claims in the Scheduler

A "Super WIP" spec for claims in the scheduler was started at

 https://review.openstack.org/#/c/437424/

Since we've bumped this up the priority chain we will need to give
these concepts some real attention. The WIP spec has a lot of how
and not enough what or why.

## Shared Resource Providers

https://blueprints.launchpad.net/nova/+spec/shared-resources-pike

Progress on this will continue once traits and claims have moved forward.

## Nested Resource Providers

https://review.openstack.org/#/q/status:open+topic:bp/nested-resource-providers

According to mriedem, the nested-resource provider spec
 should be updated to
reflect the flipboard discussion from the PTG (reported in this
space last week) about multi-parent providers and how they aren't
going to happen. Previous email:

 http://lists.openstack.org/pipermail/openstack-dev/2017-March/113332.html

## Docs

https://review.openstack.org/#/q/topic:cd/placement-api-ref

The start of creating an API ref for the placement API. Not a lot
there yet as I haven't had much of an opportunity to move it along.
There is, however, enough there for additional content to be
started, if people have the opportunity to do so. Check with me to
divvy up the work if you'd like to contribute.

## Performance

We're aware that there are some redundancies in the resource tracker
that we'd like to clean up

  
http://lists.openstack.org/pipermail/openstack-dev/2017-January/110953.html

but it's also the case that we've done no performance testing on the
placement service itself.

We ought to do some testing to 

Re: [openstack-dev] [QA][gate][all] dsvm gate stability and scenario tests

2017-03-17 Thread Ihar Hrachyshka
I had some patches to collect more stats about mlocks here:
https://review.openstack.org/#/q/topic:collect-mlock-stats-in-gate but they
need reviews.

Ihar

On Fri, Mar 17, 2017 at 5:28 AM Jordan Pittier 
wrote:

> The patch that reduced the number of Tempest Scenarios we run in every job
> and also reduced the test run concurrency [0] was merged 13 days ago. Since,
> the situation (i.e the high number of false negative job results) has not
> improved significantly. We need to keep looking collectively at this.
>
> There seems to be an agreement that we are hitting some memory limit.
> Several of our most frequent failures are memory related [1]. So we should
> either reduce our memory usage or ask for bigger VMs, with more than 8GB of
> RAM.
>
> There have been several attempts to reduce our memory usage, by reducing the
> Mysql memory consumption ([2] but quickly reverted [3]), reducing the
> number of Apache workers ([4], [5]), more apache2 tuning [6]. If you have
> any crazy idea to help in this regard, please help. This is high priority
> for the whole openstack project, because it's plaguing many projects.
>
> We have some tools to investigate memory consumption, like some regular
> "dstat" output [7], a home-made memory tracker [8] and stackviz [9].
>
> Best,
> Jordan
>
> [0]: https://review.openstack.org/#/c/439698/
> [1]: http://status.openstack.org/elastic-recheck/gate.html
> [2] : https://review.openstack.org/#/c/438668/
> [3]: https://review.openstack.org/#/c/446196/
> [4]: https://review.openstack.org/#/c/426264/
> [5]: https://review.openstack.org/#/c/445910/
> [6]: https://review.openstack.org/#/c/446741/
> [7]:
> http://logs.openstack.org/96/446196/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/b5c362f/logs/dstat-csv_log.txt.gz
> [8]:
> http://logs.openstack.org/96/446196/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/b5c362f/logs/screen-peakmem_tracker.txt.gz
> [9] :
> http://logs.openstack.org/41/446741/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/fa4d2e6/logs/stackviz/#/stdin/timeline
>
> On Sat, Mar 4, 2017 at 4:19 PM, Andrea Frittoli wrote:
>
> Quick update on this, the change is now merged, so we now have a smaller
> number of scenario tests running serially after the api test run.
>
> We'll monitor gate stability for the next week or so and decide whether
> further actions are required.
>
> Please keep categorizing failures via elastic recheck as usual.
>
> thank you
>
> andrea
>
> On Fri, 3 Mar 2017, 8:02 a.m. Ghanshyam Mann, 
> wrote:
>
> Thanks. +1. i added my list in ethercalc.
>
> Left put scenario tests can be run on periodic and experimental job. IMO
> on both ( periodic and experimental) to monitor their status periodically
> as well as on particular patch if we need to.
>
> -gmann
>
> On Fri, Mar 3, 2017 at 4:28 PM, Andrea Frittoli wrote:
>
> Hello folks,
>
> we discussed a lot since the PTG about issues with gate stability; we need
> a stable and reliable gate to ensure smooth progress in Pike.
>
> One of the issues that stands out is that most of the times during test
> runs our test VMs are under heavy load.
> This can be the common cause behind several failures we've seen in the
> gate, so we agreed during the QA meeting yesterday [0] that we're going to
> try reducing the load and see whether that improves stability.
>
> Next steps are:
> - select a subset of scenario tests to be executed in the gate, based on
> [1], and run them serially only
> - the patch for this is [2] and we will approve this by the end of the day
> - we will monitor stability for a week - if needed we may reduce
> concurrency a bit on API tests as well, and identify "heavy" tests
> candidate for removal / refactor
> - the QA team won't approve any new test (scenario or heavy resource
> consuming api) until gate stability is ensured
>
> Thanks for your patience and collaboration!
>
> Andrea
>
> ---
> irc: andreaf
>
> [0]
> http://eavesdrop.openstack.org/meetings/qa/2017/qa.2017-03-02-17.00.txt
> [1] https://ethercalc.openstack.org/nu56u2wrfb2b
> [2] https://review.openstack.org/#/c/439698/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 

Re: [openstack-dev] [QA][gate][all] dsvm gate stability and scenario tests

2017-03-17 Thread Jordan Pittier
The patch that reduced the number of Tempest Scenarios we run in every job
and also reduced the test run concurrency [0] was merged 13 days ago. Since,
the situation (i.e the high number of false negative job results) has not
improved significantly. We need to keep looking collectively at this.

There seems to be an agreement that we are hitting some memory limit.
Several of our most frequent failures are memory related [1]. So we should
either reduce our memory usage or ask for bigger VMs, with more than 8GB of
RAM.

There have been several attempts to reduce our memory usage, by reducing the
Mysql memory consumption ([2] but quickly reverted [3]), reducing the
number of Apache workers ([4], [5]), more apache2 tuning [6]. If you have
any crazy idea to help in this regard, please help. This is high priority
for the whole openstack project, because it's plaguing many projects.

We have some tools to investigate memory consumption, like some regular
"dstat" output [7], a home-made memory tracker [8] and stackviz [9].

Best,
Jordan

[0]: https://review.openstack.org/#/c/439698/
[1]: http://status.openstack.org/elastic-recheck/gate.html
[2] : https://review.openstack.org/#/c/438668/
[3]: https://review.openstack.org/#/c/446196/
[4]: https://review.openstack.org/#/c/426264/
[5]: https://review.openstack.org/#/c/445910/
[6]: https://review.openstack.org/#/c/446741/
[7]:
http://logs.openstack.org/96/446196/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/b5c362f/logs/dstat-csv_log.txt.gz
[8]:
http://logs.openstack.org/96/446196/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/b5c362f/logs/screen-peakmem_tracker.txt.gz
[9] :
http://logs.openstack.org/41/446741/1/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/fa4d2e6/logs/stackviz/#/stdin/timeline
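
(In case it saves anyone some time: a rough helper for pulling the peak
memory figure out of a dstat CSV like [7] is below. It assumes the memory
column is literally named "used", which depends on the dstat version and
options, so adjust to the actual file.)

    # Quick-and-dirty: report the peak "used" memory value in a dstat CSV.
    import csv
    import sys

    def peak_used_memory(path):
        peak = 0.0
        with open(path) as f:
            rows = list(csv.reader(f))
        # Find the header row that carries the column names.
        header = next(r for r in rows if 'used' in r)
        used_idx = header.index('used')
        for row in rows[rows.index(header) + 1:]:
            try:
                peak = max(peak, float(row[used_idx]))
            except (IndexError, ValueError):
                continue  # skip repeated headers / partial lines
        return peak

    if __name__ == '__main__':
        print('peak used memory: %.0f bytes' % peak_used_memory(sys.argv[1]))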

On Sat, Mar 4, 2017 at 4:19 PM, Andrea Frittoli 
wrote:

> Quick update on this, the change is now merged, so we now have a smaller
> number of scenario tests running serially after the api test run.
>
> We'll monitor gate stability for the next week or so and decide whether
> further actions are required.
>
> Please keep categorizing failures via elastic recheck as usual.
>
> thank you
>
> andrea
>
> On Fri, 3 Mar 2017, 8:02 a.m. Ghanshyam Mann, 
> wrote:
>
>> Thanks. +1. i added my list in ethercalc.
>>
>> Left put scenario tests can be run on periodic and experimental job. IMO
>> on both ( periodic and experimental) to monitor their status periodically
>> as well as on particular patch if we need to.
>>
>> -gmann
>>
>> On Fri, Mar 3, 2017 at 4:28 PM, Andrea Frittoli <
>> andrea.fritt...@gmail.com> wrote:
>>
>> Hello folks,
>>
>> we discussed a lot since the PTG about issues with gate stability; we
>> need a stable and reliable gate to ensure smooth progress in Pike.
>>
>> One of the issues that stands out is that most of the times during test
>> runs our test VMs are under heavy load.
>> This can be the common cause behind several failures we've seen in the
>> gate, so we agreed during the QA meeting yesterday [0] that we're going to
>> try reducing the load and see whether that improves stability.
>>
>> Next steps are:
>> - select a subset of scenario tests to be executed in the gate, based on
>> [1], and run them serially only
>> - the patch for this is [2] and we will approve this by the end of the day
>> - we will monitor stability for a week - if needed we may reduce
>> concurrency a bit on API tests as well, and identify "heavy" tests
>> candidate for removal / refactor
>> - the QA team won't approve any new test (scenario or heavy resource
>> consuming api) until gate stability is ensured
>>
>> Thanks for your patience and collaboration!
>>
>> Andrea
>>
>> ---
>> irc: andreaf
>>
>> [0] http://eavesdrop.openstack.org/meetings/qa/2017/qa.2017-03-02-17.00.txt
>> [1] https://ethercalc.openstack.org/nu56u2wrfb2b
>> [2] https://review.openstack.org/#/c/439698/
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
>> unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
>> unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [keystone][all] Reseller - do we need it?

2017-03-17 Thread Henry Nash
Yes, that was indeed the model originally proposed (some referred to it as 
“nested domains”). Back then we didn’t have project hierarchy support in 
keystone (actually the two requirements emerged together and intertwined - and 
for a while there was a joint spec). Today, there are some interesting 
characteristics in keystone:

1) Project hierarchy support
2) Domains are actually projects under-the-hood, with a special attribute 
(is_domain == true).
3) Hence domains are already part of the hierarchy - they are just restricted 
to being the root of a tree.
4) In fact, if we really want to get technical, in keystone the project 
representing a domain does actually have a parent (the hidden “root of all 
domains” which we don’t expose at the API level)

So from the above, one can see that allowing more than one layer of domains at 
the top of the tree would be (implementation-wise) relatively easy. Although this 
has traditionally been my preferred solution, just because it is alluring and 
seems easy doesn't mean it is necessarily the right solution.

The most common alternative proposed is to use some kind of federation. The 
most likely scenario would be that the relationship between the cloud owner and 
a reseller would be a federated one, while the relationship between a reseller 
and their customers would be a traditional one of each customer having a 
domain. Some of the challenges to this approach would be:

a) How do we continually sync the catalogs? Presumably you would want all the 
endpoints (except keystone) to be the same in each catalog? 
b) What are the problems with having the same endpoint registered in multiple 
catalogs? How would keystone middleware on a common, say, nova endpoint know 
which keystone to validate with?
c) How to stop “admin” from one keystone from being treated as general “admin” 
on, say, a nova endpoint?
d) On-boarding a reseller would be a more manual process (i.e. you need to set 
up federation mappings etc.)

In some respects, you could argue that if I were a reseller, I would like this 
federated model better. I know, for sure, that nobody outside of my VCO can get 
access (i.e. since I have my own keystone, and a token obtained from a different 
reseller’s keystone has no chance of getting in). However, I don’t believe we 
have ever explored how to solve the various issues above.

Henry

> On 17 Mar 2017, at 10:38, Adrian Turjak  wrote:
> 
> This actually sounds a lot like a domain per reseller, with a sub-domain per 
> reseller customer. With the reseller themselves probably also running a 
> sub-domain or two for themselves. Maybe even running their own external 
> federated user system for that specific reseller domain.
> 
> That or something like it could be doable. The reseller would be aware of the 
> resources (likely to bill) and projects (since you would still likely bill 
> per project or at least tag invoice items per project), so the hidden project 
> concept doesn't really fit.
> 
> One thing that I do think is useful, and we've done for our cloud, is letting 
> users see who exactly has access to their projects. For our Horizon we have a 
> custom identity/access control panel that shows clearly who has access on 
> your project and once I add on proper inheritance support will also list 
> users who have inherited access for the project you are currently scoped to. 
> This means a customer knows and can see when an admin has added themselves to 
> their project to help debug something. Plus it even helps them in general 
> manage their own user access better.
> 
> I know we've been looking at the reseller model ourselves but haven't really 
> gotten anywhere with it, partly because the margins didn't seem worth it, but 
> also because of the complexity of shoe-horning it around our existing openstack 
> deployment. Domain per reseller (reseller as domain admin) and sub-domain per 
> reseller customer (as sub-domain admin) I'm interested in!
> 
> 
> 
> On 17 Mar. 2017 10:27 pm, Henry Nash  wrote:
> OK, so I worked on the original spec for this in Keystone, based around real 
> world requirements we (IBM) saw.  For the record, here’s the particular goal 
> we wanted to achieve:
> 
> 1) A cloud owner (i.e. the enterprise owns and maintains the core of the 
> cloud), wants to attract more traffic by using third-party resellers or 
> partners.
> 2) Those resellers will sell “cloud” to their own customers and be 
> responsible for finding & managing such customers. These resellers want to be 
> able to onboard such customers and “hand them the admin keys” to their 
> respective (conceptual) pieces of the cloud. I.e. Reseller X signs up 
> Customer Y. Customer Y gets “keystone admin” for their bit of the (shared) 
> cloud, and then can create and manage their own users without either the 
> Reseller or the Overall cloud owner being involved. In keystone terms, each 
> actual customer gets the equivalent of a domain, so 

Re: [openstack-dev] [neutron][lbaasv2] Migrate LBaaS instance

2017-03-17 Thread Saverio Proto
Hello there,

I am just back from the Ops Midcycle where Heidi Joy Tretheway reported
some data from the user survey.

So if we look at deployments with more than 100 servers NO ONE IS USING
NEWTON yet. And I scream this loud. Everyone is still in Liberty or Mitaka.

I am just struggling to upgrade to LBaaSv2, only to hear that it is already
going into deprecation. The feature Zhi is proposing is also important for me
once I go to production.

I would encourage devs to listen more to operators' feedback. Also, you
devs can't just ignore that users are still running Liberty/Mitaka, so you
need to change something in this way of working or all the users will
run away.

thank you

Saverio


On 16/03/17 16:26, Kosnik, Lubosz wrote:
> Hello Zhi,
> Just one small piece of information. Yesterday at the Octavia weekly meeting we
> decided that we’re going to add new features to LBaaSv2 only until Pike-1, so the
> window is very small.
> This decision was made as LBaaSv2 is currently an Octavia deliverable, not
> Neutron's anymore, and this project is going into the deprecation stage.
> 
> Cheers,
> Lubosz
> 
>> On Mar 16, 2017, at 5:39 AM, zhi wrote:
>>
>> Hi, all
>> Currently, LBaaS v2 doesn't support migration. Just like router
>> instances, we can remove a router instance from one L3 agent and add
>> it to another L3 agent.
>>
>> So, there is a single point of failure in the LBaaS agent. As far as I know,
>> LBaaS supports " allow_automatic_lbaas_agent_failover ". But in many
>> cases, we want to migrate LBaaS instances manually. Do we plan to do this?
>>
>> I'm doing this right now, but I have run into a question. I define a function
>> in agent_scheduler.py like this:
>>
>> def remove_loadbalancer_from_lbaas_agent(self, context, agent_id,
>>                                           loadbalancer_id):
>>     self._unschedule_loadbalancer(context, loadbalancer_id, agent_id)
>>
>> The question is, how do I notify the LBaaS agent?
>>
>> Hope for your reply.
>>
>>
>>
>> Thanks
>> Zhi Chang
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org
>> ?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
SWITCH
Saverio Proto, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 1573
saverio.pr...@switch.ch, http://www.switch.ch

http://www.switch.ch/stories

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][deployment] Deployment Working Group (DWG)

2017-03-17 Thread j.kl...@cloudbau.de
+1 and big thanks for bringing this up!

OpenStack Chef will most likely use Ansible as an orchestrator for multi-node 
deployments in the near future, and I love the idea of sharing “common 
configuration options”.

Cheers,
Jan
On 14 March 2017 at 21:39:19, Emilien Macchi (emil...@redhat.com) wrote:

OpenStack community has been a welcoming place for Deployment tools.  
They bring great value to OpenStack because most of them are used to  
deploy OpenStack in production, versus a development environment.  

Over the last few years, deployment tools projects have been trying  
to solve similar challenges. Recently we've seen some desire to  
collaborate, work on common topics and resolve issues seen by all  
these tools.  

Some examples of collaboration:  

* OpenStack Ansible and Puppet OpenStack have been collaborating on  
Continuous Integration scenarios but also on Nova upgrades orchestration.  
* TripleO and Kolla share the same tool for container builds.  
* TripleO and Fuel share the same Puppet OpenStack modules.  
* OpenStack and Kubernetes are interested in collaborating on configuration  
management.  
* Most of tools want to collect OpenStack parameters for configuration  
management in a common fashion.  
* [more]  

The big tent helped to make these projects part of OpenStack, but no official  
group was created to share common problems across tools until now.  

During the Atlanta Project Team Gathering in 2017 [1], most of the current  
active deployment tools project team leaders met in a room and decided to  
start actual collaboration on different topics.  
This resolution is a first iteration of creating a new working group  
for Deployment Tools.  

The mission of the Deployment Working Group would be the following::  

To collaborate on best practices for deploying and configuring OpenStack  
in production environments.  


Note: in some cases, some challenges will be solved by adopting a technology  
or a tool. But sometimes, it won't happen because of the deployment tool  
background. This group would have to figure out how we can increase  
this adoption and converge to the same technologies eventually.  


For now, we'll use the wiki to document how we work together:  
https://wiki.openstack.org/wiki/Deployment  

The etherpad presented in [1] might be transformed in a Wiki page if  
needed but for now we expect people to update it.  

[1] https://etherpad.openstack.org/p/deployment-pike  


Let's make OpenStack deployments better and together ;-)  
Thanks,  
--  
Emilien Macchi  

__  
OpenStack Development Mailing List (not for usage questions)  
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][all] Reseller - do we need it?

2017-03-17 Thread Adrian Turjak
This actually sounds a lot like a domain per reseller, with a sub-domain per
reseller customer. With the reseller themselves probably also running a
sub-domain or two for themselves. Maybe even running their own external
federated user system for that specific reseller domain.

That or something like it could be doable. The reseller would be aware of the
resources (likely to bill) and projects (since you would still likely bill per
project or at least tag invoice items per project), so the hidden project
concept doesn't really fit.

One thing that I do think is useful, and we've done for our cloud, is letting
users see who exactly has access to their projects. For our Horizon we have a
custom identity/access control panel that shows clearly who has access on your
project and once I add on proper inheritance support will also list users who
have inherited access for the project you are currently scoped to. This means a
customer knows and can see when an admin has added themselves to their project
to help debug something. Plus it even helps them in general manage their own
user access better.

I know we've been looking at the reseller model ourselves but haven't really
gotten anywhere with it, partly because the margins didn't seem worth it, but
also because of the complexity of shoe-horning it around our existing openstack
deployment. Domain per reseller (reseller as domain admin) and sub-domain per
reseller customer (as sub-domain admin) I'm interested in!

On 17 Mar. 2017 10:27 pm, Henry Nash  wrote:

OK, so I worked on the original spec for this in Keystone, based around real
world requirements we (IBM) saw. For the record, here's the particular goal we
wanted to achieve:

1) A cloud owner (i.e. the enterprise owns and maintains the core of the
cloud), wants to attract more traffic by using third-party resellers or
partners.
2) Those resellers will sell "cloud" to their own customers and be responsible
for finding & managing such customers. These resellers want to be able to
onboard such customers and "hand them the admin keys" to their respective
(conceptual) pieces of the cloud. I.e. Reseller X signs up Customer Y. Customer
Y gets "keystone admin" for their bit of the (shared) cloud, and then can
create and manage their own users without either the Reseller or the Overall
cloud owner being involved. In keystone terms, each actual customer gets the
equivalent of a domain, so that their users are separate from any other
customers' users across all resellers.
3) Resellers will want to be able to have a view of all their customers (but
ONLY their customers, not those of another reseller), e.g. assign quotas to
each one…and make sure the overall quota for all their customers has not
exceeded their own quota agreed with the overall cloud owner. Resellers may
sell additional services to their customers, e.g. act as support for problems,
do backups, whatever - things that might need them to have controlled access to
particular customers' pieces of the cloud - i.e. they might need roles (or at
least some kind of access rights) on their customer's cloud.
4) In all of the above, by default, all hardware is shared and there are no
dedicated endpoints (e.g. nova, neutron, keystone etc. are all shared),
although such dedication should not be prevented should a customer want it.

The above is somewhat analogous to how mobile virtual network operators (MVNOs)
work - e.g. in the UK if I sign up for Sky Mobile, it is actually using the O2
network. As a Sky customer, I just know Sky - I'm not aware that Sky don't own
the network. Sky do own their own BSS/CRM systems - but they are not core
network components. The idea was to provide something similar for an OpenStack
cloud provider, where they could support VCOs (Virtual Cloud Operators) on
their cloud.

I agree there are multiple ways to provide such a capability - and it is often
difficult to decide what should be within the "Openstack" scope, and what
should be provided outside of it.

Henry

On 16 Mar 2017, at 21:10, Lance Bragstad  wrote:

Hey folks,

The reseller use case [0] has been popping up frequently in various
discussions [1], including unified limits.

For those who are unfamiliar with the reseller concept, it came out of early
discussions regarding hierarchical multi-tenancy (HMT). It essentially allows
a certain level of opaqueness within project trees. This opaqueness would make
it easier for providers to "resell" infrastructure, without having
customers/providers see all the way up and down the project tree, hence it was
termed reseller. Keystone originally had some ideas of how to implement this
after the HMT implementation laid the ground work, but it was never finished.

With it popping back up in conversations, I'm looking for folks who are
willing to represent the idea. Participating in this thread doesn't mean
you're on the hook for implementing it or anything like that.

Are you interested in reseller and willing to provide use-cases?

[0]

[openstack-dev] [all] [quotas] Unified Limits Conceptual Spec RFC

2017-03-17 Thread Sean Dague
Background:

At the Atlanta PTG there was yet another attempt to get hierarchical
quotas more generally addressed in OpenStack. A proposal was put forward
that considered storing the limit information in Keystone
(https://review.openstack.org/#/c/363765/). While there were some
concerns on details that emerged out of that spec, the concept of the
move to Keystone was actually really well received in that room by a
wide range of parties, and it seemed to solve some interesting questions
around project hierarchy validation. We were perilously close to having
a path forward for a community request that's had a hard time making
progress over the last couple of years.

Let's keep that flame alive!


Here is the proposal for the Unified Limits in Keystone approach -
https://review.openstack.org/#/c/440815/. It is intentionally a high
level spec that largely lays out where the conceptual levels of control
will be. It intentionally does not talk about specific quota models
(there is a follow on that is doing some of that, under the assumption
that the exact model(s) supported will take a while, and that the
keystone interfaces are probably not going to substantially change based
on model).

We're shooting for a 2 week comment cycle here to then decide if we can
merge and move forward during this cycle or not. So please
comment/question now (either in the spec or here on the mailing list).

It is especially important that we get feedback from teams that have
limits implementations internally, as well as any that have started on
hierarchical limits/quotas (which I believe Cinder is the only one).

Thanks for your time, and look forward to seeing comments on this.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][fuel-ccp][openstack-helm][magnum][all] Deliverables of OpenStack projects for container runtimes

2017-03-17 Thread Davanum Srinivas
Team,

I am asking some questions here:
https://etherpad.openstack.org/p/single-approach-to-containers-for-projects

Main question is, Is it time to collaborate across teams?

Please add your name on top with the color you are using so it's easy
to track who said what.

Thanks,
Dims

-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] No meeting on 2017-03-22 (next week)

2017-03-17 Thread Rob Cresswell
Hey everyone,

I'm away for a week, so there'll be no Horizon meeting on the 22nd.

Rob
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-23 and R-22, March 20-31

2017-03-17 Thread Thierry Carrez
Welcome back to our regular release countdown email. This one will serve
both for the R-23 and R-22 weeks.

Development Focus
-

Teams should be working on specs approval and implementation for
priority features for this cycle.

Actions
---

All project teams should be researching how they can meet the Pike
release goals [1]. Responses to those goals are due before the Pike-1
milestone, and are needed even if the work is already done. You can find
a refresher on the Goals process at [2].

[1] https://governance.openstack.org/tc/goals/pike/index.html
[2] https://governance.openstack.org/tc/goals/

Project teams that want to change their release model should also do so
before the Pike-1 milestone. This is done by proposing a change to the
Pike deliverable files at [3].

[3] http://git.openstack.org/cgit/openstack/releases/tree/deliverables/pike

The website to formally propose topics for discussion at the Forum in
Boston will be open on Monday, March 20. You will have until EOD April 2
to submit the result of your brainstorming there. Watch out for
announcements early next week. In the meantime, you can continue the
brainstorming using etherpads at [4].

[4] https://wiki.openstack.org/wiki/Forum/Boston2017

Upcoming Deadlines & Dates
--

Boston Forum topic formal submission period: March 20 - April 2
Pike-1 milestone: April 13 (R-20 week)
Forum at OpenStack Summit in Boston: May 8-11

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][all] Reseller - do we need it?

2017-03-17 Thread Henry Nash
OK, so I worked on the original spec for this in Keystone, based around real 
world requirements we (IBM) saw.  For the record, here’s the particular goal we 
wanted to achieve:

1) A cloud owner (i.e. the enterprise owns and maintains the core of the 
cloud), wants to attract more traffic by using third-party resellers or 
partners.
2) Those resellers will sell “cloud” to their own customers and be responsible 
for finding & managing such customers. These resellers want to be able to 
onboard such customers and “hand them the admin keys” to their respective 
(conceptual) pieces of the cloud. I.e. Reseller X signs up Customer Y. Customer 
Y gets “keystone admin” for their bit of the (shared) cloud, and then can 
create and manage their own users without either the Reseller or the Overall 
cloud owner being involved. In keystone terms, each actual customer gets the 
equivalent of a domain, so that their users are separate from any other 
customers’ users across all resellers.
3) Resellers will want to be able to have a view of all their customers (but 
ONLY their customers, not those of another reseller), e.g. assign quotas to 
each one…and make sure the overall quota for all their customers has not 
exceeded their own quota agreed with the overall cloud owner. Resellers may 
sell additional services to their customers, e.g. act as support for problems, 
do backups, whatever - things that might need them to have controlled access to 
particular customers' pieces of the cloud - i.e. they might need roles (or at 
least some kind of access rights) on their customer’s cloud.
4) In all of the above, by default, all hardware is shared and there are no 
dedicated endpoints (e.g. nova, neutron, keystone etc. are all shared), 
although such dedication should not be prevented should a customer want it.

The above is somewhat analogous to how mobile virtual network operators (MVNOs) 
work - e.g. in the UK if I sign up for Sky Mobile, it is actually using the O2 
network. As a Sky customer, I just know Sky - I’m not aware that Sky don’t own 
the network. Sky do own their own BSS/CRM systems - but they are not core 
network components. The idea was to provide something similar for an OpenStack 
cloud provider, where they could support VCOs (Virtual Cloud Operators) on 
their cloud.

I agree there are multiple ways to provide such a capability - and it is often 
difficult to decide what should be within the “Openstack” scope, and what 
should be provided outside of it.

Henry

> On 16 Mar 2017, at 21:10, Lance Bragstad  wrote:
> 
> Hey folks,
> 
> The reseller use case [0] has been popping up frequently in various 
> discussions [1], including unified limits.
> 
> For those who are unfamiliar with the reseller concept, it came out of early 
> discussions regarding hierarchical multi-tenancy (HMT). It essentially allows 
> a certain level of opaqueness within project trees. This opaqueness would 
> make it easier for providers to "resell" infrastructure, without having 
> customers/providers see all the way up and down the project tree, hence it 
> was termed reseller. Keystone originally had some ideas of how to implement 
> this after the HMT implementation laid the ground work, but it was never 
> finished.
> 
> With it popping back up in conversations, I'm looking for folks who are 
> willing to represent the idea. Participating in this thread doesn't mean 
> you're on the hook for implementing it or anything like that. 
> 
> Are you interested in reseller and willing to provide use-cases?
> 
> 
> 
> [0] 
> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/mitaka/reseller.html#problem-description
>  
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][vmware] CI results show up about a week late, if at all

2017-03-17 Thread Tracy Jones
Yes, unfortunately I am aware of it and we are struggling to correct it.  We 
recently acquired some new infrastructure that we can add to help with this.  I 
have a meeting early next week with the team to make it available, and I hope 
for a quickish resolution.  I’ll update you here with the plan.

In addition to adding infra, we are also looking to add some periodic “health 
check” patches to alert us when things are going awry.

We have people online, but they are in China and Bulgaria, so that does not 
help during US hours.  I’ll ask more folks in the US to hang out (including 
me), and you can always ping me directly 
(tjo...@vmware.com).

Tracy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][kolla-kubernetes][helm] inconsistent global values cause unpredictable install results

2017-03-17 Thread Hung Chung Chih
Hi all,
This is my first time sending to the mailing list; if I have made a mistake, 
please tell me. Thank you.

When I try to helm install the keystone service, I found that it sometimes 
succeeds but sometimes fails with an unexpected nil value for the field 
spec.externalIPs[0]. I reported a bug on Launchpad [1].

I then found that this issue occurs because the same variable exists in 
different microservices. For example, the 
global.kolla.keystone.all.port_external variable appears in both 
keystone-internal-svc and keystone-public-svc, true in one and false in the 
other. Before helm installs a package, it gathers all of the charts’ values 
inside the package, but a later chart’s value will overwrite a previous chart’s 
value. This makes the final value unpredictable. This issue does not happen in 
the normal helm use case, because a subchart usually has its own values and 
helm’s global values don’t support nesting yet [2].

In my view, this is a critical issue, because the install result can be 
inconsistent from one install to the next. I suggest we add a validation test 
that checks whether any variable is set to conflicting values by different 
microservices in all_values.yaml (a rough sketch follows the links below).

[1] https://bugs.launchpad.net/kolla-kubernetes/+bug/1670979 

[2] https://github.com/kubernetes/helm/blob/master/docs/charts.md 
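
To illustrate the kind of validation I mean, here is a rough Python sketch.
The file layout (one values.yaml per microservice chart) and the names are
assumptions for the example only, not the real kolla-kubernetes tree; the same
idea could be run over whatever pieces end up in all_values.yaml:

# Sketch of the proposed check: flag any key that is set to conflicting
# values by different microservice charts. Paths are hypothetical.
import glob
import yaml


def flatten(d, prefix=""):
    """Turn nested dicts into dotted keys, e.g. 'kolla.keystone.all.port_external'."""
    out = {}
    for key, value in d.items():
        full = "%s.%s" % (prefix, key) if prefix else key
        if isinstance(value, dict):
            out.update(flatten(value, full))
        else:
            out[full] = value
    return out


def find_conflicts(pattern):
    seen = {}        # dotted key -> (value, file that first set it)
    conflicts = []
    for path in sorted(glob.glob(pattern)):
        with open(path) as f:
            values = flatten(yaml.safe_load(f) or {})
        for key, value in values.items():
            if key in seen and seen[key][0] != value:
                conflicts.append((key, seen[key], (value, path)))
            seen.setdefault(key, (value, path))
    return conflicts


if __name__ == "__main__":
    # Hypothetical layout: one values.yaml per microservice chart.
    for key, first, second in find_conflicts("*/values.yaml"):
        print("CONFLICT %s: %r from %s vs %r from %s"
              % (key, first[0], first[1], second[0], second[1]))

Run against the per-chart values before packaging, this would fail loudly on
cases like the port_external one above instead of silently picking whichever
chart happens to be merged last.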
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][all] Reseller - do we need it?

2017-03-17 Thread Tim Bell
Lance,

I had understood that reseller was about having users/groups at different 
points in the tree.

I think the basic resource management is being looked at as part of the nested 
quotas functionality. For CERN, we’d look to delegate quota and role 
management but not support sub-tree users/groups.

Tim

From: Lance Bragstad 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, 17 March 2017 at 00:23
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [keystone][all] Reseller - do we need it?


On Thu, Mar 16, 2017 at 5:54 PM, Fox, Kevin M wrote:
At our site, we have some larger projects for which it would be really nice if 
we could just give a main project all the resources they need and let them 
suballocate them as their own internal subprojects' needs change. Right now, we 
have to deal with all the subprojects directly. Would the reseller concept fit 
this use case?

Sounds like this might also be solved by better RBAC that allows real project 
administrators to control their own subtrees. Is there a use case to limit 
visibility either up or down the tree? If not, would it be a nice-to-have?


Thanks,
Kevin

From: Lance Bragstad [lbrags...@gmail.com]
Sent: Thursday, March 16, 2017 2:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [keystone][all] Reseller - do we need it?
Hey folks,

The reseller use case [0] has been popping up frequently in various discussions 
[1], including unified limits.

For those who are unfamiliar with the reseller concept, it came out of early 
discussions regarding hierarchical multi-tenancy (HMT). It essentially allows a 
certain level of opaqueness within project trees. This opaqueness would make it 
easier for providers to "resell" infrastructure, without having 
customers/providers see all the way up and down the project tree, hence it was 
termed reseller. Keystone originally had some ideas of how to implement this 
after the HMT implementation laid the ground work, but it was never finished.

With it popping back up in conversations, I'm looking for folks who are willing 
to represent the idea. Participating in this thread doesn't mean you're on the 
hook for implementing it or anything like that.

Are you interested in reseller and willing to provide use-cases?



[0] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/mitaka/reseller.html#problem-description

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [dib][heat] dib-utils/dib-run-parts/dib v2 concern

2017-03-17 Thread Ian Wienand
On 03/17/2017 03:46 AM, Steven Hardy wrote:
> (undercloud) [stack@undercloud ~]$ rpm -qf /usr/bin/dib-run-parts
> dib-utils-0.0.11-1.el7.noarch
> (undercloud) [stack@undercloud ~]$ rpm -qf /bin/dib-run-parts
> dib-utils-0.0.11-1.el7.noarch

/bin is a link to /usr/bin?  So I think this is the same, and this is the
dib-run-parts as packaged by dib-utils.

> (undercloud) [stack@undercloud ~]$ rpm -qf 
> /usr/lib/python2.7/site-packages/diskimage_builder/lib/dib-run-parts
> diskimage-builder-2.0.1-0.20170314023517.756923c.el7.centos.noarch

This is dib's "private" copy.  As I mentioned in the other mail, the
intention was to vendor this so we could rewrite it for dib-specific
needs if need be (given future requirements such as running in
restricted container environments).  I believe having dib export
this was an (my) oversight.  I have proposed [1] to remove it.

> (undercloud) [stack@undercloud ~]$ rpm -qf /usr/local/bin/dib-run-parts
> file /usr/local/bin/dib-run-parts is not owned by any package

This would come from the image build process.  We copy dib-run-parts
into the chroot to run in-target scripts [2] but we never actually
remove it.  This seems to me to also be a bug and I've proposed [3] to
run this out of /tmp and clean it up.

> But the exciting thing from a rolling-out-bugfixes perspective is that the
> one actually running via o-r-c isn't either of the packaged versions (doh!)
> so we probably need to track down which element is installing it.
>
> This is a little OT for this thread (sorry), but hopefully provides more
> context around my concerns about creating another fork etc.

I don't want us to get a little too "left-pad" [4] over these 95-ish
lines of shell :) I think this stack clears things up.

tl;dr

 - dib version should be vendored; not in path & not exported [1]
 - unnecessary /usr/local/bin version removed [3]
 - dib-utils provides /usr/bin/ version

Cross-ports between the vendored dib version and dib-utils should be
trivial given what it is.  If dib wants to rewrite its vendored
version, or remove it completely, this will not affect anyone
depending on dib-utils.
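
(Aside: for anyone bitten by the "which copy am I actually running?"
confusion above, a quick throw-away check like the following Python 3
snippet is enough. The paths are just the ones discussed in this thread;
this is not part of any dib tooling.)

import os
import shutil

# Copies of dib-run-parts mentioned in this thread; purely illustrative.
candidates = [
    # packaged by dib-utils
    "/usr/bin/dib-run-parts",
    # stray copy left behind by the image build (removed by [3])
    "/usr/local/bin/dib-run-parts",
    # dib's vendored copy, not meant to be on $PATH (see [1])
    "/usr/lib/python2.7/site-packages/diskimage_builder/lib/dib-run-parts",
]

for path in candidates:
    print("%-70s %s" % (path, "present" if os.path.exists(path) else "absent"))

# Which copy an ordinary $PATH lookup would actually pick up:
print("resolved via $PATH:", shutil.which("dib-run-parts"))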

Thanks,

-i

[1] https://review.openstack.org/446285 (dib: do not provide dib-run-parts)
[2] 
https://git.openstack.org/cgit/openstack/diskimage-builder/tree/diskimage_builder/elements/dib-run-parts/root.d/90-base-dib-run-parts
[3] https://review.openstack.org/446769 (dib: run chroot dib-run-parts out of 
/tmp)
[4] http://left-pad.io/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev