[openstack-dev] Proposal: The OpenStack Client Library Guide

2018-04-05 Thread Adrian Turjak
Hello fellow OpenStackers,

As some of you have probably heard me rant, I've been thinking about how
to better solve the problem of tools that support OpenStack, or are
meant to be OpenStack clients, not always working the way those of us
directly in the community expect.

Mostly this is around things like auth and variable name conventions,
areas where there really should be consistency and overlap.

The example that most recently triggered this discussion was how
OpenStackClient (and os-client-config) supports certain elements of
clouds.yaml and ENVVAR config, while Terraform supports them
differently. You'd often run both on the CLI, often in the same
terminal, so it is always weird when certain auth and scoping values
don't work the same way. This particular case is being worked on, but
little problems like this are an ongoing issue.

The proposal: write an authoritative guide/spec on the basics of
implementing a client library or tool, in any given language, that talks
to OpenStack.

Elements we ought to cover:
- How all the various auth methods in Keystone work, how the whole authn
and authz process works with Keystone, and how to actually use it to do
what you want.
- What common client configuration options exist and how they work
(common variable names, ENVVARs, clouds.yaml), with something like
common ENVVARs documented and a list maintained so there is one
definitive source for what to expect people to be using.
- Per-project guides on how the API might act that help facilitate
starting to write code against it beyond just the API reference, with
examples of what to expect. Not exactly a duplicate of the API ref, but
more a 'common pitfalls and confusing elements to beware of' section
that builds on each project's API ref.
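To make the auth point above concrete, here is a minimal sketch (plain Python,
standard library only) of the request body a client library ends up building
for Keystone v3 password auth with project scoping. The user, project, and
domain names are placeholders, not a real cloud:

```python
import json


def build_password_auth(username, password, user_domain, project, project_domain):
    """Build a Keystone v3 password-auth request body with project scope.

    The resulting JSON is POSTed to <auth_url>/v3/auth/tokens; Keystone
    returns the token in the X-Subject-Token response header.
    """
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": user_domain},
                        "password": password,
                    }
                },
            },
            # Scoping is what trips up many third-party tools: omit this
            # section and Keystone hands back an unscoped token instead.
            "scope": {
                "project": {
                    "name": project,
                    "domain": {"name": project_domain},
                }
            },
        }
    }


body = build_password_auth("demo", "secret", "Default", "demo", "Default")
print(json.dumps(body, indent=2))
```

A guide like the one proposed would spell out exactly this shape, plus the
other auth methods (token, application credential, federation), so tools
don't have to reverse-engineer it from keystoneauth1.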

There are likely other things we want to include, and we need to work
out what those are, but ideally this should be a new
documentation-focused project resulting in a useful guide to what
someone needs in order to take any programming language and write a
library that works against OpenStack as we expect it should. Such a
guide would also help any existing libraries ensure they fully
understand and use the OpenStack auth and service APIs as expected. It
should also make it much easier for programmers working across multiple
languages and systems to interact with all the various libraries they
might touch.

A lot of this knowledge exists, but it's hard to parse and not well
documented. We have reference implementations of it all in the likes of
OpenStackClient, Keystoneauth1, and the OpenStackSDK itself (which
os-client-config is now a part of), but what we need is a language
agnostic guide rather than the assumption that people will read the code
of our official projects. Even the API ref itself isn't entirely helpful
since in a lot of cases it only covers the most basic of examples for
each API.
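As an illustration of the config problem, here's a rough sketch of the kind
of precedence logic every client ends up reimplementing (plain Python,
standard library only). The precedence order and the exact variable set here
are my assumptions for illustration, not a documented standard, which is
rather the point of the proposal:

```python
import os

# The conventional OS_* environment variables a client is expected to honour
# (a subset; the proposed guide would maintain the definitive list).
ENV_MAP = {
    "auth_url": "OS_AUTH_URL",
    "username": "OS_USERNAME",
    "password": "OS_PASSWORD",
    "project_name": "OS_PROJECT_NAME",
    "user_domain_name": "OS_USER_DOMAIN_NAME",
    "project_domain_name": "OS_PROJECT_DOMAIN_NAME",
}


def resolve_auth(cli_args, clouds_yaml_cloud, environ=os.environ):
    """Merge auth settings: CLI args beat OS_* env vars beat clouds.yaml.

    This ordering is an assumption; tools disagreeing on exactly this
    point is the kind of thing the guide would settle.
    """
    resolved = {}
    for key, envvar in ENV_MAP.items():
        if cli_args.get(key) is not None:
            resolved[key] = cli_args[key]
        elif envvar in environ:
            resolved[key] = environ[envvar]
        elif key in clouds_yaml_cloud.get("auth", {}):
            resolved[key] = clouds_yaml_cloud["auth"][key]
    return resolved


# A cloud entry as it might appear under the `clouds:` section of clouds.yaml.
cloud = {"auth": {"auth_url": "http://keystone:5000/v3", "username": "demo"}}
settings = resolve_auth({"password": "secret"}, cloud,
                        environ={"OS_USERNAME": "alice"})
print(settings)
```

Every tool that gets one branch of this wrong (or orders it differently) is
another "works in OpenStackClient, breaks in Terraform" bug report.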

There appears to be interest in something like this, so let's start with
a mailing list discussion, and potentially turn it into something more
official if it leads anywhere useful. :)

Cheers,
Adrian


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cyborg] Promote Li Liu as new core reviewer

2018-04-05 Thread Zhipeng Huang
Hi Team,

This is an email for my nomination of adding Li Liu to the core reviewer
team. Li Liu has been instrumental in the resource provider data model
implementation for Cyborg during Queens release, as well as metadata
standardization and programming design for Rocky.

His overall stats [0] and current stats [1] for Rocky speak for
themselves. His patches can be found here [2].

Given the amount of work under way for Rocky, it would be great to add
such an amazing force :)

[0]
http://stackalytics.com/?module=cyborg-group&metric=person-day&release=all
[1]
http://stackalytics.com/?module=cyborg-group&metric=person-day&release=rocky
[2] https://review.openstack.org/#/q/owner:liliueecg%2540gmail.com

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd.
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado


[openstack-dev] [api] Adding a SDK to developer.openstack.org pages

2018-04-05 Thread Gilles Dubreuil

Hi,

I'd like to update developer.openstack.org to add details about a 
new SDK.


What would be the corresponding repo? My searches landed me into 
https://docs.openstack.org/doc-contrib-guide/ which is about updating 
the docs.openstack.org but not developer.openstack.org. Is the developer 
section inside the docs section?


Thanks,
Gilles




Re: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from fedora-26 to fedora-27

2018-04-05 Thread Paul Belanger
On Wed, Apr 04, 2018 at 11:27:34AM -0400, Paul Belanger wrote:
> On Tue, Mar 13, 2018 at 10:54:26AM -0400, Paul Belanger wrote:
> > On Mon, Mar 05, 2018 at 06:45:13PM -0500, Paul Belanger wrote:
> > > Greetings,
> > > 
> > > A quick search of git shows your projects are using fedora-26 nodes for 
> > > testing.
> > > Please take a moment to look at gerrit[1] and help land patches.  We'd 
> > > like to
> > > remove fedora-26 nodes in the next week and to avoid broken jobs you'll 
> > > need to
> > > approve these patches.
> > > 
> > > If your jobs are failing under fedora-27, please take the time to fix any 
> > > issues
> > > or update said patches to make them non-voting.
> > > 
> > > We (openstack-infra) aim to keep only the latest fedora image online, 
> > > which
> > > changes approx. every 6 months.
> > > 
> > > Thanks for your help and understanding,
> > > Paul
> > > 
> > > [1] https://review.openstack.org/#/q/topic:fedora-27+status:open
> > > 
> > Greetings,
> > 
> > This is a friendly reminder, about moving jobs to fedora-27. I'd like to 
> > remove
> > our fedora-26 images next week and if jobs haven't been migrated you may 
> > start
> > to see NODE_FAILURE messages while running jobs.  Please take a moment to 
> > merge
> > the open changes or update them to be non-voting while you work on fixes.
> > 
> > Thanks again,
> > Paul
> > 
> Hi,
> 
> It's been a month since we started asking projects to migrate to fedora-27.
> 
> I've proposed the patch to remove fedora-26 nodes from nodepool[2]; if your
> project hasn't merged the patches above you will start to see NODE_FAILURE
> results for your jobs. Please take the time to approve the changes above.
> 
> Because new fedora images come online every 6 months, we like to keep only
> one of them online at any given time. Fedora is meant to be a fast-moving
> distro that picks up new versions of software outside of the Ubuntu LTS
> releases.
> 
> If you have any questions please reach out to us in #openstack-infra.
> 
> Thanks,
> Paul
> 
> [2] https://review.openstack.org/558847/
> 
We've just landed the patch and the fedora-26 images are now removed. If you
haven't upgraded your jobs to fedora-27, you'll now start seeing NODE_FAILURE
returned by Zuul.

If you have any questions please reach out to us in #openstack-infra.

Thanks,
Paul



Re: [openstack-dev] [infra][qa][requirements] Pip 10 is on the way

2018-04-05 Thread Paul Belanger
On Thu, Apr 05, 2018 at 01:27:13PM -0700, Clark Boylan wrote:
> On Mon, Apr 2, 2018, at 9:13 AM, Clark Boylan wrote:
> > On Mon, Apr 2, 2018, at 8:06 AM, Matthew Thode wrote:
> > > On 18-03-31 15:00:27, Jeremy Stanley wrote:
> > > > According to a notice[1] posted to the pypa-announce and
> > > > distutils-sig mailing lists, pip 10.0.0.b1 is on PyPI now and 10.0.0
> > > > is expected to be released in two weeks (over the April 14/15
> > > > weekend). We know it's at least going to start breaking[2] DevStack
> > > > and we need to come up with a plan for addressing that, but we don't
> > > > know how much more widespread the problem might end up being, so we
> > > > encourage everyone to try it out now where they can.
> > > > 
> > > 
> > > I'd like to suggest locking down pip/setuptools/wheel like openstack
> > > ansible is doing in 
> > > https://github.com/openstack/openstack-ansible/blob/master/global-requirement-pins.txt
> > > 
> > > We could maintain it as a separate constraints file (or infra could
> > > maintain it, doesn't matter).  The file would only be used for the
> > > initial get-pip install.
> > 
> > In the past we've done our best to avoid pinning these tools because 1) 
> > we've told people they should use latest for openstack to work and 2) it 
> > is really difficult to actually control what versions of these tools end 
> > up on your systems if not latest.
> > 
> > I would strongly push towards addressing the distutils package deletion 
> > problem that we've run into with pip10 instead. One of the approaches 
> > thrown out that pabelanger is working on is to use a common virtualenv 
> > for devstack and avoid the system package conflict entirely.
> 
> I was mistaken and pabelanger was working to get devstack's USE_VENV option 
> working which installs each service (if the service supports it) into its own 
> virtualenv. There are two big drawbacks to this. This first is that we would 
> lose coinstallation of all the openstack services which is one way we ensure 
> they all work together at the end of the day. The second is that not all 
> services in "base" devstack support USE_VENV and I doubt many plugins do 
> either (neutron apparently doesn't?).
> 
Yah, I agree your approach is the better one; I just wanted to toggle what was
supported by default. However, it is pretty broken today.  I can't imagine
anybody actually using it; if so, they must be carrying downstream patches.

If we think USE_VENV is a valid use case for per-project venvs, I suggest we
continue to fix it and update neutron to support it.  Otherwise, we should
maybe rip it out and replace it.

Paul

> I've since worked out a change that passes tempest using a global virtualenv 
> installed devstack at https://review.openstack.org/#/c/558930/. This needs to 
> be cleaned up so that we only check for and install the virtualenv(s) once 
> and we need to handle mixed python2 and python3 environments better (so that 
> you can run a python2 swift and python3 everything else).
> 
> The other major issue we've run into is that nova file injection (which is 
> tested by tempest) seems to require either libguestfs or nbd. libguestfs 
> bindings for python aren't available on pypi and instead we get them from 
> system packaging. This means if we want libguestfs support we have to enable 
> system site packages when using virtualenvs. The alternative is to use nbd 
> which apparently isn't preferred by nova and doesn't work under current 
> devstack anyways.
> 
> Why is this a problem? Well the new pip10 behavior that breaks devstack is 
> pip10's refusal to remove distutils-installed packages. Distro packages by 
> and large are distutils packaged which means if you mix system packages and 
> pip installed packages there is a good chance something will break (and it 
> does break for current devstack). I'm not sure that using a virtualenv with 
> system site packages enabled will sufficiently protect us from this case (but 
> we should test it further). Also it feels wrong to enable system packages in 
> a virtualenv if the entire point is avoiding system python packages.
> 
> I'm not sure what the best option is here but if we can show that system site 
> packages with virtualenvs is viable with pip10 and people want to move 
> forward with devstack using a global virtualenv we can work to clean up this 
> change and make it mergeable.
> 
> Clark
> 


[openstack-dev] zun-api error

2018-04-05 Thread Murali B
Hi Hongbin,

Thank you for your help

As per our discussion, here is the output for my current API on Pike. I
am not sure which version of the zun client I should use for Pike.
root@cluster3-2:~/python-zunclient# zun service-list
ERROR: Not Acceptable (HTTP 406) (Request-ID:
req-be69266e-b641-44b9-9739-0c2d050f18b3)
root@cluster3-2:~/python-zunclient# zun --debug service-list
DEBUG (extension:180) found extension EntryPoint.parse('vitrage-keycloak =
vitrageclient.auth:VitrageKeycloakLoader')
DEBUG (extension:180) found extension EntryPoint.parse('vitrage-noauth =
vitrageclient.auth:VitrageNoAuthLoader')
DEBUG (extension:180) found extension EntryPoint.parse('noauth =
cinderclient.contrib.noauth:CinderNoAuthLoader')
DEBUG (extension:180) found extension EntryPoint.parse('v2token =
keystoneauth1.loading._plugins.identity.v2:Token')
DEBUG (extension:180) found extension EntryPoint.parse('none =
keystoneauth1.loading._plugins.noauth:NoAuth')
DEBUG (extension:180) found extension EntryPoint.parse('v3oauth1 =
keystoneauth1.extras.oauth1._loading:V3OAuth1')
DEBUG (extension:180) found extension EntryPoint.parse('admin_token =
keystoneauth1.loading._plugins.admin_token:AdminToken')
DEBUG (extension:180) found extension EntryPoint.parse('v3oidcauthcode =
keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAuthorizationCode')
DEBUG (extension:180) found extension EntryPoint.parse('v2password =
keystoneauth1.loading._plugins.identity.v2:Password')
DEBUG (extension:180) found extension EntryPoint.parse('v3samlpassword =
keystoneauth1.extras._saml2._loading:Saml2Password')
DEBUG (extension:180) found extension EntryPoint.parse('v3password =
keystoneauth1.loading._plugins.identity.v3:Password')
DEBUG (extension:180) found extension EntryPoint.parse('v3adfspassword =
keystoneauth1.extras._saml2._loading:ADFSPassword')
DEBUG (extension:180) found extension EntryPoint.parse('v3oidcaccesstoken =
keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAccessToken')
DEBUG (extension:180) found extension EntryPoint.parse('v3oidcpassword =
keystoneauth1.loading._plugins.identity.v3:OpenIDConnectPassword')
DEBUG (extension:180) found extension EntryPoint.parse('v3kerberos =
keystoneauth1.extras.kerberos._loading:Kerberos')
DEBUG (extension:180) found extension EntryPoint.parse('token =
keystoneauth1.loading._plugins.identity.generic:Token')
DEBUG (extension:180) found extension
EntryPoint.parse('v3oidcclientcredentials =
keystoneauth1.loading._plugins.identity.v3:OpenIDConnectClientCredentials')
DEBUG (extension:180) found extension EntryPoint.parse('v3tokenlessauth =
keystoneauth1.loading._plugins.identity.v3:TokenlessAuth')
DEBUG (extension:180) found extension EntryPoint.parse('v3token =
keystoneauth1.loading._plugins.identity.v3:Token')
DEBUG (extension:180) found extension EntryPoint.parse('v3totp =
keystoneauth1.loading._plugins.identity.v3:TOTP')
DEBUG (extension:180) found extension
EntryPoint.parse('v3applicationcredential =
keystoneauth1.loading._plugins.identity.v3:ApplicationCredential')
DEBUG (extension:180) found extension EntryPoint.parse('password =
keystoneauth1.loading._plugins.identity.generic:Password')
DEBUG (extension:180) found extension EntryPoint.parse('v3fedkerb =
keystoneauth1.extras.kerberos._loading:MappedKerberos')
DEBUG (extension:180) found extension EntryPoint.parse('v1password =
swiftclient.authv1:PasswordLoader')
DEBUG (extension:180) found extension EntryPoint.parse('token_endpoint =
openstackclient.api.auth_plugin:TokenEndpoint')
DEBUG (extension:180) found extension EntryPoint.parse('gnocchi-basic =
gnocchiclient.auth:GnocchiBasicLoader')
DEBUG (extension:180) found extension EntryPoint.parse('gnocchi-noauth =
gnocchiclient.auth:GnocchiNoAuthLoader')
DEBUG (extension:180) found extension EntryPoint.parse('aodh-noauth =
aodhclient.noauth:AodhNoAuthLoader')
DEBUG (session:372) REQ: curl -g -i -X GET http://ubuntu16:35357/v3 -H
"Accept: application/json" -H "User-Agent: zun keystoneauth1/3.4.0
python-requests/2.18.1 CPython/2.7.12"
DEBUG (connectionpool:207) Starting new HTTP connection (1): ubuntu16
DEBUG (connectionpool:395) http://ubuntu16:35357 "GET /v3 HTTP/1.1" 200 248
DEBUG (session:419) RESP: [200] Date: Thu, 05 Apr 2018 23:11:07 GMT Server:
Apache/2.4.18 (Ubuntu) Vary: X-Auth-Token X-Distribution: Ubuntu
x-openstack-request-id: req-3b1a12cc-fb3f-4d05-87fc-d2a1ff43395c
Content-Length: 248 Keep-Alive: timeout=5, max=100 Connection: Keep-Alive
Content-Type: application/json
RESP BODY: {"version": {"status": "stable", "updated":
"2017-02-22T00:00:00Z", "media-types": [{"base": "application/json",
"type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.8",
"links": [{"href": "http://ubuntu16:35357/v3/", "rel": "self"}]}}

DEBUG (session:722) GET call to None for http://ubuntu16:35357/v3 used
request id req-3b1a12cc-fb3f-4d05-87fc-d2a1ff43395c
DEBUG (base:175) Making authentication request to
http://ubuntu16:35357/v3/auth/tokens
DEBUG (connectionpool:395) 

Re: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Stein" release

2018-04-05 Thread Matt Riedemann

On 4/5/2018 3:32 PM, Thomas Goirand wrote:

If you don't absolutely need new features from libvirt 3.2.0 and 3.0.0
is fine, please choose 3.0.0 as minimum.

If you don't absolutely need new features from qemu 2.9.0 and 2.8.0 is
fine, please choose 2.8.0 as minimum.

If you don't absolutely need new features from libguestfs 1.36 and 1.34
is fine, please choose 1.34 as minimum.


New features in the libvirt driver which depend on minimum versions of 
libvirt/qemu/libguestfs (or arch for that matter) are always 
conditional, so I think it's reasonable to go with the lower bound for 
Debian. We can still support the features for the newer versions if 
you're running a system with those versions, but not penalize people 
with slightly older versions if not.


--

Thanks,

Matt



[openstack-dev] [nova] heads up, tox -e[pep|fast]8 defaulting to python3

2018-04-05 Thread melanie witt

Howdy everyone,

We recently updated the tox pep8 and fast8 environments to default to 
using python3 [0] because it has stricter checks and we wanted to make 
sure we don't let pep8 errors get through the CI gate [1].


Because of this, you'll need the python3 and python3-dev packages in 
your environment in order to run tox -e[pep|fast]8.


Thanks,
-melanie

[0] https://review.openstack.org/#/c/558648
[1] 
http://lists.openstack.org/pipermail/openstack-dev/2018-April/129025.html




Re: [openstack-dev] [infra][qa][requirements] Pip 10 is on the way

2018-04-05 Thread Clark Boylan
On Thu, Apr 5, 2018, at 1:27 PM, Clark Boylan wrote:
> The other major issue we've run into is that nova file injection (which 
> is tested by tempest) seems to require either libguestfs or nbd. 
> libguestfs bindings for python aren't available on pypi and instead we 
> get them from system packaging. This means if we want libguestfs support 
> we have to enable system site packages when using virtualenvs. The 
> alternative is to use nbd which apparently isn't preferred by nova and 
> doesn't work under current devstack anyways.
> 
> Why is this a problem? Well the new pip10 behavior that breaks devstack 
> is pip10's refusal to remove distutils-installed packages. Distro 
> packages by and large are distutils packaged which means if you mix 
> system packages and pip installed packages there is a good chance 
> something will break (and it does break for current devstack). I'm not 
> sure that using a virtualenv with system site packages enabled will 
> sufficiently protect us from this case (but we should test it further). 
> Also it feels wrong to enable system packages in a virtualenv if the 
> entire point is avoiding system python packages.

Good news everyone,
http://logs.openstack.org/74/559174/1/check/tempest-full-py3/4c5548f/job-output.txt.gz#_2018-04-05_21_26_36_669943
shows that pip10 appears to do the right thing with a virtualenv using the
system-site-packages option when attempting to install a newer version of a
package that would require being deleted if installed on the system python
proper.

It determines there is an existing package, that it is outside the env and it
cannot uninstall it, then installs a newer version of the package anyway.

If you look later in the job run you'll see it fails in the system python 
context on this same package, 
http://logs.openstack.org/74/559174/1/check/tempest-full-py3/4c5548f/job-output.txt.gz#_2018-04-05_21_29_31_399895.

I think that means this is a viable workaround for us even if it isn't ideal.
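For anyone wanting to poke at this locally, the venv configuration under
discussion is easy to reproduce with just the standard library; this sketch
creates a throwaway env with system site-packages enabled and shows where the
flag is recorded (the directory name is arbitrary):

```python
import tempfile
import venv
from pathlib import Path

# Create a throwaway virtualenv with access to the system site-packages,
# i.e. the configuration being discussed as a pip 10 workaround.
# with_pip=False keeps this fast and avoids running ensurepip.
env_dir = Path(tempfile.mkdtemp()) / "global-venv"
builder = venv.EnvBuilder(system_site_packages=True, with_pip=False)
builder.create(env_dir)

# The setting is persisted in pyvenv.cfg, which the interpreter reads at
# startup to decide whether system packages are visible inside the env.
cfg = (env_dir / "pyvenv.cfg").read_text()
print(cfg)
```

Pip running inside such an env sees the system distutils-installed package but
treats it as outside its control, which is exactly the behavior observed in
the job log above.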

Clark



Re: [openstack-dev] [infra][qa][requirements] Pip 10 is on the way

2018-04-05 Thread Luigi Toscano
On Thursday, 5 April 2018 22:44:32 CEST Jeremy Stanley wrote:
> On 2018-04-05 13:27:13 -0700 (-0700), Clark Boylan wrote:
> [...]
> 
> > I'm not sure what the best option is here but if we can show that
> > system site packages with virtualenvs is viable with pip10 and
> > people want to move forward with devstack using a global
> > virtualenv we can work to clean up this change and make it
> > mergeable.
> 
> Ideally, someone convinces the libguestfs authors of the benefits of
> putting sdist/wheel builds of their python module on PyPI like
> (eventually) happened with libvirt.

It may not be trivial:
https://bugzilla.redhat.com/show_bug.cgi?id=1075594

On the other hand, not being able to use system site packages with a 
virtualenv does not sound good either.

Ciao
-- 
Luigi





Re: [openstack-dev] [infra][qa][requirements] Pip 10 is on the way

2018-04-05 Thread Jeremy Stanley
On 2018-04-05 13:27:13 -0700 (-0700), Clark Boylan wrote:
[...]
> I'm not sure what the best option is here but if we can show that
> system site packages with virtualenvs is viable with pip10 and
> people want to move forward with devstack using a global
> virtualenv we can work to clean up this change and make it
> mergeable.

Ideally, someone convinces the libguestfs authors of the benefits of
putting sdist/wheel builds of their python module on PyPI like
(eventually) happened with libvirt.
-- 
Jeremy Stanley




Re: [openstack-dev] [kolla] [tripleo] On moving start scripts out of Kolla images

2018-04-05 Thread Michał Jastrzębski
So I'll reiterate the comment I made in BCN. In a previous thread we
praised how Kolla provided a stable API for images, and I agree that it
was a great design choice (to provide a stable API, not necessarily how
the API looks), and this change would break it. So *if* we decide to do
it, we need to follow deprecation, which means we could deprecate these
files in this release and start removing them in the next.

Support for LOCI in kolla-ansible is a good thing, but I don't think
changing the Kolla image API is required for that. LOCI provides a base
image argument, so we could simply create a base image with all the
extended-start and set-config mechanisms and some shim to source the
extended-start script that belongs to a particular container. We will
need a kolla layer image anyway because set_config is there to stay (as
Martin pointed out, it's a valuable tool fixing a real issue and it's
used by more projects than just kolla-ansible). We could add another
script that would look like extended_start.sh -> source
$CONTAINER_NAME-extended-start.sh and copy all kolla's extended-start
scripts to a dir with proper naming (I believe this is the solution that
Sam came up with shortly after BCN). This is purely technical and not
that hard to do, much quicker and easier than deprecating the API...
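For readers who haven't looked inside a kolla container: the interface being
debated is the bind-mounted config.json that set_configs.py consumes. A
minimal sketch of its shape, built in Python for illustration; the service
command and paths are made up, and the exact schema should be checked against
kolla's set_configs.py:

```python
import json

# Minimal config.json as consumed by kolla's set_configs.py: the command to
# exec as PID 1, plus the config files to copy into place with the requested
# ownership and permissions before the service starts.
config = {
    "command": "/usr/bin/keystone-wsgi-public",  # hypothetical service command
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/keystone.conf",
            "dest": "/etc/keystone/keystone.conf",
            "owner": "keystone",
            "perm": "0600",
        }
    ],
}
print(json.dumps(config, indent=2))
```

This is the contract Paul is arguing newcomers shouldn't have to learn, and
that Michał is arguing is stable API that can only go away via deprecation.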

On 5 April 2018 at 12:28, Martin André  wrote:
> On Thu, Apr 5, 2018 at 2:16 PM, Paul Bourke  wrote:
>> Hi all,
>>
>> This mail is to serve as a follow on to the discussion during yesterday's
>> team meeting[4], which was regarding the desire to move start scripts out of
>> the kolla images [0]. There's a few factors at play, and it may well be best
>> left to discuss in person at the summit in May, but hopefully we can get at
>> least some of this hashed out before then.
>>
>> I'll start by summarising why I think this is a good idea, and then attempt
>> to address some of the concerns that have come up since.
>>
>> First off, to be frank, this effort is driven by wanting to add support
>> for loci images[1] in kolla-ansible. I think it would be unreasonable for
>> anyone to argue this is a bad objective to have, loci images have very
>> obvious benefits over what we have in Kolla today. I'm not looking to drop
>> support for Kolla images at all, I simply want to continue decoupling things
>> to the point where operators can pick and choose what works best for them.
>> Stemming from this, I think moving these scripts out of the images provides
>> a clear benefit to our consumers, both users of kolla and third parties such
>> as triple-o. Let me explain why.
>
> It's still very obscure to me how removing the scripts from kolla
> images will benefit consumers. If the reason is that you want to
> re-use them in other, non-kolla images, I believe we should package
> the scripts. I've left some comments in your spec review.
>
>> Normally, to run a docker image, a user will do 'docker run
>> helloworld:latest'. In any non trivial application, config needs to be
>> provided. In the vast majority of cases this is either provided via a bind
>> mount (docker run -v hello.conf:/etc/hello.conf helloworld:latest), or via
>> environment variables (docker run --env HELLO=paul helloworld:latest). This
>> is all bog standard stuff, something anyone who's spent an hour learning
>> docker can understand.
>>
>> Now, lets say someone wants to try out OpenStack with Docker, and they look
>> at Kolla. First off they have to look at something called set_configs.py[2]
>> - over 400 lines of Python. Next they need to understand what that script
>> consumes, config.json [3]. The only reference for config.json is the files
>> that live in kolla-ansible, a mass of jinja and assumptions about how the
>> service will be run. Next, they need to figure out how to bind mount the
>> config files and config.json into the container in a way that can be
>> consumed by set_configs.py (which by the way, requires the base kolla image
>> in all cases). This is only for the config. For the service start up
>> command, this need to also be provided in config.json. This command is then
>> parsed out and written to a location in the image, which is consumed by a
>> series of start/extend start shell scripts. Kolla is *unique* in this
>> regard, no other project in the container world is interfacing with images
>> in this way. Being a snowflake in this regard is not a good thing. I'm still
>> waiting to hear from a real world operator who would prefer to spend time
>> learning the above to doing:
>
> You're pointing a very real documentation issue. I've mentioned in the
> other kolla thread that I have a stub for the kolla API documentation.
> I'll push a patch for what I have and we can iterate on that.
>
>>   docker run -v /etc/keystone:/etc/keystone keystone:latest --entrypoint
>> /usr/bin/keystone [args]
>>
>> This is the Docker API, it's easy to understand and pretty much the standard
>> at this point.
>
> Sure, using the docker API works for simpler cases, 

Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-04-05 Thread Zane Bitter

On 21/03/18 06:49, Stephen Finucane wrote:

As noted by Monty in a prior openstack-dev post [2], some projects rely
on a pbr extension to the 'build_sphinx' setuptools command which can
automatically run the 'sphinx-apidoc' tool before building docs. This
is enabled by configuring some settings in the '[pbr]' section of the
'setup.cfg' file [3]. To ensure this continued working, the zuul jobs
definitions [4] check for the presence of these settings and build docs
using the legacy 'build_sphinx' command if found. **At no point do the
jobs call the tox job**. As a result, if you convert a project to use
'sphinx-build' in 'tox.ini' without resolving the autodoc issues, you
lose the ability to build docs locally.

I've gone through and proposed a couple of reverts to fix projects
we've already broken. However, going forward, there are two things
people should do to prevent issues like this popping up.

  * Firstly, you should remove the '[build_sphinx]' and '[pbr]' sections
from 'setup.cfg' in any patches that aim to convert a project to use
the new PTI. This will ensure the gate catches any potential
issues.


How can we enable warning_is_error in the gate with the new PTI? It's 
easy enough to add the -W flag in tox.ini for local builds, but as you 
say the tox job is never called in the gate. In the gate zuul checks for 
it in the [build_sphinx] section of setup.cfg:


https://git.openstack.org/cgit/openstack-infra/zuul-jobs/tree/roles/sphinx/library/sphinx_check_warning_is_error.py#n23

So I think it makes more sense to remove the [pbr] section, but leave 
the [build_sphinx] section?


thanks,
Zane.



Re: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Stein" release

2018-04-05 Thread Thomas Goirand
On 04/04/2018 10:45 AM, Kashyap Chamarthy wrote:
> Answering my own questions about Debian --
> 
> From looking at the Debian Archive[1][2], these are the versions for
> 'Stretch' (the current stable release) and in the upcoming 'Buster'
> release:
> 
> libvirt | 3.0.0-4+deb9u2 | stretch
> libvirt | 4.1.0-2| buster
> 
> qemu| 1:2.8+dfsg-6+deb9u3| stretch
> qemu| 1:2.11+dfsg-1  | buster
> 
> I also talked on #debian-backports IRC channel on OFTC network, where I
> asked: 
> 
> "What I'm essentially looking for is: "How can 'stretch' users get
> libvirt 3.2.0 and QEMU 2.9.0, even if via a different repository.
> As they are proposed to be least common denominator versions across
> distributions."
> 
> And two people said: Then the versions from 'Buster' could be backported
> to 'stretch-backports'.  The process for that is to: "ask the maintainer
> of those package and Cc to the backports mailing list."
> 
> Any takers?
> 
> [0] https://packages.debian.org/stretch-backports/
> [1] https://qa.debian.org/madison.php?package=libvirt
> [2] https://qa.debian.org/madison.php?package=qemu

Hi Kashyap,

Thanks for considering Debian, asking me, and giving enough time
to answer! Here are my thoughts.

I updated the wiki page as you suggested [1]. As I wrote on IRC, we
don't need to care about Jessie, so I removed Jessie and added Buster/SID.

tl;dr: just skip this section & go to conclusion

Backporting libvirt/QEMU/libguestfs in more detail
---

I already attempted the backports from Debian Buster to Stretch.

All three components (libvirt, qemu & libguestfs) could be built
without extra dependencies, which is a very good thing.

- libvirt 4.1.0 compiled without issue, though the dh_install phase
failed with this error:

dh_install: Cannot find (any matches for) "/usr/lib/*/wireshark/" (tried
in "." and "debian/tmp")
dh_install: libvirt-wireshark missing files: /usr/lib/*/wireshark/
dh_install: missing files, aborting

Without investigating beyond this build log, it looks like a minor fix
to the debian/*.install files would make the package backportable.

- qemu 2.11 built perfectly with zero change.

- libguestfs 1.36.13 only needed to have fdisk replaced by util-linux as
build-depends (fdisk is now a separate package in Buster).

So it looks easy to backport these three packages *AT THIS TIME*. [2]

However, without a crystal ball, nobody can tell how hard it will be to
backport these *IN A YEAR FROM NOW*.

Conclusion:
---

If you don't absolutely need new features from libvirt 3.2.0 and 3.0.0
is fine, please choose 3.0.0 as minimum.

If you don't absolutely need new features from qemu 2.9.0 and 2.8.0 is
fine, please choose 2.8.0 as minimum.

If you don't absolutely need new features from libguestfs 1.36 and 1.34
is fine, please choose 1.34 as minimum.

If you do need these new features, I'll do my best to adapt. :)
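To make the consequence of those floors concrete, a consuming project would typically express them as version tuples and compare against what the platform provides. A hedged sketch (the constants reflect the minimums proposed above, not any project's actual settings, and the helper names are invented for this example):

```python
# Illustrative only: gating on the minimum versions proposed in this
# thread. Projects like nova keep similar MIN_*_VERSION constants, but
# these names and values are just this thread's proposal.
MIN_LIBVIRT_VERSION = (3, 0, 0)
MIN_QEMU_VERSION = (2, 8, 0)
MIN_LIBGUESTFS_VERSION = (1, 34, 0)

def parse_version(text):
    """Turn a plain dotted upstream version like '3.0.0' into a tuple.

    Note: this assumes upstream version strings, not Debian package
    versions with epochs/suffixes like '1:2.8+dfsg-6+deb9u3'.
    """
    return tuple(int(part) for part in text.split("."))

def meets_minimum(installed, minimum):
    return parse_version(installed) >= minimum
```

So with 3.0.0 as the libvirt floor, Stretch's 3.0.0-4+deb9u2 passes without any backport, which is the whole point of the conclusion above.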

About Buster freeze & OpenStack Stein backports to Debian Stretch
-

Now, about Buster. As you know, Debian doesn't have planned release
dates. Though here's the stats showing that roughly, there's a new
Debian every 2 years, and the freeze takes about 6 months.

https://wiki.debian.org/DebianReleases#Release_statistics

By this logic, and considering that Stretch was released last June,
Buster will probably start its freeze around the time Stein is released. If the
Debian freeze happens later, good for me, I'll have more time to make
Stein better. But then Debian users will probably expect an OpenStack
Stein backport to Debian Stretch, and that's where it can become tricky
to backport these 3 packages.

The end
---

I hope the above isn't too long, and that it helps you make the best decision.
Cheers,

Thomas Goirand (zigo)

[1]
https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix#Distro_minimum_versions

[2] I'm not shouting, just highlighting the important part! :)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][qa][requirements] Pip 10 is on the way

2018-04-05 Thread Clark Boylan
On Mon, Apr 2, 2018, at 9:13 AM, Clark Boylan wrote:
> On Mon, Apr 2, 2018, at 8:06 AM, Matthew Thode wrote:
> > On 18-03-31 15:00:27, Jeremy Stanley wrote:
> > > According to a notice[1] posted to the pypa-announce and
> > > distutils-sig mailing lists, pip 10.0.0.b1 is on PyPI now and 10.0.0
> > > is expected to be released in two weeks (over the April 14/15
> > > weekend). We know it's at least going to start breaking[2] DevStack
> > > and we need to come up with a plan for addressing that, but we don't
> > > know how much more widespread the problem might end up being so
> > > encourage everyone to try it out now where they can.
> > > 
> > 
> > I'd like to suggest locking down pip/setuptools/wheel like openstack
> > ansible is doing in 
> > https://github.com/openstack/openstack-ansible/blob/master/global-requirement-pins.txt
> > 
> > We could maintain it as a separate constraints file (or infra could
> > maintain it, doesn't matter).  The file would only be used for the
> > initial get-pip install.
> 
> In the past we've done our best to avoid pinning these tools because 1) 
> we've told people they should use latest for openstack to work and 2) it 
> is really difficult to actually control what versions of these tools end 
> up on your systems if not latest.
> 
> I would strongly push towards addressing the distutils package deletion 
> problem that we've run into with pip10 instead. One of the approaches 
> thrown out that pabelanger is working on is to use a common virtualenv 
> for devstack and avoid the system package conflict entirely.

I was mistaken and pabelanger was working to get devstack's USE_VENV option 
working which installs each service (if the service supports it) into its own 
virtualenv. There are two big drawbacks to this. This first is that we would 
lose coinstallation of all the openstack services which is one way we ensure 
they all work together at the end of the day. The second is that not all 
services in "base" devstack support USE_VENV and I doubt many plugins do either 
(neutron apparently doesn't?).

I've since worked out a change that passes tempest using a global virtualenv 
installed devstack at https://review.openstack.org/#/c/558930/. This needs to 
be cleaned up so that we only check for and install the virtualenv(s) once and 
we need to handle mixed python2 and python3 environments better (so that you 
can run a python2 swift and python3 everything else).

The other major issue we've run into is that nova file injection (which is 
tested by tempest) seems to require either libguestfs or nbd. libguestfs 
bindings for python aren't available on pypi and instead we get them from 
system packaging. This means if we want libguestfs support we have to enable 
system site packages when using virtualenvs. The alternative is to use nbd 
which apparently isn't preferred by nova and doesn't work under current 
devstack anyways.

Why is this a problem? Well, the new pip10 behavior that breaks devstack is 
pip10's refusal to remove distutils-installed packages. Distro packages by 
and large are distutils packaged which means if you mix system packages and pip 
installed packages there is a good chance something will break (and it does 
break for current devstack). I'm not sure that using a virtualenv with system 
site packages enabled will sufficiently protect us from this case (but we 
should test it further). Also it feels wrong to enable system packages in a 
virtualenv if the entire point is avoiding system python packages.

I'm not sure what the best option is here but if we can show that system site 
packages with virtualenvs is viable with pip10 and people want to move forward 
with devstack using a global virtualenv we can work to clean up this change and 
make it mergeable.
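For anyone wanting to experiment with the system-site-packages question locally, the knob is essentially a one-liner. A sketch using the stdlib venv module (devstack itself drives the virtualenv tool, so this is only an approximation):

```python
# Sketch: create an env that can see distro-packaged modules (e.g. the
# libguestfs python bindings) while pip-installing everything else into
# the env. Approximation using stdlib venv; devstack uses the virtualenv
# tool instead.
import os
import tempfile
import venv

env_dir = os.path.join(tempfile.mkdtemp(), "global-venv")
builder = venv.EnvBuilder(system_site_packages=True, with_pip=False)
builder.create(env_dir)
# The setting is recorded in pyvenv.cfg as
# "include-system-site-packages = true".
```

The tension Clark describes is visible right there: flipping that one flag re-exposes the distutils-installed system packages that pip10 refuses to uninstall.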

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] [tripleo] On moving start scripts out of Kolla images

2018-04-05 Thread Martin André
On Thu, Apr 5, 2018 at 2:16 PM, Paul Bourke  wrote:
> Hi all,
>
> This mail is to serve as a follow on to the discussion during yesterday's
> team meeting[4], which was regarding the desire to move start scripts out of
> the kolla images [0]. There's a few factors at play, and it may well be best
> left to discuss in person at the summit in May, but hopefully we can get at
> least some of this hashed out before then.
>
> I'll start by summarising why I think this is a good idea, and then attempt
> to address some of the concerns that have come up since.
>
> First off, to be frank, this effort is driven by wanting to add support
> for loci images[1] in kolla-ansible. I think it would be unreasonable for
> anyone to argue this is a bad objective to have, loci images have very
> obvious benefits over what we have in Kolla today. I'm not looking to drop
> support for Kolla images at all, I simply want to continue decoupling things
> to the point where operators can pick and choose what works best for them.
> Stemming from this, I think moving these scripts out of the images provides
> a clear benefit to our consumers, both users of kolla and third parties such
> as triple-o. Let me explain why.

It's still very obscure to me how removing the scripts from kolla
images will benefit consumers. If the reason is that you want to
re-use them in other, non-kolla images, I believe we should package
the scripts. I've left some comments in your spec review.

> Normally, to run a docker image, a user will do 'docker run
> helloworld:latest'. In any non trivial application, config needs to be
> provided. In the vast majority of cases this is either provided via a bind
> mount (docker run -v hello.conf:/etc/hello.conf helloworld:latest), or via
> environment variables (docker run --env HELLO=paul helloworld:latest). This
> is all bog standard stuff, something anyone who's spent an hour learning
> docker can understand.
>
> Now, lets say someone wants to try out OpenStack with Docker, and they look
> at Kolla. First off they have to look at something called set_configs.py[2]
> - over 400 lines of Python. Next they need to understand what that script
> consumes, config.json [3]. The only reference for config.json is the files
> that live in kolla-ansible, a mass of jinja and assumptions about how the
> service will be run. Next, they need to figure out how to bind mount the
> config files and config.json into the container in a way that can be
> consumed by set_configs.py (which by the way, requires the base kolla image
> in all cases). This is only for the config. For the service start up
> command, this also needs to be provided in config.json. This command is then
> parsed out and written to a location in the image, which is consumed by a
> series of start/extend start shell scripts. Kolla is *unique* in this
> regard, no other project in the container world is interfacing with images
> in this way. Being a snowflake in this regard is not a good thing. I'm still
> waiting to hear from a real world operator who would prefer to spend time
> learning the above to doing:

You're pointing out a very real documentation issue. I've mentioned in the
other kolla thread that I have a stub for the kolla API documentation.
I'll push a patch for what I have and we can iterate on that.

>   docker run -v /etc/keystone:/etc/keystone keystone:latest --entrypoint
> /usr/bin/keystone [args]
>
> This is the Docker API, it's easy to understand and pretty much the standard
> at this point.

Sure, using the docker API works for simpler cases, but not too
surprisingly, once you start doing funkier things with your
containers you quickly reach the docker API's limitations. That's
when the kolla API comes in handy.
See for example this recent patch
https://review.openstack.org/#/c/556673/ where we needed to change
some file permission to the uid/gid of the user inside the container.

The first iteration basically used the docker API and started an
additional container to fix the permissions:

  docker run -v
/etc/pki/tls/certs/neutron.crt:/etc/pki/tls/certs/neutron.crt:rw \
-v /etc/pki/tls/private/neutron.key:/etc/pki/tls/private/neutron.key:rw
\
neutron_image \
/bin/bash -c 'chown neutron:neutron
/etc/pki/tls/certs/neutron.crt; chown neutron:neutron
/etc/pki/tls/private/neutron.key'

You'll agree this is not the most obvious approach. And it had a nasty
side effect: it changes the permissions of the files _on the host_.
While using kolla API we could simply add to our config.json:

  - path: /etc/pki/tls/certs/neutron.crt
owner: neutron:neutron
  - path: /etc/pki/tls/private/neutron.key
owner: neutron:neutron
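For illustration, the idea behind that config.json ownership handling reduces to a few lines. This is a toy reimplementation, not kolla's code; the real logic is in the 400-line set_configs.py referenced earlier in the thread:

```python
# Toy reimplementation of the ownership step of kolla's config.json
# handling -- illustration only; see set_configs.py for the real thing.
# The entry field names mirror the config.json snippet above.
import grp
import os
import pwd

def apply_ownership(entries, chown=os.chown):
    """entries is a list like [{'path': ..., 'owner': 'user:group'}].

    chown is injectable here so the logic can be exercised without root;
    the real script just calls os.chown directly.
    """
    for entry in entries:
        user, group = entry["owner"].split(":")
        uid = pwd.getpwnam(user).pw_uid
        gid = grp.getgrnam(group).gr_gid
        chown(entry["path"], uid, gid)
```

As I understand Martin's point, the key difference from the docker-run workaround is that kolla's script copies config files into place inside the container before adjusting them, so the originals on the host keep their ownership.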

> The other argument is that this removes the possibility for immutable
> infrastructure. The concern is, with the new approach, a rookie operator
> will modify one of the start scripts - resulting in uncertainty that what
> was first deployed matches what is currently running. But with the 

Re: [openstack-dev] [all][requirements] uncapping eventlet

2018-04-05 Thread Matthew Thode
On 18-04-05 20:11:04, Graham Hayes wrote:
> On 05/04/18 16:47, Matthew Thode wrote:
> > eventlet-0.22.1 has been out for a while now, we should try and use it.
> > Going to be fun times.
> > 
> > I have a review projects can depend upon if they wish to test.
> > https://review.openstack.org/533021
> 
> It looks like we may have an issue with oslo.service -
> https://review.openstack.org/#/c/559144/ is failing gates.
> 
> Also - what is the dance for this to get merged? It doesn't look like we
> can merge this while oslo.service has the old requirement restrictions.
> 

The dance is as follows.

0. provide review for projects to test new eventlet version
   projects using eventlet should make backwards compat code changes at
   this time.
1. uncap requirements for eventlet (do not raise upper constraint)
   step 0 does not have to be done for this to occur, but it'd be nice.
2. make sure all projects in projects.txt uncap eventlet (this is harder
   now that we have per-project requirements)
3. raise the constraint for eventlet, optionally also raise the global
   requirement for it as well
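Step 1 ("uncap, but don't raise the upper constraint") is easy to misread, so here is a sketch of the distinction in code. This uses simplified specifier parsing with a regex; real requirements tooling uses the packaging library:

```python
# Simplified check for whether a requirements line is "uncapped": it may
# carry a floor (>=) and exclusions (!=), but no '<'/'<=' ceiling or
# '==' pin -- the ceiling lives only in upper-constraints.txt. This
# regex sketch only handles simple comma-separated specifiers.
import re

def is_uncapped(line):
    line = line.split(";")[0].split("#")[0].strip()  # drop markers/comments
    match = re.match(r"^[A-Za-z0-9._-]+(\[[^\]]*\])?(.*)$", line)
    specs = [s.strip() for s in match.group(2).split(",") if s.strip()]
    return not any(s.startswith("<") or s.startswith("==") for s in specs)
```

Under this reading, "eventlet!=0.18.3,>=0.18.2" is uncapped (the ceiling moves to the constraints file until step 3 raises it), while "eventlet>=0.18.2,<0.21" is not.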

-- 
Matthew Thode (prometheanfire)


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][requirements] uncapping eventlet

2018-04-05 Thread Graham Hayes
On 05/04/18 16:47, Matthew Thode wrote:
> eventlet-0.22.1 has been out for a while now, we should try and use it.
> Going to be fun times.
> 
> I have a review projects can depend upon if they wish to test.
> https://review.openstack.org/533021

It looks like we may have an issue with oslo.service -
https://review.openstack.org/#/c/559144/ is failing gates.

Also - what is the dance for this to get merged? It doesn't look like we
can merge this while oslo.service has the old requirement restrictions.

> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ci] use of tags in launchpad bugs

2018-04-05 Thread Alex Schultz
On Thu, Apr 5, 2018 at 12:55 PM, Wesley Hayutin  wrote:
> FYI...
>
> This is news to me so thanks to Emilien for pointing it out [1].
> There are official tags for tripleo launchpad bugs.  Personally, I like what
> I've seen recently with some extra tags as they could be helpful in finding
> the history of particular issues.
> So hypothetically would it be "wrong" to create an official tag for each
> featureset config number upstream.  I ask because that is adding a lot of
> tags but also serves as a good test case for what is good/bad use of tags.
>

We list the official tags over in the specs repo[0]. That being said, as
we investigate switching over to storyboard, we'll probably want to
revisit tags, as they will have to be used more to replace some of the
functionality we had with launchpad (e.g. milestones). You could
always add a tag without it being an official one. I'm not sure I
would really want all the featuresets as tags.  I'd rather see us
actually figure out what component is actually failing than relying on
a featureset (and the Rosetta stone for decoding featuresets to
functionality[1]).


Thanks,
-Alex


[0] 
http://git.openstack.org/cgit/openstack/tripleo-specs/tree/specs/policy/bug-tagging.rst#n30
[1] 
https://git.openstack.org/cgit/openstack/tripleo-quickstart/tree/doc/source/feature-configuration.rst#n21
> Thanks
>
> [1] https://bugs.launchpad.net/tripleo/+manage-official-tags
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][ci] use of tags in launchpad bugs

2018-04-05 Thread Wesley Hayutin
FYI...

This is news to me so thanks to Emilien for pointing it out [1].
There are official tags for tripleo launchpad bugs.  Personally, I like
what I've seen recently with some extra tags as they could be helpful in
finding the history of particular issues.
So hypothetically would it be "wrong" to create an official tag for each
featureset config number upstream.  I ask because that is adding a lot of
tags but also serves as a good test case for what is good/bad use of tags.

Thanks

[1] https://bugs.launchpad.net/tripleo/+manage-official-tags
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-sig/news

2018-04-05 Thread Michael McCune
Greetings OpenStack community,

Today's meeting was quite lively with a good discussion about versions
and microversions across OpenStack and their usage within the API
schema world. We began with a review of outstanding work: elmiko is
continuing to work on an update to the microversion history doc[7],
and edleafe has reached out[8] to the SDK community to gauge interest
in a session for the upcoming Vancouver forum. dtanstur has also
made an update[9] to the HTTP guideline layout that is currently under
review. The change was already largely approved; this just improves
the appearance of the refactored guidelines.

The API-SIG has not confirmed any sessions for the Vancouver forum
yet, but we continue to reach out[8] and would ideally like to host a
session including the API, SDK and user community groups. The topics
and schedule for this session will be highly influenced by input from
the larger OpenStack community. If you are interested in seeing this
event happen, please add your thoughts to the mailing sent out by
edleafe[8].

The next chunk of the meeting was spent discussing the OpenAPI
proposal[10] that elmiko has created. The discussion went well and
several new ideas surfaced. Additionally, a deep dive into
version/microversion usage across the OpenStack ecosystem took place,
with several points being raised about how microversions are used and
how they are perceived by end users. There is no firm output from this
discussion yet, but elmiko is going to contact interested parties and
continue to update the proposal.
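For readers outside the SIG, "microversion usage" refers to opt-in, per-request version negotiation via a single header. A minimal illustration (the header format follows the published guidelines; the helper name is invented for this sketch):

```python
# Minimal illustration of microversion negotiation as described in the
# API-SIG guidelines: a client opts into a specific API behavior per
# request with one header, e.g. "OpenStack-API-Version: compute 2.53".
# The helper name is made up for this example.
def microversion_headers(service_type, version):
    """Build the request header for the given service microversion."""
    return {"OpenStack-API-Version": "%s %s" % (service_type, version)}
```

Part of the schema debate recorded above is exactly how (or whether) systems like OpenAPI can model the fact that a single URL's request and response shapes vary with this header.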

mordred informed the SIG that he has started working on
discover/version things in keystoneauth and should be returning to the
related specs within the next few days. and there was much rejoicing.
\o/

On the topic of reviews, the SIG has identified one[11] that is ready
for freeze this week.

Lastly, the SIG reviewed a newly opened bug[12] asking to add a
"severity" field to the error structure. After a short discussion, the
group agreed that this was not something that should be accepted and
have marked it as "won't fix". For more details please see the
comments on the bug review.

As always if you're interested in helping out, in addition to coming
to the meetings, there's also:

* The list of bugs [5] indicates several missing or incomplete guidelines.
* The existing guidelines [2] always need refreshing to account for
changes over time. If you find something that's not quite right,
submit a patch [6] to fix it.
* Have you done something for which you think guidance would have made
things easier but couldn't find any? Submit a patch and help others
[6].

# Newly Published Guidelines

* Add guideline on exposing microversions in SDKs
  https://review.openstack.org/#/c/532814

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

* Update the errors guidance to use service-type for code
  https://review.openstack.org/#/c/554921/

# Guidelines Currently Under Review [3]

* Break up the HTTP guideline into smaller documents
  https://review.openstack.org/#/c/554234/

* Add guidance on needing cache-control headers
  https://review.openstack.org/550468

* Update parameter names in microversion sdk spec
  https://review.openstack.org/#/c/557773/

* Add API-schema guide (still being defined)
  https://review.openstack.org/#/c/524467/

* A (shrinking) suite of several documents about doing version and
service discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet
ready for review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs
that you are developing or changing, please address your concerns in
an email to the OpenStack developer mailing list[1] with the tag
"[api]" in the subject. In your email, you should include any relevant
reviews, links, and comments to help guide the discussion of the
specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our
wiki page [4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] https://bugs.launchpad.net/openstack-api-wg
[6] https://git.openstack.org/cgit/openstack/api-wg
[7] https://review.openstack.org/444892
[8] http://lists.openstack.org/pipermail/openstack-sigs/2018-March/000353.html
[9] https://review.openstack.org/#/c/554234/
[10] https://gist.github.com/elmiko/7d97fef591887aa0c594c3dafad83442
[11] https://review.openstack.org/#/c/554921/
[12] https://bugs.launchpad.net/openstack-api-wg/+bug/1761475

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records

[openstack-dev] [ironic] Bug Day April 12th poll (was originally April 6th)

2018-04-05 Thread Michael Turek

Hey everyone,

At this week's ironic IRC meeting we decided to push the bug day to 
April 12th. I updated the poll name to indicate this and it 
unfortunately wiped the results of the poll.


If you can recast your vote here it would be appreciated 
https://doodle.com/poll/xa999rx653pb58t6


It's looking like a 2 hour window would be the right length, but if you 
have any opinions on that please respond here.


Thanks!
Mike Turek 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [k8s] OpenStack and Containers White Paper

2018-04-05 Thread Mikhail Fedosin
Hello!
I'm working on Keystone authentication and authorization and other related
parts of the openstack cloud provider. I will be happy to help you!

Best,
Mike

On Tue, Apr 3, 2018 at 8:38 AM, Jaesuk Ahn  wrote:

> Hi Chris,
>
> I can probably help on proof-reading and making some contents on the
> openstack-helm part.
> As Pete pointed out, LOCI and OpenStack-Helm (OSH) are agnostic to each
> other. OSH is working well with both kolla image and loci image.
>
> IMHO, following categorization might be better to capture the nature of
> these project. Just suggestion.
>
> * OpenStack Containerization tools
>* Kolla
>* Loci
> * Container-based deployment tools for installing and managing OpenStack
>* Kolla-Ansible
>* OpenStack Helm
>
>
> On Tue, Apr 3, 2018 at 10:08 AM Pete Birley  wrote:
>
>> Chris,
>>
>> I'd be happy to help out where I can, mostly related to OSH and LOCI. One
>> thing we should make clear is that both of these projects are agnostic to
>> each other: we gate OSH with both LOCI and kolla images, and conversely
>> LOCI has uses far beyond just OSH.
>>
>> Pete
>>
>> On Monday, April 2, 2018, Chris Hoge  wrote:
>>
>>> Hi everyone,
>>>
>>> In advance of the Vancouver Summit, I'm leading an effort to publish a
>>> community produced white-paper on OpenStack and container integrations.
>>> This has come out of a need to develop materials, both short and long
>>> form, to help explain how OpenStack interacts with container
>>> technologies across the entire stack, from infrastructure to
>>> application. The rough outline of the white-paper proposes an entire
>>> technology stack and discuss deployment and usage strategies at every
>>> level. The white-paper will focus on existing technologies, and how they
>>> are being used in production today across our community. Beginning at
>>> the hardware layer, we have the following outline (which may be inverted
>>> for clarity):
>>>
>>> * OpenStack Ironic for managing bare metal deployments.
>>> * Container-based deployment tools for installing and managing OpenStack
>>>* Kolla containers and Kolla-Ansible
>>>* Loci containers and OpenStack Helm
>>> * OpenStack-hosted APIs for managing container application
>>>   infrastructure.
>>>* Magnum
>>>* Zun
>>> * Community-driven integration of Kubernetes and OpenStack with K8s
>>>   Cloud Provider OpenStack
>>> * Projects that can stand alone in integrations with Kubernetes and
>>>   other cloud technology
>>>* Cinder
>>>* Neutron with Kuryr and Calico integrations
>>>* Keystone authentication and authorization
>>>
>>> I'm looking for volunteers to help produce the content for these sections
>>> (and any others we may uncover to be useful) for presenting a complete
>>> picture of OpenStack and container integrations. If you're involved with
>>> one of these projects, or are using any of these tools in
>>> production, it would be fantastic to get your input in producing the
>>> appropriate section. We especially want real-world deployments to use as
>>> small case studies to inform the work.
>>>
>>> During the process of creating the white-paper, we will be working with a
>>> technical writer and the Foundation design team to produce a document
>>> that
>>> is consistent in voice, has accurate and informative graphics that
>>> can be used to illustrate the major points and themes of the white-paper,
>>> and that can be used as stand-alone media for conferences and
>>> presentations.
>>>
>>> Over the next week, I'll be reaching out to individuals and inviting them
>>> to collaborate. This is also a general invitation to collaborate, and if
>>> you'd like to help out with a section please reach out to me here, on the
>>> K8s #sig-openstack Slack channel, or at my work e-mail,
>>> ch...@openstack.org.
>>> Starting next week, we'll work out a schedule for producing and
>>> delivering
>>> the white-paper by the Vancouver Summit. We are very short on time, so
>>> we will have to be focused to quickly produce high-quality content.
>>>
>>> Thanks in advance to everyone who participates in writing this
>>> document. I'm looking forward to working with you in the coming weeks to
>>> publish this important resource for clearly describing the multitude of
>>> interactions between these complementary technologies.
>>>
>>> -Chris Hoge
>>> K8s-SIG-OpenStack/OpenStack-SIG-K8s Co-Lead
>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
>>> unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>> --
>>
>>
>> Pete Birley / Director
>> pete@port.direct / +447446862551 <+44%207446%20862551>
>>
>> *PORT.*DIRECT
>> United Kingdom
>> https://port.direct
>>
>> This e-mail message may 

Re: [openstack-dev] Fwd: [tripleo] PTG session about All-In-One installer: recap & roadmap

2018-04-05 Thread Dan Prince
On Thu, Apr 5, 2018 at 12:24 PM, Emilien Macchi  wrote:
> On Thu, Apr 5, 2018 at 4:37 AM, Dan Prince  wrote:
>
>> Much of the work on this is already there. We've been using this stuff
>> for over a year to dev/test the containerization efforts for a long
>> time now (and thanks for your help with this effort). The problem I
>> think is how it is all packaged. While you can use it today it
>> involves some tricks (docker in docker), or requires you to use an
>> extra VM to minimize the installation footprint on your laptop.
>>
>> Much of the remaining work here is really just about packaging and
>> technical debt. If we put tripleoclient and heat-monolith into a
>> container that solves much of the requirements problems and
>> essentially gives you a container which can transform Heat templates
>> to Ansible. From the ansible side we need to do a bit more work to
>> minimize our dependencies (i.e. heat hooks). Using a virtual-env would
>> be one option for developers if we could make that work. A lighter set
>> of RPM packages would be another way to do it. Perhaps both...
>> Then a smaller wrapper around these things (which I personally would
>> like to name) to make it all really tight.
>
>
> So if I summarize the discussion:
>
> - A lot of positive feedback about the idea and many use cases, which is
> great.
>
> - Support for non-containerized services is not required, as long as we
> provide a way to update containers with under-review patches for developers.

I think we still desire some (basic, no-upgrades) support for
non-containerized baremetal at this time.

>
> - We'll probably want to breakdown the "openstack undercloud deploy" process
> into pieces
> * start an ephemeral Heat container

It already supports this if you don't use the --heat-native option,
also you can customize the container used for heat via
--heat-container-image. So we already have this! But rather than do
this I personally prefer the container to have python-tripleoclient
and heat-monolith in it. That way everything is in there to
generate Ansible templates. If you just use Heat you are missing some
of the pieces that you'd still have to install elsewhere on your host.
Having them all be in one scoped container which generates Ansible
playbooks from Heat templates is better IMO.

> * create the Heat stack passing all requested -e's
> * run config-download and save the output
>
> And then remove undercloud specific logic, so we can provide a generic way
> to create the config-download playbooks.

Yes. Lets remove some of the undercloud logic. But do note that most
of the undercloud-specific logic is now in undercloud_config.py anyway
so this is mostly already on its way.

> This generic way would be consumed by the undercloud deploy commands but
> also by the new all-in-one wrapper.
>
> - Speaking of the wrapper, we will probably have a new one. Several names
> were proposed:
> * openstack tripleo deploy
> * openstack talon deploy
> * openstack elf deploy

The wrapper could be just another set of playbooks that we give a
name to... and perhaps put a CLI in front of it as well.

>
> - The wrapper would work with deployed-server, so we would noop Neutron
> networks and use fixed IPs.

This would be configurable I think, depending on which templates were
used. Noop is a reasonable default for developer deployments, but do
note that some services like Neutron aren't going to work unless you
have some basic network setup. Noop is useful if you prefer to do this
manually, but our os-net-config templates are quite useful to automate things.
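
For reference, "nooping" the networks is typically done with a Heat environment file along these lines (a sketch only; the exact resource registry keys depend on your tripleo-heat-templates release):

```yaml
# noop-networks.yaml (illustrative)
resource_registry:
  # Replace the Neutron network/port resources with no-ops so
  # deployed-server nodes can use fixed, pre-assigned IPs.
  OS::TripleO::Network: OS::Heat::None
  OS::TripleO::Controller::Ports::InternalApiPort: OS::Heat::None
```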

>
> - Investigate the packaging work: containerize tripleoclient and
> dependencies, see how we can containerize Ansible + dependencies (and
> eventually reduce them to a strict minimum).
>
> Let me know if I missed something important, hopefully we can move things
> forward during this cycle.
> --
> Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [tripleo] PTG session about All-In-One installer: recap & roadmap

2018-04-05 Thread Wesley Hayutin
On Thu, 5 Apr 2018 at 12:25 Emilien Macchi  wrote:

> On Thu, Apr 5, 2018 at 4:37 AM, Dan Prince  wrote:
>
> Much of the work on this is already there. We've been using this stuff
>> for over a year now to dev/test the containerization
>> efforts (and thanks for your help with this effort). The problem I
>> think is how it is all packaged. While you can use it today it
>> involves some tricks (docker in docker), or requires you to use an
>> extra VM to minimize the installation footprint on your laptop.
>>
>> Much of the remaining work here is really just about packaging and
>> technical debt. If we put tripleoclient and heat-monolith into a
>> container that solves much of the requirements problems and
>> essentially gives you a container which can transform Heat templates
>> to Ansible. From the ansible side we need to do a bit more work to
>> minimize our dependencies (i.e. heat hooks). Using a virtualenv would
>> be one option for developers if we could make that work. A lighter set
>> of RPM packages would be another way to do it. Perhaps both...
>> Then a smaller wrapper around these things (which I personally would
>> like to name) to make it all really tight.
>
>
> So if I summarize the discussion:
>
> - A lot of positive feedback about the idea and many use cases, which is
> great.
>
> - Support for non-containerized services is not required, as long as we
> provide a way to update containers with under-review patches for developers.
>

Hrm... I was just speaking to Alfredo about this. We may need a
better understanding of the various ecosystems where TripleO is in play
here to make a fully informed decision.
By ecosystem I'm referring to RDO, CentOS, upstream, and the containers
used in deployments. I suspect a non-containerized deployment may still be
required, but I'm looking for the packaging team to weigh in.


>
> - We'll probably want to break down the "openstack undercloud deploy"
> process into pieces
> * start an ephemeral Heat container
> * create the Heat stack passing all requested -e's
> * run config-download and save the output
>
> And then remove undercloud specific logic, so we can provide a generic
> way to create the config-download playbooks.
> This generic way would be consumed by the undercloud deploy commands but
> also by the new all-in-one wrapper.
>
> - Speaking of the wrapper, we will probably have a new one. Several names
> were proposed:
> * openstack tripleo deploy
> * openstack talon deploy
> * openstack elf deploy
>
> - The wrapper would work with deployed-server, so we would noop Neutron
> networks and use fixed IPs.
>
> - Investigate the packaging work: containerize tripleoclient and
> dependencies, see how we can containerize Ansible + dependencies (and
> eventually reduce them to a strict minimum).
>
> Let me know if I missed something important, hopefully we can move things
> forward during this cycle.
> --
> Emilien Macchi


[openstack-dev] [release] Release countdown for week R-20, April 9-13

2018-04-05 Thread Sean McGinnis
Welcome to our regular release countdown email.

Development Focus
-

Team focus should be on spec approval and implementation for priority features.

Please be aware of the project specific deadlines that vary slightly from the
overall release schedule [1].

Teams should now be making progress towards the cycle goals [2]. Please
prioritize reviews for these appropriately. 

[1] https://releases.openstack.org/rocky/schedule.html
[2] https://governance.openstack.org/tc/goals/rocky/index.html

General Information
---

We are already coming up on the first Rocky milestone on Thursday, April 19.

This is the last week for projects to switch release models if they are
considering it. Stop by the #openstack-release channel if you have any
questions about how this works.

Another quick reminder - if your project has a library that is still a 0.x
release, start thinking about when it will be appropriate to do a 1.0 version.
The version number does signal the state, real or perceived, of the library, so
we strongly encourage going to a full major version once things are in a good
and usable state.

Finally, we would love to have all the liaisons attend the release team meeting
every Friday [3]. Anyone is welcome.

[3] http://eavesdrop.openstack.org/#Release_Team_Meeting

Upcoming Deadlines & Dates
--

Rocky-1 milestone: April 19 (R-19 week)
Forum at OpenStack Summit in Vancouver: May 21-24

-- 
Sean McGinnis (smcginnis)



Re: [openstack-dev] Fwd: [tripleo] PTG session about All-In-One installer: recap & roadmap

2018-04-05 Thread Emilien Macchi
On Thu, Apr 5, 2018 at 4:37 AM, Dan Prince  wrote:

Much of the work on this is already there. We've been using this stuff
> for over a year now to dev/test the containerization
> efforts (and thanks for your help with this effort). The problem I
> think is how it is all packaged. While you can use it today it
> involves some tricks (docker in docker), or requires you to use an
> extra VM to minimize the installation footprint on your laptop.
>
> Much of the remaining work here is really just about packaging and
> technical debt. If we put tripleoclient and heat-monolith into a
> container that solves much of the requirements problems and
> essentially gives you a container which can transform Heat templates
> to Ansible. From the ansible side we need to do a bit more work to
> minimize our dependencies (i.e. heat hooks). Using a virtualenv would
> be one option for developers if we could make that work. A lighter set
> of RPM packages would be another way to do it. Perhaps both...
> Then a smaller wrapper around these things (which I personally would
> like to name) to make it all really tight.


So if I summarize the discussion:

- A lot of positive feedback about the idea and many use cases, which is
great.

- Support for non-containerized services is not required, as long as we
provide a way to update containers with under-review patches for developers.

- We'll probably want to break down the "openstack undercloud deploy"
process into pieces
* start an ephemeral Heat container
* create the Heat stack passing all requested -e's
* run config-download and save the output

And then remove undercloud specific logic, so we can provide a generic way
to create the config-download playbooks.
This generic way would be consumed by the undercloud deploy commands but
also by the new all-in-one wrapper.

- Speaking of the wrapper, we will probably have a new one. Several names
were proposed:
* openstack tripleo deploy
* openstack talon deploy
* openstack elf deploy

- The wrapper would work with deployed-server, so we would noop Neutron
networks and use fixed IPs.

- Investigate the packaging work: containerize tripleoclient and
dependencies, see how we can containerize Ansible + dependencies (and
eventually reduce them to a strict minimum).

Let me know if I missed something important, hopefully we can move things
forward during this cycle.
-- 
Emilien Macchi


[openstack-dev] [all][requirements] uncapping eventlet

2018-04-05 Thread Matthew Thode
eventlet 0.22.1 has been out for a while now; we should try to use it.
Going to be fun times.

I have a review that projects can depend on if they wish to test:
https://review.openstack.org/533021
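
For a project that wants to try it, the change is essentially a requirements bump; something along these lines (the version specifiers here are illustrative only, check global-requirements for the real ones):

```
# requirements.txt (sketch)
-eventlet!=0.18.3,<0.21.0   # old upper cap
+eventlet!=0.18.3,>=0.22.1  # uncapped, allowing 0.22.1
```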
-- 
Matthew Thode (prometheanfire)




Re: [openstack-dev] [kolla][tc][openstack-helm][tripleo]propose retire kolla-kubernetes project

2018-04-05 Thread Brandon Jozsa



On April 4, 2018 at 4:21:58 PM, Michał Jastrzębski 
(inc...@gmail.com) wrote:

On 4 April 2018 at 14:45, Brandon Jozsa  wrote:
> I’ve been a part of the OpenStack-Helm project from the very beginning, and
> there was a lot of early brainstorming on how we could collaborate and
> contribute directly to Kolla-Kubernetes. In fact, this was the original
> intent when we met with Kolla back in Barcelona. We didn’t like the idea of
> fragmenting interested Kubernetes developers/operators in the
> OpenStack-via-Kubernetes space. Whatever the project, we wanted all the
> domain expertise concentrated on a single deployment effort. Even though
> OSH/K-k8s couldn’t reach an agreement on how to handle configmaps (our
> biggest difference from the start), there was a lot of early collaboration
> between the project cores. Early K-k8s contributors may remember Halcyon,
> which cores from both sides promoted for early development of
> OpenStack-via-Kubernetes, regardless of the project.
>
> One of the requests from the initial OSH team (in Barcelona) was to formally
> separate Kolla from Kolla-foo deployment projects, both at a project level
> and from a core perspective. Why have the same cores giving +2’s to Kolla,
> Kolla-Ansible, Kolla-Mesos (now dead) and Kolla-Kubernetes, who may not have
> any interest in another given discipline? We wanted reviews to be timely,
> and laser-focused, and we felt that this more atomic approach would benefit
> Kolla in the end. But unfortunately there was heavy resistance with limited
> yet very influential cores. I honestly think pushback was also because it
> would mean that any Kolla sub-projects would be subject to re-acceptance as
> big tent projects.

"Limited but very influential cores" sounds like a bad community, and as
it happens I was leading this community at that time, so I feel I
should comment. We would love to increase the number of cores (raise the
limit) on images, but that comes with a cost: a person who would like
to become a core would need to contribute to the project in question
and review other people's contributions. The proper way to address
this problem would be just that, contributing to Kolla and reviewing
code. If I failed to notice contributions from someone who did that a
lot (I hope I didn't), I'm sorry. This is the best and only way to
solve the problem in question.


I think you did the best you could, Michal. As I understand it there are
essentially three active projects under Kolla today: Kolla, Kolla-Ansible,
and Kolla-Kubernetes (plus others that have been abandoned or died), and only
Kolla shows up under the project navigator. I assume this means they are all
still under one project umbrella? I think this is a bit of a stretch for a
single-project core team, especially since there are fundamental differences
between Ansible and Kubernetes.

So my comment was far less about you or anyone personally as a PTL or core,
and really more about the "laser-focused" discipline of the group as a whole.
Kolla is the only project I am aware of that has this catch-all mission,
allowing it to be any type of deployment that consumes Kolla as a base, and it
leverages the same resources. In fact, other container-based OpenStack
projects have been met with a bit of strange resistance. See:
https://review.openstack.org/#/c/449224/


>
> There were also countless discussions about the preservation of the Kolla
> API, or Ansible + Jinja portions of Kolla-Ansible. It became clear to us
> that Kubernetes wasn’t going to be the first class citizen for the
> deployment model in Kolla-Kubernetes, forcing operators to troubleshoot
> between OpenStack, Kolla (container builds), Ansible, Kubernetes, and Helm.
> This is apparent still today. And while I understand the hesitation to
> change Kolla/Kolla-Ansible, I think this code-debt has somewhat contributed
> to sustainability of Kolla-Kubernetes. Somewhat to the point of tension, I
> very much agree with Thierry’s comments earlier.

How wasn't k8s a first-class citizen? I don't understand. All processes
were the same, and time at the PTG was generous compared to Ansible etc.
More people use Ansible due to its maturity, so it's obviously going to
have better testing etc., but again, that's solved by contributions.

I’m not talking about at a project level. I think Kolla has done a wonderful 
job in that respect. All of my comments above are about the mixed-use of 
technologies leveraged to drive the project. Compare how Kubernetes configmaps 
are generated for each of the projects. Then ask yourself what drove that 
design? Was it simplicity or adherence to previous debt/models?

Technical details like these need to be called out (good/bad/indifferent), and 
planned for accordingly in the event that the projects do merge at some point. 
I think this is the most confusing part for new users, because both communities 
have been asked countless times “what are the differences between x and 

Re: [openstack-dev] [TripleO][ci][ceph] switching to config-download by default

2018-04-05 Thread James Slagle
On Thu, Apr 5, 2018 at 10:38 AM, James Slagle  wrote:
> I've pushed up for review a set of patches to switch us over to using
> config-download by default:
>
> https://review.openstack.org/#/q/topic:bp/config-download-default
>
> I believe I've come up with the proper series of steps to switch
> things over. Let me know if you have any feedback or foresee any
> issues:
>
> First, we update the remaining multinode jobs
> (https://review.openstack.org/558965) and ovb jobs
> (https://review.openstack.org/559067) that run against master to
> opt-in to config-download. This will expose any issues with these jobs
> and config-download and let us fix those issues.
>
> We can then switch tripleoclient (https://review.openstack.org/558925)
> over to use config-download by default. Since this also requires a
> Heat environment, we must forcibly inject that environment via
> tripleoclient.
>
> Once the tripleoclient patch lands, we can update
> tripleo-heat-templates to use the mappings from config-download in the
> default resource registry (https://review.openstack.org/558927).

I forgot to mention that at this point the UI would have to be working
with config-download before we land that tripleo-heat-templates patch.
Or, the UI could opt in to the
disable-config-download-environment.yaml that I'm providing with that
patch.


-- 
-- James Slagle
--



[openstack-dev] [TripleO][ci][ceph] switching to config-download by default

2018-04-05 Thread James Slagle
I've pushed up for review a set of patches to switch us over to using
config-download by default:

https://review.openstack.org/#/q/topic:bp/config-download-default

I believe I've come up with the proper series of steps to switch
things over. Let me know if you have any feedback or foresee any
issues:

First, we update the remaining multinode jobs
(https://review.openstack.org/558965) and ovb jobs
(https://review.openstack.org/559067) that run against master to
opt-in to config-download. This will expose any issues with these jobs
and config-download and let us fix those issues.

We can then switch tripleoclient (https://review.openstack.org/558925)
over to use config-download by default. Since this also requires a
Heat environment, we must forcibly inject that environment via
tripleoclient.

Once the tripleoclient patch lands, we can update
tripleo-heat-templates to use the mappings from config-download in the
default resource registry (https://review.openstack.org/558927).

We can then remove the forcibly injected environment from
tripleoclient (https://review.openstack.org/558931)

Finally, we can go back and update the multinode/ovb jobs on master to
not be opt-in for config-download since it would now be the default
(no patch yet).

Now...for Ceph it will be slightly different:

We have a patch that migrates from workflow_tasks to
external_deploy_tasks (https://review.openstack.org/#/c/546966/) and
that depends on a quickstart patch to update the Ceph scenarios to use
config-download (https://review.openstack.org/#/c/548306/). These
patches are co-dependencies and present a problem in that they both
must land at the same time.

To work around that, I think we need to update the
tripleo-heat-templates patch to include both the existing
workflow_tasks *and* the new external_deploy_tasks. Once we've proven
the external_deploy_tasks work, we remove the depends-on and land the
tripleo-heat-templates patch. It will pass the existing Ceph scenario
jobs because they will be using workflow_tasks.

We then land the quickstart patch to switch those scenario jobs to use
external_deploy_tasks. Then we can circle back and remove
workflow_tasks from the ceph templates in tripleo-heat-templates.

I think this will allow everything to land and keep CI green along the
way. Please let me know any feedback as we plan to try and push on
this work over the next couple of weeks.

-- 
-- James Slagle
--



Re: [openstack-dev] Asking for ask.openstack.org

2018-04-05 Thread Zane Bitter

On 05/04/18 00:12, Ian Wienand wrote:

On 04/05/2018 10:23 AM, Zane Bitter wrote:

On 04/04/18 17:26, Jimmy McArthur wrote:
Here's the thing: email alerts. They're broken.


This is the type of thing we can fix if we know about it ... I will
contact you off-list because the last email to what I presume is you
went to an address that isn't what you've sent from here, but it was
accepted by the remote end.


Yeah, my mails get proxied through a fedora project address. I am 
getting them now though (since the SW update in January 2017 - and even 
before that I did get notifications if somebody @'d me). The issue is 
the content is not filtered by subscribed tags according to the 
preferences I have set, so they're useless for keeping up with my areas 
of interest.


It's not just a mail delivery problem, and I guarantee it's not just me. 
It's a bug somewhere in StackExchange itself.


cheers,
Zane.



Re: [openstack-dev] Asking for ask.openstack.org

2018-04-05 Thread Jimmy McArthur

Ian, thanks for digging in and helping sort out some of these issues!


Ian Wienand 
April 4, 2018 at 11:04 PM

We've long had problems with this host and I've looked at it before
[1]. It often drops out.

It seems there's enough interest we should dive a bit deeper. Here's
what I've found out:

askbot
--

The askbot site itself seems under control, except for an unbounded
session log file. Proposed fix: [2]

root@ask:/srv# du -hs *
2.0G askbot-site
579M dist

overall
---

The major consumer is /var; where we've got

3.9G log
5.9G backups
9.4G lib

backups
---

The backup seem under control at least; we're rotating them out and we
keep 10, and the size is pretty consistently 500mb:

root@ask:/var/backups/pgsql_backups# ls -lh
total 5.9G
-rw-r--r-- 1 root root 599M Apr 5 00:03 askbotdb.sql.gz
-rw-r--r-- 1 root root 598M Apr 4 00:03 askbotdb.sql.gz.1
...

We could reduce the backup rotations to just one if we like -- the
server is backed up nightly via bup, so at any point we can get
previous dumps from there. bup should de-duplicate everything, but
still, it's probably not necessary.

The db directory was sitting at ~9gb

root@ask:/var/lib/postgresql# du -hs
8.9G .

AFAICT, it seems like the autovacuum is running OK on the busy tables

askbotdb=# select relname, last_vacuum, last_autovacuum, last_analyze,
last_autoanalyze from pg_stat_user_tables where last_autovacuum is not
NULL;

     relname      | last_vacuum |        last_autovacuum        |         last_analyze          |       last_autoanalyze
------------------+-------------+-------------------------------+-------------------------------+-------------------------------
 django_session   |             | 2018-04-02 17:29:48.329915+00 | 2018-04-05 02:18:39.300126+00 | 2018-04-05 00:11:23.456602+00
 askbot_badgedata |             | 2018-04-04 07:19:21.357461+00 |                               | 2018-04-04 07:18:16.201376+00
 askbot_thread    |             | 2018-04-04 16:24:45.124492+00 |                               | 2018-04-04 20:32:25.845164+00
 auth_message     |             | 2018-04-04 12:29:24.273651+00 | 2018-04-05 02:18:07.633781+00 | 2018-04-04 21:26:38.178586+00
 djkombu_message  |             | 2018-04-05 02:11:50.186631+00 |                               | 2018-04-05 02:14:45.22926+00


Out of interest I did run a manual

su - postgres -c "vacuumdb --all --full --analyze"

We dropped something:

root@ask:/var/lib/postgresql# du -hs
8.9G .   (before)
5.8G .   (after)

I installed pg_activity and watched for a while; nothing seemed to be
really stressing it.

Ergo, I'm not sure if there's much to do in the db layers.

logs


This leaves the logs

1.1G jetty
2.9G apache2

The jetty logs are cleaned regularly. I think they could be made more
quiet, but they seem to be bounded.

Apache logs are rotated but never cleaned up. Surely logs from 2015
aren't useful. Proposed [3]
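
(For reference, expiring old logs like this is a one-line policy change; a sketch of what such a cleanup amounts to, using standard logrotate directives, the exact file and numbers being up to the proposed change:)

```
/var/log/apache2/*.log {
    weekly
    rotate 12      # keep ~12 weeks of rotated logs
    maxage 365     # delete anything older than a year
    compress
    missingok
    notifempty
}
```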

Random offline
--

[3] is an example of a user reporting the site was offline. Looking
at the logs, it seems that puppet found httpd not running at 07:14 and
restarted it:

Apr 4 07:14:40 ask puppet-user[20737]: 
(Scope(Class[Postgresql::Server])) Passing "version" to 
postgresql::server is deprecated; please use postgresql::globals instead.
Apr 4 07:14:42 ask puppet-user[20737]: Compiled catalog for 
ask.openstack.org in environment production in 4.59 seconds

Apr 4 07:14:44 ask crontab[20987]: (root) LIST (root)
Apr 4 07:14:49 ask puppet-user[20737]: 
(/Stage[main]/Httpd/Service[httpd]/ensure) ensure changed 'stopped' to 
'running'
Apr 4 07:14:54 ask puppet-user[20737]: Finished catalog run in 10.43 
seconds


Which first explains why when I looked, it seemed OK. Checking the
apache logs we have:

[Wed Apr 04 07:01:08.144746 2018] [:error] [pid 12491:tid 
140439253419776] [remote 176.233.126.142:43414] mod_wsgi (pid=12491): 
Exception occurred processing WSGI script 
'/srv/askbot-site/config/django.wsgi'.
[Wed Apr 04 07:01:08.144870 2018] [:error] [pid 12491:tid 
140439253419776] [remote 176.233.126.142:43414] IOError: failed to 
write data

... more until ...
[Wed Apr 04 07:15:58.270180 2018] [:error] [pid 17060:tid 
140439253419776] [remote 176.233.126.142:43414] mod_wsgi (pid=17060): 
Exception occurred processing WSGI script 
'/srv/askbot-site/config/django.wsgi'.
[Wed Apr 04 07:15:58.270303 2018] [:error] [pid 17060:tid 
140439253419776] [remote 176.233.126.142:43414] IOError: failed to 
write data


and the restart logged

[Wed Apr 04 07:14:48.912626 2018] [core:warn] [pid 21247:tid 
140439370192768] AH00098: pid file /var/run/apache2/apache2.pid 
overwritten -- Unclean shutdown of previous Apache run?
[Wed Apr 04 07:14:48.913548 2018] [mpm_event:notice] [pid 21247:tid 
140439370192768] AH00489: Apache/2.4.7 (Ubuntu) OpenSSL/1.0.1f 
mod_wsgi/3.4 Python/2.7.6 configured -- resuming normal operations
[Wed Apr 04 07:14:48.913583 2018] [core:notice] [pid 21247:tid 
140439370192768] AH00094: Command line: '/usr/sbin/apache2'
[Wed Apr 04 14:59:55.408060 2018] [mpm_event:error] [pid 21247:tid 
140439370192768] AH00485: scoreboard is full, not at MaxRequestWorkers


This does not appear to be disk-space related; see the cacti graphs
for 

Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-04-05 Thread Matt Riedemann

On 4/5/2018 3:15 AM, Gorka Eguileor wrote:

But just to be clear, Nova will have to initialize the connection with
the re-imaged volume and attach it again to the node, since in all cases
(except when defaulting to downloading the image and dd-ing it to the
volume) the result will be a new volume in the backend.


Yeah, I think I pointed this out earlier in this thread when I described
what the steps would be on the nova side: create a new empty
attachment to keep the volume 'reserved' while we delete the old
attachment, re-image the volume, and then update the volume attachment
for the new connection. I think that would be similar to how shelve and
unshelve work in nova.


Would this really require a swap volume call from Cinder? I'd hope not 
since swap volume in itself is a pretty gross operation on the nova side.


--

Thanks,

Matt



Re: [openstack-dev] [Ironic][Tripleo] ipmitool issues HP machines

2018-04-05 Thread Dan Prince
Sigh.

And the answer is: user error. Adminstrator != Administrator.

Well this was fun. Sorry for the bother. All is well. :)

Dan

On Thu, Apr 5, 2018 at 8:13 AM, Dan Prince  wrote:
> On Wed, Apr 4, 2018 at 1:27 PM, Jim Rollenhagen  
> wrote:
>> On Wed, Apr 4, 2018 at 1:18 PM, Jim Rollenhagen 
>> wrote:
>>>
>>> On Wed, Apr 4, 2018 at 8:39 AM, Dan Prince  wrote:

 Kind of a support question but figured I'd ask here in case there are
 suggestions for workarounds for specific machines.

 Setting up a new rack of mixed machines this week and hit this issue
 with HP machines using the ipmi power driver for Ironic. Curious if
 anyone else has seen this before? The same commands work great with my
 Dell boxes!

 -

 [root@localhost ~]# cat x.sh
 set -x
 # this is how Ironic sends its IPMI commands; it fails
 echo -n password > /tmp/tmprmdOOv
 ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv
 power status

 # this works great
 ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power
 status

 [root@localhost ~]# bash x.sh
 + echo -n password
 + ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv
 power status
 Error: Unable to establish IPMI v2 / RMCP+ session
 + ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power
 status
 Chassis Power is on
>>>
>>>
>>> Very strange. A tcpdump of both would probably be enlightening. :)
>>>
>>> Also curious what version of ipmitool this is, maybe you're hitting an old
>>> bug.
>>
>>
>> https://sourceforge.net/p/ipmitool/bugs/90/ would seem like a prime suspect
>> here.
>
> Thanks for the suggestion Jim! So I tried a few very short passwords
> and no dice so far. Looking into the tcpdump info a bit now.
>
> I'm in a bit of a rush so I may hack in a quick patch Ironic to make
> ipmitool to use the -P option to proceed and loop back to fix this a
> bit later.
>
> Dan
>
>>
>> // jim
>>



[openstack-dev] [kolla] [tripleo] On moving start scripts out of Kolla images

2018-04-05 Thread Paul Bourke

Hi all,

This mail is to serve as a follow-on to the discussion during 
yesterday's team meeting[4], which was regarding the desire to move 
start scripts out of the kolla images [0]. There are a few factors at 
play, and it may well be best left to discuss in person at the summit in 
May, but hopefully we can get at least some of this hashed out before then.


I'll start by summarising why I think this is a good idea, and then 
attempt to address some of the concerns that have come up since.


First off, to be frank, this effort is driven by wanting to add 
support for loci images[1] in kolla-ansible. I think it would be 
unreasonable for anyone to argue this is a bad objective to have; loci 
images have very obvious benefits over what we have in Kolla today. I'm 
not looking to drop support for Kolla images at all, I simply want to 
continue decoupling things to the point where operators can pick and 
choose what works best for them. Stemming from this, I think moving 
these scripts out of the images provides a clear benefit to our 
consumers, both users of kolla and third parties such as TripleO. Let 
me explain why.


Normally, to run a docker image, a user will do 'docker run 
helloworld:latest'. In any non trivial application, config needs to be 
provided. In the vast majority of cases this is either provided via a 
bind mount (docker run -v hello.conf:/etc/hello.conf helloworld:latest), 
or via environment variables (docker run --env HELLO=paul 
helloworld:latest). This is all bog standard stuff, something anyone 
who's spent an hour learning docker can understand.


Now, lets say someone wants to try out OpenStack with Docker, and they 
look at Kolla. First off they have to look at something called 
set_configs.py[2] - over 400 lines of Python. Next they need to 
understand what that script consumes, config.json [3]. The only 
reference for config.json is the files that live in kolla-ansible, a 
mass of jinja and assumptions about how the service will be run. Next, 
they need to figure out how to bind mount the config files and 
config.json into the container in a way that can be consumed by 
set_configs.py (which by the way, requires the base kolla image in all 
cases). This is only for the config. For the service start-up command, 
this needs to also be provided in config.json. This command is then 
parsed out and written to a location in the image, which is consumed by 
a series of start/extend-start shell scripts. Kolla is *unique* in this 
regard: no other project in the container world interfaces with 
images in this way. Being a snowflake here is not a good 
thing. I'm still waiting to hear from a real-world operator who would 
prefer to spend time learning all of the above over doing:


  docker run -v /etc/keystone:/etc/keystone --entrypoint 
/usr/bin/keystone keystone:latest [args]


This is the Docker API, it's easy to understand and pretty much the 
standard at this point.
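
For contrast, here is a toy sketch (not kolla's actual code) of the copy-files-then-return-command flow that set_configs.py implements, using the real config.json keys ("command", and "config_files" entries with "source"/"dest"; ownership and permission handling is omitted):

```python
import json
import os
import shutil


def apply_config(config_json_path, dest_root="/"):
    """Toy version of kolla's set_configs.py: copy each declared
    config file into place, then return the service command."""
    with open(config_json_path) as f:
        config = json.load(f)
    for item in config.get("config_files", []):
        # Re-root the destination so this sketch can run anywhere.
        dest = os.path.join(dest_root, item["dest"].lstrip("/"))
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        # The real script also applies the "owner" and "perm" keys.
        shutil.copy(item["source"], dest)
    return config["command"]
```

Even this stripped-down version shows the extra indirection: the operator has to author a JSON manifest and understand a custom copy step before the container's actual command ever runs, versus expressing the same thing directly in the `docker run` invocation above.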


The other argument is that this removes the possibility of immutable 
infrastructure. The concern is that, with the new approach, a rookie 
operator will modify one of the start scripts - resulting in uncertainty 
that what was first deployed matches what is currently running. But with 
the way Kolla is now, an operator can still do this! They can restart 
containers with a custom entrypoint or additional bind mounts, they can 
exec in and change config files, etc. etc. Kolla containers have never 
been immutable, and we're bending over backwards to artificially try and 
make that the case. We can't protect a bad or inexperienced operator from 
shooting themselves in the foot this way; there are better ways of doing so. 
If/when Docker or the upstream container world solves this problem, it 
would then make sense for Kolla to follow suit.


On the face of it, what the spec proposes is a simple change; it should 
not radically pull the carpet out from under people, or even change the way 
kolla-ansible works in the near term. If consumers such as tripleo or 
other parties feel it would in fact do so, please do let me know and we 
can discuss and mitigate these problems.


Cheers,
-Paul

[0] https://review.openstack.org/#/c/550958/
[1] https://github.com/openstack/loci
[2] 
https://github.com/openstack/kolla/blob/master/docker/base/set_configs.py
[3] 
https://github.com/openstack/kolla-ansible/blob/master/ansible/roles/keystone/templates/keystone.json.j2
[4] 
http://eavesdrop.openstack.org/meetings/kolla/2018/kolla.2018-04-04-16.00.log.txt


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ironic][Tripleo] ipmitool issues HP machines

2018-04-05 Thread Dan Prince
On Wed, Apr 4, 2018 at 1:27 PM, Jim Rollenhagen  wrote:
> On Wed, Apr 4, 2018 at 1:18 PM, Jim Rollenhagen 
> wrote:
>>
>> On Wed, Apr 4, 2018 at 8:39 AM, Dan Prince  wrote:
>>>
>>> Kind of a support question but figured I'd ask here in case there are
>>> suggestions for workarounds for specific machines.
>>>
>>> Setting up a new rack of mixed machines this week and hit this issue
>>> with HP machines using the ipmi power driver for Ironic. Curious if
>>> anyone else has seen this before? The same commands work great with my
>>> Dell boxes!
>>>
>>> -
>>>
>>> [root@localhost ~]# cat x.sh
>>> set -x
>>> # this is how Ironic sends its IPMI commands it fails
>>> echo -n password > /tmp/tmprmdOOv
>>> ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv
>>> power status
>>>
>>> # this works great
>>> ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power
>>> status
>>>
>>> [root@localhost ~]# bash x.sh
>>> + echo -n password
>>> + ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv
>>> power status
>>> Error: Unable to establish IPMI v2 / RMCP+ session
>>> + ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power
>>> status
>>> Chassis Power is on
>>
>>
>> Very strange. A tcpdump of both would probably be enlightening. :)
>>
>> Also curious what version of ipmitool this is, maybe you're hitting an old
>> bug.
>
>
> https://sourceforge.net/p/ipmitool/bugs/90/ would seem like a prime suspect
> here.

Thanks for the suggestion Jim! I tried a few very short passwords
and no dice so far. Looking into the tcpdump info a bit now.
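One class of culprit worth ruling out when `-f` fails while `-P` works is how the password file gets parsed (trailing newlines, length caps). The sketch below is purely illustrative of that class of pitfall - these helpers are hypothetical, not ipmitool's actual parsing code:

```python
# Illustrative sketch of how a password read from a file can end up
# differing from the same password given on the command line.
# Hypothetical helpers; NOT ipmitool's real implementation.
import os
import tempfile

def read_password_naive(path):
    # Read the file verbatim, trailing newline and all.
    with open(path) as f:
        return f.read()

def read_password_first_line(path, limit=20):
    # Read only the first line, strip the newline, and cap the length
    # (RMCP+ / IPMI v2.0 passwords are limited to 20 bytes).
    with open(path) as f:
        return f.readline().rstrip("\n")[:limit]

# 'echo password > file' appends a newline; 'echo -n password > file'
# does not. A naive reader disagrees with the CLI in the first case.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("password\n")

print(repr(read_password_naive(path)))       # 'password\n'
print(repr(read_password_first_line(path)))  # 'password'
os.remove(path)
```

Comparing what the BMC receives in each case (via the tcpdump Jim suggested) would confirm or rule out this kind of mismatch.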

I'm in a bit of a rush, so I may hack a quick patch into Ironic to make
ipmitool use the -P option so I can proceed, and loop back to fix this
properly a bit later.

Dan

>
> // jim
>
>



Re: [openstack-dev] [Ironic][Tripleo] ipmitool issues HP machines

2018-04-05 Thread Dan Prince
On Wed, Apr 4, 2018 at 1:18 PM, Jim Rollenhagen  wrote:
> On Wed, Apr 4, 2018 at 8:39 AM, Dan Prince  wrote:
>>
>> Kind of a support question but figured I'd ask here in case there are
>> suggestions for workarounds for specific machines.
>>
>> Setting up a new rack of mixed machines this week and hit this issue
>> with HP machines using the ipmi power driver for Ironic. Curious if
>> anyone else has seen this before? The same commands work great with my
>> Dell boxes!
>>
>> -
>>
>> [root@localhost ~]# cat x.sh
>> set -x
>> # this is how Ironic sends its IPMI commands it fails
>> echo -n password > /tmp/tmprmdOOv
>> ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv
>> power status
>>
>> # this works great
>> ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power
>> status
>>
>> [root@localhost ~]# bash x.sh
>> + echo -n password
>> + ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv
>> power status
>> Error: Unable to establish IPMI v2 / RMCP+ session
>> + ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power
>> status
>> Chassis Power is on
>
>
> Very strange. A tcpdump of both would probably be enlightening. :)

Ack, I will see about getting these.

>
> Also curious what version of ipmitool this is, maybe you're hitting an old
> bug.

RHEL 7.5 so this:

ipmitool-1.8.18-7.el7.rpm

Dan

>
> // jim
>
>



Re: [openstack-dev] [horizon] Do we want new meeting time?

2018-04-05 Thread Ivan Kolodyazhny
Hi team,

It's a friendly reminder that we've got voting open [1] until the next
meeting. If you would like to attend Horizon meetings, please select the
time options that work for you.

[1] https://doodle.com/poll/ei5gstt73d8v3a35

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Wed, Mar 21, 2018 at 6:40 PM, Ivan Kolodyazhny  wrote:

> Hi team,
>
> As was discussed at the PTG, we usually have very few participants in our
> weekly meetings. I hope it's mostly because the meeting time is not
> convenient for many of us.
>
> Let's try to re-schedule Horizon weekly meetings and get more attendees
> there. I've created a doodle for it [1]. Please, vote for the best time for
> you.
>
>
> [1] https://doodle.com/poll/ei5gstt73d8v3a35
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
>


[openstack-dev] [neutron] [OVN] Tempest API / Scenario tests and OVN metadata

2018-04-05 Thread Lucas Alvares Gomes
Hi,

The tests below are failing in the tempest API / Scenario job that
runs in the networking-ovn gate (non-voting):

neutron_tempest_plugin.api.admin.test_quotas_negative.QuotasAdminNegativeTestJSON.test_create_port_when_quotas_is_full
neutron_tempest_plugin.api.test_routers.RoutersIpV6Test.test_router_interface_status
neutron_tempest_plugin.api.test_routers.RoutersTest.test_router_interface_status
neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_prefixlen
neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_quota
neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_subnet_cidr

Digging a bit into it, I noticed that, with the exception of the two
"test_router_interface_status" tests (ipv6 and ipv4), all the other tests
are failing because of the way metadata works in networking-ovn.

Taking "test_create_port_when_quotas_is_full" as an example: the
reason it fails is that, when OVN metadata is enabled, networking-ovn
creates a metadata port at the moment a network is created [0], and that
port already fulfills the quota limit set by that test [1].

That port will also allocate an IP from the subnet, which will cause
the rest of the tests to fail with a "No more IP addresses available
on network ..." error.
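To make the quota interaction concrete, here is a toy model: the test sets the port quota assuming a fresh network consumes no ports, but the backend has already used the only slot. This is an illustration only, not the real Neutron quota engine:

```python
# Toy model of the quota interaction described above; this is not
# the real Neutron quota engine, just an illustration.

class PortQuota:
    def __init__(self, limit):
        self.limit = limit
        self.used = 0

    def create_port(self):
        if self.used >= self.limit:
            raise RuntimeError("OverQuota: port limit reached")
        self.used += 1

# The test assumes a fresh network holds 0 ports and sets the limit to 1.
quota = PortQuota(limit=1)

# With OVN metadata enabled, the backend creates a metadata port as a
# side effect of network creation, silently consuming the only slot.
quota.create_port()          # the metadata port

try:
    quota.create_port()      # the port the test actually tries to create
except RuntimeError as exc:
    print(exc)               # OverQuota: port limit reached
```

The same pattern explains the "No more IP addresses available" failures: the metadata port's IP allocation consumes an address the test expected to be free.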

This is not very trivial to fix because:

1. Tempest should be backend agnostic. So, adding a conditional in the
tempest test to check whether OVN is being used or not doesn't sound
correct.

2. Creating a port to be used by the metadata agent is a core part of
the design implementation for the metadata functionality [2]

So, I'm sending this email to try to figure out the best approach to
deal with this problem and start working towards making that job voting
in our gate. Here are some ideas:

1. Simply disable the tests that are affected by the metadata approach.

2. Disable metadata for the tempest API / Scenario tests (here's a
test patch doing it [3])

3. Same as 1., but also create similar tempest tests specific to OVN
somewhere else (in the networking-ovn tree?!)

What do you think would be the best way to work around this problem? Any
other ideas?

As for the "test_router_interface_status" tests that are failing
independently of the metadata, there's a bug reporting the problem [4],
so we should just fix it.

[0] 
https://github.com/openstack/networking-ovn/blob/f3f5257fc465bbf44d589cc16e9ef7781f6b5b1d/networking_ovn/common/ovn_client.py#L1154
[1] 
https://github.com/openstack/neutron-tempest-plugin/blob/35bf37d1830328d72606f9c790b270d4fda2b854/neutron_tempest_plugin/api/admin/test_quotas_negative.py#L66
[2] 
https://docs.openstack.org/networking-ovn/latest/contributor/design/metadata_api.html#overview-of-proposed-approach
[3] https://review.openstack.org/#/c/558792/
[4] https://bugs.launchpad.net/networking-ovn/+bug/1713835

Cheers,
Lucas



Re: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap

2018-04-05 Thread Javier Pena


- Original Message -
> On Tue, Apr 3, 2018 at 10:00 AM, Javier Pena  wrote:
> >
> >> Greeting folks,
> >>
> >> During the last PTG we spent time discussing some ideas around an
> >> All-In-One
> >> installer, using 100% of the TripleO bits to deploy a single node
> >> OpenStack
> >> very similar with what we have today with the containerized undercloud and
> >> what we also have with other tools like Packstack or Devstack.
> >>
> >> https://etherpad.openstack.org/p/tripleo-rocky-all-in-one
> >>
> >
> > I'm really +1 to this. And as a Packstack developer, I'd love to see this
> > as a
> > mid-term Packstack replacement. So let's dive into the details.
> 
> Curious on this one actually, do you see a need for continued
> baremetal support? Today we support both baremetal and containers.
> Perhaps "support" is a strong word. We support both in terms of
> installation but only containers now have fully supported upgrades.
> 
> The interfaces we have today still support baremetal and containers
> but there were some suggestions about getting rid of baremetal support
> and only having containers. If we were to remove baremetal support
> though, could we keep the Packstack case intact by just using
> containers instead?

I don't have a strong opinion on this. As long as we can update containers 
with under-review packages, I guess we should be OK. That said, I still 
think some kind of baremetal support is a good idea to catch 
co-installability issues, if it is not very expensive to maintain.

Regards,
Javier

> 
> Dan
> 
> >
> >> One of the problems that we're trying to solve here is to give a simple
> >> tool
> >> for developers so they can both easily and quickly deploy an OpenStack for
> >> their needs.
> >>
> >> "As a developer, I need to deploy OpenStack in a VM on my laptop, quickly
> >> and
> >> without complexity, reproducing the same exact same tooling as TripleO is
> >> using."
> >> "As a Neutron developer, I need to develop a feature in Neutron and test
> >> it
> >> with TripleO in my local env."
> >> "As a TripleO dev, I need to implement a new service and test its
> >> deployment
> >> in my local env."
> >> "As a developer, I need to reproduce a bug in TripleO CI that blocks the
> >> production chain, quickly and simply."
> >>
> >
> > "As a packager, I want an easy/low overhead way to test updated packages
> > with TripleO bits, so I can make sure they will not break any automation".
> >
> >> Probably more use cases, but to me that's what came into my mind now.
> >>
> >> Dan kicked-off a doc patch a month ago:
> >> https://review.openstack.org/#/c/547038/
> >> And I just went ahead and proposed a blueprint:
> >> https://blueprints.launchpad.net/tripleo/+spec/all-in-one
> >> So hopefully we can start prototyping something during Rocky.
> >>
> >> Before talking about the actual implementation, I would like to gather
> >> feedback from people interested by the use-cases. If you recognize
> >> yourself
> >> in these use-cases and you're not using TripleO today to test your things
> >> because it's too complex to deploy, we want to hear from you.
> >> I want to see feedback (positive or negative) about this idea. We need to
> >> gather ideas, use cases, needs, before we go design a prototype in Rocky.
> >>
> >
> > I would like to offer help with initial testing once there is something in
> > the repos, so count me in!
> >
> > Regards,
> > Javier
> >
> >> Thanks everyone who'll be involved,
> >> --
> >> Emilien Macchi
> >>
> >
> 
> 



Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-04-05 Thread Gorka Eguileor
On 04/04, Matt Riedemann wrote:
> On 4/2/2018 6:59 AM, Gorka Eguileor wrote:
> > I can only see one benefit from implementing this feature in Cinder
> > versus doing it in Nova, and that is that we can preserve the volume's
> > UUID, but I don't think this is even relevant for this use case, so why
> > is it better to implement this in Cinder than in Nova?
>
> With a new image, the volume_image_metadata in the volume would also be
> wrong, and I don't think nova should (or even can) update that information.
> So nova re-imaging the volume doesn't seem like a good fit to me given
> Cinder "owns" the volume along with any metadata about it.
>
> If Cinder isn't agreeable to this new re-image API, then I think we're stuck

Hi Matt,

I didn't mean to imply that the Cinder team is against this proposal, I
just want to make sure that Cinder is the right place to do it and that
we will actually get some benefits from doing it in Cinder, because
right now I don't see that many...



> with the original proposal of creating a new volume and swapping out the
> root disk, along with all of the problems that can arise from that (original
> volume type is gone, tenant goes over-quota, what do we do with the original
> volume (delete it?), etc).
>
> --
>
> Thanks,
>
> Matt
>

This is what I thought the Nova alternative was, which is why I didn't
understand the image metadata issue.

For clarification, the original volume type cannot be gone, as the type
delete operation prevents in-use volume types from being deleted; and if
for some reason it were gone (though I don't see how), Cinder would find
itself with the exact same problem, so there's no difference here.

The flow you are describing is basically what the generic implementation
of that functionality would do in Cinder:

- Create a new volume from image using the same volume type
- Swap the volume information like we do in the live migration case
- Delete the original volume
- Nova will have to swap the root volume (request new connection
  information for that volume and attach it to the node).
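The steps above can be sketched as follows. All of the names here (FakeBackend, reimage_volume, backend_id) are hypothetical illustrations of the control flow, not Cinder's actual code or API:

```python
# Rough sketch of the generic re-image flow outlined above.
# Hypothetical names throughout; NOT Cinder's real implementation.
import uuid

class FakeBackend:
    """Stands in for a storage backend that can clone images quickly."""
    def __init__(self):
        self.volumes = set()

    def create_from_image(self, image_id, volume_type):
        vid = str(uuid.uuid4())
        self.volumes.add(vid)
        return vid

    def delete(self, backend_id):
        self.volumes.remove(backend_id)

def reimage_volume(volume, image_id, backend):
    """Re-image a volume while preserving its user-facing UUID."""
    # 1. Create a new volume from the image, using the same volume type.
    new_backend_id = backend.create_from_image(image_id, volume["volume_type"])
    old_backend_id = volume["backend_id"]
    # 2. Swap the records (as in the live-migration case): the same
    #    user-facing volume now points at the new backend volume.
    volume["backend_id"] = new_backend_id
    volume["image_metadata"] = {"image_id": image_id}
    # 3. Delete the original backend volume.
    backend.delete(old_backend_id)
    # 4. Nova's side: request new connection information for the volume
    #    and re-attach it to the node (not modeled here).
    return volume

backend = FakeBackend()
vol = {"id": "vol-1", "volume_type": "gold",
       "backend_id": backend.create_from_image("img-old", "gold"),
       "image_metadata": {"image_id": "img-old"}}
reimage_volume(vol, "img-new", backend)
print(vol["id"], vol["image_metadata"]["image_id"])  # vol-1 img-new
```

Note how the user-facing "id" never changes while the backing volume and image metadata do - the UUID-preservation benefit discussed below.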

Because the alternative is for Cinder to download the image and dd it
into the original volume, which breaks all the optimizations that Cinder
has for speed and storage saving in the backend (there would be no
cloning).

So reading your response, I now count two benefits if this is done by Cinder:

- Preserve volume UUID
- Remove an unlikely race condition where someone deletes the volume type
  between Nova deleting the original volume and creating the new one (in
  that order, to avoid the quota issue) while no other volume is using
  that volume type.

I guess the user facing volume UUID preservation is good enough reason
to have this API in Cinder, as one would assume re-imaging a volume
would never result in having a new volume ID.

But just to be clear, Nova will have to initialize the connection with
the re-imaged volume and attach it again to the node, since in all cases
(except when falling back to downloading the image and dd-ing it to the
volume) the result will be a new volume in the backend.

Cheers,
Gorka.



Re: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap

2018-04-05 Thread Bogdan Dobrelya

On 4/3/18 9:57 PM, Wesley Hayutin wrote:



On Tue, 3 Apr 2018 at 13:53 Dan Prince wrote:


On Tue, Apr 3, 2018 at 10:00 AM, Javier Pena wrote:
 >
 >> Greeting folks,
 >>
 >> During the last PTG we spent time discussing some ideas around
an All-In-One
 >> installer, using 100% of the TripleO bits to deploy a single
node OpenStack
 >> very similar with what we have today with the containerized
undercloud and
 >> what we also have with other tools like Packstack or Devstack.
 >>
 >> https://etherpad.openstack.org/p/tripleo-rocky-all-in-one
 >>
 >
 > I'm really +1 to this. And as a Packstack developer, I'd love to
see this as a
 > mid-term Packstack replacement. So let's dive into the details.

Curious on this one actually, do you see a need for continued
baremetal support? Today we support both baremetal and containers.
Perhaps "support" is a strong word. We support both in terms of
installation but only containers now have fully supported upgrades.

The interfaces we have today still support baremetal and containers
but there were some suggestions about getting rid of baremetal support
and only having containers. If we were to remove baremetal support
though, could we keep the Packstack case intact by just using
containers instead?

Dan


Hey, a couple of thoughts:
1.  I've added this topic to the RDO meeting tomorrow.
2.  Just a thought: the "elf owl" is the world's smallest owl, at least 
according to the internets [1]. Maybe the all-in-one could be nicknamed 
tripleo elf? Talon is cool too.


+1 for elf as a smallest owl :)


3.  From a CI perspective, I see this being very helpful with:
   a: faster run times generally, but especially for upgrade tests. It 
may be possible to have upgrades gating tripleo projects again.

   b: enabling more packaging tests to be done with TripleO
   c: if developers dig it, we have a better chance at getting TripleO 
into other projects' check jobs / third-party jobs, where current 
requirements and run times are prohibitive.
   d: generally speaking, replacing packstack / devstack in devel and CI 
workflows where they still exist.

   e: improved utilization of our resources in RDO-Cloud

It would be interesting to me to see more design and a little more 
thought put into the potential use cases before we get too far along. 
Looks like there is a good start to that here [2].

I'll add some comments with the potential use cases for CI.

/me is very happy to see this moving! Thanks all

[1] https://en.wikipedia.org/wiki/Elf_owl
[2] 
https://review.openstack.org/#/c/547038/1/doc/source/install/advanced_deployment/all_in_one.rst



 >
 >> One of the problems that we're trying to solve here is to give a
simple tool
 >> for developers so they can both easily and quickly deploy an
OpenStack for
 >> their needs.
 >>
 >> "As a developer, I need to deploy OpenStack in a VM on my
laptop, quickly and
 >> without complexity, reproducing the same exact same tooling as
TripleO is
 >> using."
 >> "As a Neutron developer, I need to develop a feature in Neutron
and test it
 >> with TripleO in my local env."
 >> "As a TripleO dev, I need to implement a new service and test
its deployment
 >> in my local env."
 >> "As a developer, I need to reproduce a bug in TripleO CI that
blocks the
 >> production chain, quickly and simply."
 >>
 >
 > "As a packager, I want an easy/low overhead way to test updated
packages with TripleO bits, so I can make sure they will not break
any automation".
 >
 >> Probably more use cases, but to me that's what came into my mind
now.
 >>
 >> Dan kicked-off a doc patch a month ago:
 >> https://review.openstack.org/#/c/547038/
 >> And I just went ahead and proposed a blueprint:
 >> https://blueprints.launchpad.net/tripleo/+spec/all-in-one
 >> So hopefully we can start prototyping something during Rocky.
 >>
 >> Before talking about the actual implementation, I would like to
gather
 >> feedback from people interested by the use-cases. If you
recognize yourself
 >> in these use-cases and you're not using TripleO today to test
your things
 >> because it's too complex to deploy, we want to hear from you.
 >> I want to see feedback (positive or negative) about this idea.
We need to
 >> gather ideas, use cases, needs, before we go design a prototype
in Rocky.
 >>
 >
 > I would like to offer help with initial testing once there is
something in the repos, so count me in!
 >
 > Regards,
 > Javier
 >
 >> Thanks everyone who'll be involved,
 >> --
 >> Emilien Macchi
 >>
 >>

[openstack-dev] [qa] QA Office Hours 9:00 UTC is cancelled

2018-04-05 Thread Ghanshyam Mann
Hi All,

Today's QA Office Hours slot at 9:00 UTC is cancelled due to the unavailability of members.

-gmann
