Re: [openstack-dev] [kolla] [tripleo] On moving start scripts out of Kolla images

2018-04-06 Thread Steven Dake (stdake)
+1.


From: Mark Goddard 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, April 6, 2018 at 11:41 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [kolla] [tripleo] On moving start scripts out of 
Kolla images

One benefit of the kolla API that I've not seen mentioned yet (sorry if I 
missed it) is that you can change files on the host without affecting the 
running container. Bind mounts don't have this property. This is handy for 
reconfiguration/upgrade operations, where we write out a new set of config 
before recreating/restarting the container. COPY_ONCE is the king of immutable 
here, but even for COPY_ALWAYS, this works as long as the container doesn't 
restart while the config files are being written.
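
The property described here can be sketched with plain files, no Docker needed. In the sketch below, two throwaway directories stand in for the host config directory and the container filesystem, and a `cp` stands in for the copy that kolla's set_configs.py performs at container start; all paths are illustrative:

```shell
#!/bin/sh
# Sketch of COPY_ALWAYS-style semantics: config is copied into the
# container at start time, so host-side edits are invisible until the
# container is recreated/restarted. Directories stand in for the host
# config dir and the container filesystem; paths are illustrative.
set -e
host=$(mktemp -d)
container=$(mktemp -d)

echo "debug = False" > "$host/keystone.conf"

# "container start": the staged config is copied into place
cp "$host/keystone.conf" "$container/keystone.conf"

# operator writes out a new config while the container is running...
echo "debug = True" > "$host/keystone.conf"

# ...but the running container still sees the old copy
cat "$container/keystone.conf"    # -> debug = False

# a bind mount would instead expose the new (possibly half-written)
# file immediately, which is what makes restarts during
# reconfiguration risky in that model
```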

Mark

On 5 April 2018 at 21:41, Michał Jastrzębski wrote:
So I'll reiterate the comment I made in BCN. In the previous thread we
praised how Kolla provided a stable API for images, and I agree that it
was a great design choice (to provide a stable API, not necessarily how
the API looks), and this change would break it. So *if* we decide to do
it, we need to follow deprecation, which means we could deprecate these
files in this release and start removing them in the next.

Support for LOCI in kolla-ansible is a good thing, but I don't think
changing the Kolla image API is required for that. LOCI provides a base
image argument, so we could simply create a base image with all the
extended-start and set-config mechanisms and a shim to source the
extended-start script that belongs to a particular container. We will
need a kolla layer image anyway because set_config is there to stay (as
Martin pointed out, it's a valuable tool fixing a real issue and it's
used by more projects than just kolla-ansible). We could add another
script that would look like extended_start.sh -> source
$CONTAINER_NAME-extended-start.sh and copy all of kolla's extended start
scripts to a directory with proper naming (I believe this is the
solution that Sam came up with shortly after BCN). This is purely
technical and not that hard to do, and much quicker and easier than
deprecating the API...
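
A rough sketch of that shim idea follows; the script names, locations, and the CONTAINER_NAME variable are illustrative, not an agreed interface:

```shell
#!/bin/sh
# Sketch of the proposed shim: a generic extended_start.sh in the shared
# base image sources a per-service <container>-extended-start.sh script.
# Names and locations are illustrative only.
set -e
dir=$(mktemp -d)

# the base image would ship this generic shim
cat > "$dir/extended_start.sh" <<'EOF'
#!/bin/sh
script="$(dirname "$0")/${CONTAINER_NAME}-extended-start.sh"
# source the service-specific hook, if one exists for this container
[ -f "$script" ] && . "$script"
EOF

# kolla would install each service's extended start script with the
# matching name, e.g. keystone-extended-start.sh
cat > "$dir/keystone-extended-start.sh" <<'EOF'
echo "keystone bootstrap steps would run here"
EOF

CONTAINER_NAME=keystone sh "$dir/extended_start.sh"
# -> keystone bootstrap steps would run here
```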

On 5 April 2018 at 12:28, Martin André wrote:
> On Thu, Apr 5, 2018 at 2:16 PM, Paul Bourke wrote:
>> Hi all,
>>
>> This mail is to serve as a follow on to the discussion during yesterday's
>> team meeting[4], which was regarding the desire to move start scripts out of
>> the kolla images [0]. There's a few factors at play, and it may well be best
>> left to discuss in person at the summit in May, but hopefully we can get at
>> least some of this hashed out before then.
>>
>> I'll start by summarising why I think this is a good idea, and then attempt
>> to address some of the concerns that have come up since.
>>
>> First off, to be frank, this effort is driven by wanting to add support
>> for loci images[1] in kolla-ansible. I think it would be unreasonable for
>> anyone to argue this is a bad objective to have, loci images have very
>> obvious benefits over what we have in Kolla today. I'm not looking to drop
>> support for Kolla images at all, I simply want to continue decoupling things
>> to the point where operators can pick and choose what works best for them.
>> Stemming from this, I think moving these scripts out of the images provides
>> a clear benefit to our consumers, both users of kolla and third parties such
>> as triple-o. Let me explain why.
>
> It's still very obscure to me how removing the scripts from kolla
> images will benefit consumers. If the reason is that you want to
> re-use them in other, non-kolla images, I believe we should package
> the scripts. I've left some comments in your spec review.
>
>> Normally, to run a docker image, a user will do 'docker run
>> helloworld:latest'. In any non-trivial application, config needs to be
>> provided. In the vast majority of cases this is either provided via a bind
>> mount (docker run -v hello.conf:/etc/hello.conf helloworld:latest), or via
>> environment variables (docker run --env HELLO=paul helloworld:latest). This
>> is all bog standard stuff, something anyone who's spent an hour learning
>> docker can understand.
>>
>> Now, let's say someone wants to try out OpenStack with Docker, and they look
>> at Kolla. First off they have to look at something called set_configs.py[2]
>> - over 400 lines of Python. Next they need to understand what that script
>> consumes, config.json [3]. The only reference for config.json is the files
>> that live in kolla-ansible, a mass of jinja and assumptions about how the
>> service will be run. Next, they need to figure out how to bind mount the
>> config files and config.json into the container in a way that can be
>> consumed by set_configs.py (which by the way, 

[openstack-dev] [keystone] Keystone Team Update - Week of 2 April 2018

2018-04-06 Thread Colleen Murphy
# Keystone Team Update - Week of 2 April 2018

## News

Relatively quiet week. Most of our activity was focused on polishing up specs.

## Open Specs

Search query: https://goo.gl/eyTktx

No new specs have been proposed since last week. We're getting some good 
feedback on the cross-project spec to implement default roles[1], which will 
need more discussion and clarification. One hot debate was (is?) over what the 
role names should be (as a team we're really good at naming things).

The JWT spec[2] also needs some attentive eyes on it, and the unified limits 
spec[3] may need to have its scope narrowed down. The application credentials 
spec[4] is probably one or two revisions away from being ready to merge.

[1] https://review.openstack.org/523973
[2] https://review.openstack.org/541903
[3] https://review.openstack.org/540803
[4] https://review.openstack.org/396331

## Recently Merged Changes

Search query: https://goo.gl/hdD9Kw

We merged 8 changes this week.

## Changes that need Attention

Search query: https://goo.gl/tW5PiH

There are 38 changes that are passing CI, not in merge conflict, have no 
negative reviews and aren't proposed by bots.

## Milestone Outlook

https://releases.openstack.org/rocky/schedule.html

The keystone spec proposal freeze is in two weeks.

## Help with this newsletter

Help contribute to this newsletter by editing the etherpad: 
https://etherpad.openstack.org/p/keystone-team-newsletter

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] [tripleo] On moving start scripts out of Kolla images

2018-04-06 Thread Mark Goddard
One benefit of the kolla API that I've not seen mentioned yet (sorry if I
missed it) is that you can change files on the host without affecting the
running container. Bind mounts don't have this property. This is handy for
reconfiguration/upgrade operations, where we write out a new set of config
before recreating/restarting the container. COPY_ONCE is the king of
immutable here, but even for COPY_ALWAYS, this works as long as the
container doesn't restart while the config files are being written.

Mark

On 5 April 2018 at 21:41, Michał Jastrzębski  wrote:

> So I'll reiterate the comment I made in BCN. In the previous thread we
> praised how Kolla provided a stable API for images, and I agree that it
> was a great design choice (to provide a stable API, not necessarily how
> the API looks), and this change would break it. So *if* we decide to do
> it, we need to follow deprecation, which means we could deprecate these
> files in this release and start removing them in the next.
>
> Support for LOCI in kolla-ansible is a good thing, but I don't think
> changing the Kolla image API is required for that. LOCI provides a base
> image argument, so we could simply create a base image with all the
> extended-start and set-config mechanisms and a shim to source the
> extended-start script that belongs to a particular container. We will
> need a kolla layer image anyway because set_config is there to stay (as
> Martin pointed out, it's a valuable tool fixing a real issue and it's
> used by more projects than just kolla-ansible). We could add another
> script that would look like extended_start.sh -> source
> $CONTAINER_NAME-extended-start.sh and copy all of kolla's extended start
> scripts to a directory with proper naming (I believe this is the
> solution that Sam came up with shortly after BCN). This is purely
> technical and not that hard to do, and much quicker and easier than
> deprecating the API...
>
> On 5 April 2018 at 12:28, Martin André  wrote:
> > On Thu, Apr 5, 2018 at 2:16 PM, Paul Bourke 
> wrote:
> >> Hi all,
> >>
> >> This mail is to serve as a follow on to the discussion during
> yesterday's
> >> team meeting[4], which was regarding the desire to move start scripts
> out of
> >> the kolla images [0]. There's a few factors at play, and it may well be
> best
> >> left to discuss in person at the summit in May, but hopefully we can
> get at
> >> least some of this hashed out before then.
> >>
> >> I'll start by summarising why I think this is a good idea, and then
> attempt
> >> to address some of the concerns that have come up since.
> >>
> >> First off, to be frank, this effort is driven by wanting to add
> support
> >> for loci images[1] in kolla-ansible. I think it would be unreasonable
> for
> >> anyone to argue this is a bad objective to have, loci images have very
> >> obvious benefits over what we have in Kolla today. I'm not looking to
> drop
> >> support for Kolla images at all, I simply want to continue decoupling
> things
> >> to the point where operators can pick and choose what works best for
> them.
> >> Stemming from this, I think moving these scripts out of the images
> provides
> >> a clear benefit to our consumers, both users of kolla and third parties
> such
> >> as triple-o. Let me explain why.
> >
> > It's still very obscure to me how removing the scripts from kolla
> > images will benefit consumers. If the reason is that you want to
> > re-use them in other, non-kolla images, I believe we should package
> > the scripts. I've left some comments in your spec review.
> >
> >> Normally, to run a docker image, a user will do 'docker run
> >> helloworld:latest'. In any non-trivial application, config needs to be
> >> provided. In the vast majority of cases this is either provided via a
> bind
> >> mount (docker run -v hello.conf:/etc/hello.conf helloworld:latest), or
> via
> >> environment variables (docker run --env HELLO=paul helloworld:latest).
> This
> >> is all bog standard stuff, something anyone who's spent an hour learning
> >> docker can understand.
> >>
> >> Now, let's say someone wants to try out OpenStack with Docker, and they
> look
> >> at Kolla. First off they have to look at something called
> set_configs.py[2]
> >> - over 400 lines of Python. Next they need to understand what that
> script
> >> consumes, config.json [3]. The only reference for config.json is the
> files
> >> that live in kolla-ansible, a mass of jinja and assumptions about how
> the
> >> service will be run. Next, they need to figure out how to bind mount the
> >> config files and config.json into the container in a way that can be
> >> consumed by set_configs.py (which by the way, requires the base kolla
> image
> >> in all cases). This is only for the config. For the service start up
> >> command, this needs to also be provided in config.json. This command is
> then
> >> parsed out and written to a location in the image, which is consumed by
> a
> >> series of start/extend start shell scripts. 
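
For context on the interface being described, a minimal config.json of that shape might look like the following. The service, command, and paths are illustrative examples, not canonical values; set_configs.py copies each listed file from source to dest with the given owner and permissions before the declared command is executed:

```shell
#!/bin/sh
# Illustrative config.json of the shape set_configs.py consumes: a start
# command plus files to copy into place with ownership and permissions.
# The service, command, and paths are examples, not canonical values.
set -e
tmp=$(mktemp -d)
cat > "$tmp/config.json" <<'EOF'
{
    "command": "/usr/bin/keystone-wsgi-public",
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/keystone.conf",
            "dest": "/etc/keystone/keystone.conf",
            "owner": "keystone",
            "perm": "0600"
        }
    ]
}
EOF
python3 -m json.tool < "$tmp/config.json" > /dev/null && echo "valid JSON"
```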

Re: [openstack-dev] [nova] [placement] placement update 18-14

2018-04-06 Thread Jay Pipes

Thanks, as always, for the excellent summary emails, Chris. Comments inline.

On 04/06/2018 01:54 PM, Chris Dent wrote:


This is a "contract" style update. New stuff will not be added to the
lists.

# Most Important

There doesn't appear to be anything new with regard to most
important. That which was important remains important. At the
scheduler team meeting at the start of the week there was talk of
working out ways to trim the amount of work in progress by using the
nova priorities tracking etherpad to help sort things out:

     https://etherpad.openstack.org/p/rocky-nova-priorities-tracking

Update provider tree and nested allocation candidates remain
critical basic functionality on which much else is based. With most
of provider tree done, it's really on nested allocation candidates.


Yup. And that series is deadlocked on a disagreement about whether 
granular request groups should be "separate by default" (meaning: if you 
request multiple groups of resources, the expectation is that they will 
be served by distinct resource providers) or "unrestricted by default" 
(meaning: if you request multiple groups of resources, those resources 
may or may not be serviced by distinct resource providers).


For folks' information, the latter (unrestricted by default) is the
*existing* behaviour as outlined in the granular request groups spec:


http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/granular-resource-requests.html

Specifically, it is Requirement 3 on the above spec that is the primary 
driver for this debate.


I currently have an action item to resolve this debate and move forward 
with a decision, whatever that may be.



# What's Changed

Quite a bit of provider tree related code has merged.

Some negotiation happened with regard to when/if the fixes for
shared providers are going to happen. I'm not sure how that was resolved;
if someone can follow up on that, that would be most excellent.


Sharing providers are in a weird place right now, agreed. We have landed 
lots of code on the placement side of the house for handling sharing 
providers. However, the nova-compute service still does not know about 
the providers that share resources with it. This makes it impossible 
right now to have a compute node with local disk storage as well as 
shared disk resources.



Most of the placement-req-filter series merged.

The spec for error codes in the placement API merged (code is in
progress and ready for review, see below).

# Questions

* Eric and I discussed earlier in the week that it might be a good
   time to start an #openstack-placement IRC channel, for two main
   reasons: break things up so as to limit the crosstalk in the often
   very busy #openstack-nova channel and to lend a bit of momentum
   for going in that direction. Is this okay with everyone? If not,
   please say so, otherwise I'll make it happen soon.


Cool with me. I know Matt has wanted a separate placement channel for a 
while now.



* Shared providers status?
   (I really think we need to make this go. It was one of the
   original value propositions of placement: being able to accurately
   manage shared disk.)


Agreed, but you know NUMA. And CPU pinning. And vGPUs. And FPGAs. 
And physnet network bandwidth scheduling. And... well, you get the idea.


Best,
-jay


# Bugs

* Placement related bugs not yet in progress:  https://goo.gl/TgiPXb
    15, -1 on last week
* In progress placement bugs: https://goo.gl/vzGGDQ
    13, +1 on last week

# Specs

These seem to be divided into three classes:

* Normal stuff
* Old stuff not getting attention or newer stuff that ought to be
   abandoned because of lack of support
* Anything related to the client side of using nested providers
   effectively. This apparently needs a lot of thinking. If there are
   some general sticking points we can extract and resolve, that
   might help move the whole thing forward?

* https://review.openstack.org/#/c/549067/
   VMware: place instances on resource pool
   (using update_provider_tree)

* https://review.openstack.org/#/c/545057/
   mirror nova host aggregates to placement API

* https://review.openstack.org/#/c/552924/
  Proposes NUMA topology with RPs

* https://review.openstack.org/#/c/544683/
  Account for host agg allocation ratio in placement

* https://review.openstack.org/#/c/552927/
  Spec for isolating configuration of placement database
  (This has a strong +2 on it but needs one more.)

* https://review.openstack.org/#/c/552105/
  Support default allocation ratios

* https://review.openstack.org/#/c/438640/
  Spec on preemptible servers

* https://review.openstack.org/#/c/556873/
    Handle nested providers for allocation candidates

* https://review.openstack.org/#/c/556971/
    Add Generation to Consumers

* https://review.openstack.org/#/c/557065/
    Proposes Multiple GPU types

* https://review.openstack.org/#/c/555081/
    Standardize CPU resource tracking

* 

Re: [openstack-dev] [kolla] [tripleo] On moving start scripts out of Kolla images

2018-04-06 Thread Mark Goddard
On Thu, 5 Apr 2018, 20:28 Martin André,  wrote:

> On Thu, Apr 5, 2018 at 2:16 PM, Paul Bourke 
> wrote:
> > Hi all,
> >
> > This mail is to serve as a follow on to the discussion during yesterday's
> > team meeting[4], which was regarding the desire to move start scripts
> out of
> > the kolla images [0]. There's a few factors at play, and it may well be
> best
> > left to discuss in person at the summit in May, but hopefully we can get
> at
> > least some of this hashed out before then.
> >
> > I'll start by summarising why I think this is a good idea, and then
> attempt
> > to address some of the concerns that have come up since.
> >
> > First off, to be frank, this effort is driven by wanting to add
> support
> > for loci images[1] in kolla-ansible. I think it would be unreasonable for
> > anyone to argue this is a bad objective to have, loci images have very
> > obvious benefits over what we have in Kolla today. I'm not looking to
> drop
> > support for Kolla images at all, I simply want to continue decoupling
> things
> > to the point where operators can pick and choose what works best for
> them.
> > Stemming from this, I think moving these scripts out of the images
> provides
> > a clear benefit to our consumers, both users of kolla and third parties
> such
> > as triple-o. Let me explain why.
>
> It's still very obscure to me how removing the scripts from kolla
> images will benefit consumers. If the reason is that you want to
> re-use them in other, non-kolla images, I believe we should package
> the scripts. I've left some comments in your spec review.
>

+1 to extracting and packaging the kolla API. This will make it easier to
test and document, allow for versioning, and make it a first class citizen
rather than a file in the build context of the base image. Plus, if it
really is as good as some people are arguing, then it should be shared.

For many of the other helper scripts that get bundled into the kolla
images, I can see an argument for pulling these up to the deployment layer.
These could easily be moved to kolla-ansible, and added via config.json. I
guess it would be useful to know whether other deployment tools (tripleo)
are using any of these - if they are shared then perhaps the images are the
best place for them.


> > Normally, to run a docker image, a user will do 'docker run
> > helloworld:latest'. In any non-trivial application, config needs to be
> > provided. In the vast majority of cases this is either provided via a
> bind
> > mount (docker run -v hello.conf:/etc/hello.conf helloworld:latest), or
> via
> > environment variables (docker run --env HELLO=paul helloworld:latest).
> This
> > is all bog standard stuff, something anyone who's spent an hour learning
> > docker can understand.
> >
> > Now, let's say someone wants to try out OpenStack with Docker, and they
> look
> > at Kolla. First off they have to look at something called
> set_configs.py[2]
> > - over 400 lines of Python. Next they need to understand what that script
> > consumes, config.json [3]. The only reference for config.json is the
> files
> > that live in kolla-ansible, a mass of jinja and assumptions about how the
> > service will be run. Next, they need to figure out how to bind mount the
> > config files and config.json into the container in a way that can be
> > consumed by set_configs.py (which by the way, requires the base kolla
> image
> > in all cases). This is only for the config. For the service start up
> > command, this needs to also be provided in config.json. This command is
> then
> > parsed out and written to a location in the image, which is consumed by a
> > series of start/extend start shell scripts. Kolla is *unique* in this
> > regard, no other project in the container world is interfacing with
> images
> > in this way. Being a snowflake in this regard is not a good thing. I'm
> still
> > waiting to hear from a real world operator who would prefer to spend time
> > learning the above to doing:
>
> You're pointing a very real documentation issue. I've mentioned in the
> other kolla thread that I have a stub for the kolla API documentation.
> I'll push a patch for what I have and we can iterate on that.
>
> >   docker run -v /etc/keystone:/etc/keystone keystone:latest --entrypoint
> > /usr/bin/keystone [args]
> >
> > This is the Docker API, it's easy to understand and pretty much the
> standard
> > at this point.
>
> Sure, using the docker API works for simpler cases, not too
> surprisingly once you start doing more funky things with your
> containers you quickly reach the docker API limitations. That's
> when the kolla API comes in handy.
> See for example this recent patch
> https://review.openstack.org/#/c/556673/ where we needed to change
> some file permission to the uid/gid of the user inside the container.
>
> The first iteration basically used the docker API and started an
> additional container to fix the permissions:
>
>   docker 

Re: [openstack-dev] [oslo.db] innodb OPTIMIZE TABLE ?

2018-04-06 Thread Michael Bayer
On Wed, Apr 4, 2018 at 5:00 AM, Gorka Eguileor  wrote:
> On 03/04, Jay Pipes wrote:
>> On 04/03/2018 11:07 AM, Michael Bayer wrote:
>> > The MySQL / MariaDB variants we use nowadays default to
>> > innodb_file_per_table=ON and we also set this flag to ON in installer
>> > tools like TripleO. The reason we like file per table is so that
>> > we don't grow an enormous ibdata file that can't be shrunk without
>> > rebuilding the database.  Instead, we have lots of little .ibd
>> > datafiles for each table throughout each openstack database.
>> >
>> > But now we have the issue that these files also can benefit from
>> > periodic optimization which can shrink them and also have a beneficial
>> > effect on performance.   The OPTIMIZE TABLE statement achieves this,
>> > but as would be expected it itself can lock tables for potentially a
>> > long time.   Googling around reveals a lot of controversy, as various
>> > users and publications suggest that OPTIMIZE is never needed and would
>> > have only a negligible effect on performance.   However here we seek
>> > to use OPTIMIZE so that we can reclaim disk space on tables that have
>> > lots of DELETE activity, such as keystone "token" and ceilometer
>> > "sample".
>> >
>> > Questions for the group:
>> >
>> > 1. is OPTIMIZE table worthwhile to be run for tables where the
>> > datafile has grown much larger than the number of rows we have in the
>> > table?
>>
>> Possibly, though it's questionable to use MySQL/InnoDB for storing transient
>> data that is deleted often like ceilometer samples and keystone tokens. A
>> much better solution is to use RDBMS partitioning so you can simply ALTER
>> TABLE .. DROP PARTITION those partitions that are no longer relevant (and
>> don't even bother DELETEing individual rows) or, in the case of Ceilometer
>> samples, don't use a traditional RDBMS for timeseries data at all...
>>
>> But since that is unfortunately already the case, yes it is probably a good
>> idea to OPTIMIZE TABLE on those tables.
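
The partition-based alternative sketched above boils down to DDL like the following. The table and partition names are hypothetical, and nothing here talks to a real database; the sketch only prints the statement such a maintenance job might issue:

```shell
#!/bin/sh
# Illustrative only: with RANGE partitioning (e.g. one partition per
# month of keystone tokens), expired rows are removed by dropping the
# whole partition instead of DELETEing row by row. Names are made up;
# this just prints the DDL a maintenance cron job might run.
set -e
table="keystone.token"
expired_partition="p2018_03"

cat <<SQL
ALTER TABLE $table DROP PARTITION $expired_partition;
SQL
```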
>>
>> > 2. from people's production experience how safe is it to run OPTIMIZE,
>> > e.g. how long is it locking tables, etc.
>>
>> Is it safe? Yes.
>>
>> Does it lock the entire table for the duration of the operation? No. It uses
>> online DDL operations:
>>
>> https://dev.mysql.com/doc/refman/5.7/en/innodb-file-defragmenting.html
>>
>> Note that OPTIMIZE TABLE is mapped to ALTER TABLE tbl_name FORCE for InnoDB
>> tables.
>>
>> > 3. is there a heuristic we can use to measure when we might run this
>> > -.e.g my plan is we measure the size in bytes of each row in a table
>> > and then compare that in some ratio to the size of the corresponding
>> > .ibd file, if the .ibd file is N times larger than the logical data
>> > size we run OPTIMIZE ?
>>
>> I don't believe so, no. Most of what I see recommended is to simply run
>> OPTIMIZE TABLE in a cron job on each table periodically.
>>
>> > 4. I'd like to propose this job of scanning table datafile sizes in
>> > ratio to logical data sizes, then running OPTIMIZE, be a utility
>> > script that is delivered via oslo.db, and would run for all innodb
>> > tables within a target MySQL/ MariaDB server generically.  That is, I
>> > really *dont* want this to be a script that Keystone, Nova, Ceilometer
>> > etc. are all maintaining delivering themselves.   this should be done
>> > as a generic pass on a whole database (noting, again, we are only
>> > running it for very specific InnoDB tables that we observe have a poor
>> > logical/physical size ratio).
>>
>> I don't believe this should be in oslo.db. This is strictly the purview of
>> deployment tools and should stay there, IMHO.
>>
>
> Hi,
>
> As far as I know most projects do "soft deletes" where we just flag the
> rows as deleted and don't remove them from the DB, so it's only when we
> use a management tool and run the "purge" command that we actually
> remove these rows.
>
> Since running the optimize without purging would be meaningless, I'm
> wondering if we should trigger the OPTIMIZE also within the purging
> code.  This way we could avoid ineffective runs of the optimize command
> when no purge has happened and even when we do the optimization we could
> skip the ratio calculation altogether for tables where no rows have been
> deleted (the ratio hasn't changed).
>

the issue is that this OPTIMIZE will block on Galera unless it is run
on a per-individual-node basis along with changing the wsrep_OSU_method
parameter. That is way out of scope to be redundantly hardcoded in
multiple openstack projects, and there's no portable way for Keystone
and others to get at the individual Galera node addresses.   Putting it
in oslo.db would at least give most of this logic one place to live,
but even then it needs to run against multiple Galera nodes and needs
deployment-specific configuration.   *Unless* we say the OPTIMIZE here
will be short for a freshly purged table, and just let it block.
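
The heuristic from question 3 upthread can be sketched as below. The sizes are hard-coded stand-ins: in a real run the logical size would come from data_length + index_length in information_schema.TABLES, the physical size from stat on the table's .ibd file, and on Galera the eventual OPTIMIZE would need the per-node wsrep_OSU_method handling discussed above:

```shell
#!/bin/sh
# Sketch of the proposed heuristic: only OPTIMIZE when the .ibd file is
# at least N times larger than the logical data it holds. The numbers
# are stand-ins for information_schema/stat lookups.
set -e
table="keystone.token"
logical_bytes=250000000      # data_length + index_length (stand-in)
file_bytes=1000000000        # size of the table's .ibd file (stand-in)
threshold=2                  # act when physical >= 2x logical

ratio=$(( file_bytes / logical_bytes ))
if [ "$ratio" -ge "$threshold" ]; then
    echo "would run: OPTIMIZE TABLE $table (datafile is ${ratio}x logical size)"
else
    echo "skip $table (datafile is ${ratio}x logical size)"
fi
```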


> Ideally 

Re: [openstack-dev] [kolla] [tripleo] On moving start scripts out of Kolla images

2018-04-06 Thread Jeffrey Zhang
+1 for kolla-api

Migrating all scripts from kolla (the images) to kolla-ansible will
make the images hard to use downstream. Martin explained this clearly;
we need some API to make the images easier to use. Operators don't need
to read the whole set_config.py file. Just knowing what the config.json
file looks like and what its effects are is enough, so a doc is enough.


For images, we need to add some common functions before using them,
rather than using the upstream image directly. For example, if we
support loci, we will mostly use upstream infra images like mariadb,
redis, etc. But are they really ready for production use directly?
There are some concerns here:

- drop root: do they work when run without root?
- init process: do they contain an init process binary?
- configuration: different images may use different configuration
  methods; should we unify them?
- missing packages: what if an image lacks some packages we need?


One possible solution, I think, is to use an upstream image + kolla-api
to generate an image with these features.

On Sat, Apr 7, 2018 at 6:47 AM, Steven Dake (stdake) 
wrote:

> Mark,
>
>
>
> TLDR good proposal
>
>
>
> I don’t think Paul was proposing what you proposed.  However:
>
>
>
> You make a strong case for separately packaging the API (which is mostly
> set_configs.py and the JSON API + docs + samples).  I am super surprised
> nobody has ever proposed this in the past, but now is as good a time as any
> to propose a good model for managing the JSON->set_configs.py API.  We could unit
> test this with extreme clarity, document with extreme clarity, and provide
> an easier path for people to submit changes to the API that they require to
> run the OpenStack containers.  Finally, it would provide complete semver
> semantics for managing change and provide perfect backwards compatibility.
>
>
>
> A separate repo for this proposed api split makes sense to me.  I think
> initially we would want to seed with the kolla core team but be open to
> anyone that reviews + contributes to join the kolla-api core team (just as
> happens with other kolla deliverables).
>
>
>
> This should reduce cross-project developer friction which was an implied
> but unstated problem in the various threads over the last week and produce
> the many other beneficial effects APIs produce along with the benefits you
> stated above.
>
>
>
> I’m not sure if this approach is technically sound, but I’d be in favor of
> this approach if it were not too disruptive, provided full backwards
> compatibility and was felt to be an improvement by the consumers of kolla
> images.  I don’t think deprecation is something that is all that viable
> with an API model like the one we have nor this new repo and think we need
> to set clear boundaries around what would/would not be done.
>
>
>
> I do know that a change of this magnitude is a lot of work for the
> community to take on – and just like adding or removing any deliverable in
> kolla, would require a majority vote from the CR team.
>
>
>
> Also, repeating myself, I don’t think the current API is good or perfect,
> nor that perfection is necessarily possible, but this may help drive
> towards that mythical perfection that interested parties seek to achieve.
>
>
> Cheers
>
> -steve
>
>
>
> *From: *Mark Goddard 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Friday, April 6, 2018 at 12:30 PM
> *To: *"OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> *Subject: *Re: [openstack-dev] [kolla] [tripleo] On moving start scripts
> out of Kolla images
>
>
>
>
>
> On Thu, 5 Apr 2018, 20:28 Martin André,  wrote:
>
> On Thu, Apr 5, 2018 at 2:16 PM, Paul Bourke 
> wrote:
> > Hi all,
> >
> > This mail is to serve as a follow on to the discussion during yesterday's
> > team meeting[4], which was regarding the desire to move start scripts
> out of
> > the kolla images [0]. There's a few factors at play, and it may well be
> best
> > left to discuss in person at the summit in May, but hopefully we can get
> at
> > least some of this hashed out before then.
> >
> > I'll start by summarising why I think this is a good idea, and then
> attempt
> > to address some of the concerns that have come up since.
> >
> > First off, to be frank, this effort is driven by wanting to add
> support
> > for loci images[1] in kolla-ansible. I think it would be unreasonable for
> > anyone to argue this is a bad objective to have, loci images have very
> > obvious benefits over what we have in Kolla today. I'm not looking to
> drop
> > support for Kolla images at all, I simply want to continue decoupling
> things
> > to the point where operators can pick and choose what works best for
> them.
> > Stemming from this, I think moving these scripts out of the images
> provides
> > 

Re: [openstack-dev] [nova] [placement] placement update 18-14

2018-04-06 Thread Eric Fried
>> it's really on nested allocation candidates.
> 
> Yup. And that series is deadlocked on a disagreement about whether
> granular request groups should be "separate by default" (meaning: if you
> request multiple groups of resources, the expectation is that they will
> be served by distinct resource providers) or "unrestricted by default"
> (meaning: if you request multiple groups of resources, those resources
> may or may not be serviced by distinct resource providers).

This is really a granular thing, not a nested thing.  I was holding up
the nrp-in-alloc-cands spec [1] for other reasons, but I've stopped
doing that now.  We should be able to proceed with the nrp work.  I'm
working on the granular code, wherein I can hopefully isolate the
separate-vs-unrestricted decision such that we can go either way once
that issue is resolved.

[1] https://review.openstack.org/#/c/556873/
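For context, the separate-vs-unrestricted disagreement can be made concrete with a sketch of a granular request. The numbered-group parameter names below follow the spec discussion and are assumptions for illustration, not a settled API:

```python
from urllib.parse import urlencode

# Two numbered request groups.  Under "separate by default" the VCPU/RAM
# group and the VF group must be served by distinct resource providers;
# under "unrestricted by default" they may land on the same provider.
params = urlencode({
    "resources1": "VCPU:2,MEMORY_MB:2048",
    "resources2": "SRIOV_NET_VF:1",
})
print("/allocation_candidates?" + params)
```

Whichever default wins, the open question is only about the semantics of such a request, not its shape.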

>> Some negotiation happened with regard to when/if the fixes for
>> shared providers are going to happen. I'm not sure how that was resolved;
>> if someone can follow up on that, it would be most excellent.

This is the subject of another thread [2] that's still "dangling".  We
discussed it in the sched meeting this week [3] and concluded [4] that
we shouldn't do it in Rocky.  BUT Tetsuro later pointed out that part of
the series in question [5] is still needed to satisfy NRP-in-alloc-cands
(return the whole tree's providers in provider_summaries - even the ones
that aren't providing resource to the request).  That patch changes
behavior, so needs a microversion (mostly done already in that patch),
so needs a spec.  We haven't yet resolved whether this is truly needed,
so haven't assigned a body to the spec work.  I believe Jay is still
planning [6] to parse and respond to the ML thread.  After he clones
himself.

[2]
http://lists.openstack.org/pipermail/openstack-dev/2018-March/128944.html
[3]
http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-04-02-14.00.log.html#l-91
[4]
http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-04-02-14.00.log.html#l-137
[5] https://review.openstack.org/#/c/558045/
[6]
http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-04-02-14.00.log.html#l-104

>> * Shared providers status?
>>    (I really think we need to make this go. It was one of the
>>    original value propositions of placement: being able to accurately
>>    manage shared disk.)
> 
> Agreed, but you know NUMA. And CPU pinning. And vGPUs. And FPGAs.
> And physnet network bandwidth scheduling. And... well, you get the idea.

Right.  I will say that Tetsuro has been doing an excellent job slinging
code for this, though.  So the bottleneck is really reviewer bandwidth
(already an issue for the work we *are* trying to fit in Rocky).

If it's still on the table by Stein, we ought to consider making it a
high priority.  (Our Rocky punchlist seems to be favoring "urgent" over
"important" to some extent.)

-efried

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] [OVN] Tempest API / Scenario tests and OVN metadata

2018-04-06 Thread Daniel Alvarez


> On 6 Apr 2018, at 19:04, Sławek Kapłoński  wrote:
> 
> Hi,
> 
> Another idea is to modify the test so that it will:
> 1. Check how many ports are in the tenant,
> 2. Set the quota to the actual number of ports + 1 instead of the
> hardcoded 1 as it is now,
> 3. Try to add 2 ports - exactly as it is now.
> 
Cool, I like this one :-)
Good idea.

> I think that this should be still backend agnostic and should fix this 
> problem.
> 
>> Wiadomość napisana przez Sławek Kapłoński  w dniu 
>> 06.04.2018, o godz. 17:08:
>> 
>> Hi,
>> 
>> I don’t know how networking-ovn is working but I have one question.
>> 
>> 
>>> Wiadomość napisana przez Daniel Alvarez Sanchez  w 
>>> dniu 06.04.2018, o godz. 15:30:
>>> 
>>> Hi,
>>> 
>>> Thanks Lucas for writing this down.
>>> 
>>> On Thu, Apr 5, 2018 at 11:35 AM, Lucas Alvares Gomes 
>>>  wrote:
>>> Hi,
>>> 
>>> The tests below are failing in the tempest API / Scenario job that
>>> runs in the networking-ovn gate (non-voting):
>>> 
>>> neutron_tempest_plugin.api.admin.test_quotas_negative.QuotasAdminNegativeTestJSON.test_create_port_when_quotas_is_full
>>> neutron_tempest_plugin.api.test_routers.RoutersIpV6Test.test_router_interface_status
>>> neutron_tempest_plugin.api.test_routers.RoutersTest.test_router_interface_status
>>> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_prefixlen
>>> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_quota
>>> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_subnet_cidr
>>> 
>>> Digging a bit into it I noticed that, with the exception of the two
>>> "test_router_interface_status" (ipv6 and ipv4), all other tests are
>>> failing because of the way metadata works in networking-ovn.
>>> 
>>> Taking the "test_create_port_when_quotas_is_full" as an example. The
>>> reason why it fails is that when the OVN metadata is enabled,
>>> networking-ovn will create a metadata port at the moment a network is
>>> created [0] and that will already fulfill the quota limit set by that
>>> test [1].
>>> 
>>> That port will also allocate an IP from the subnet which will cause
>>> the rest of the tests to fail with a "No more IP addresses available
>>> on network ..." error.
>>> 
>>> With ML2/OVS we would run into the same quota problem if DHCP were
>>> enabled for the created subnets. This means that if we modify the current
>>> tests to enable DHCP on them and account for this extra port, it would be
>>> valid for networking-ovn as well. Does that sound good, or do we still
>>> want to isolate quotas?
>> 
>> If DHCP is enabled for networking-ovn, will it also use one more port or
>> not? If so, then you will still have the same problem: with DHCP, as in
>> ML2/OVS you will have one port created, while for networking-ovn it will
>> be 2 ports.
>> If it’s not like that, then I think that this solution, with a comment in
>> the test code explaining why DHCP is enabled, should be good IMO.
>> 
>>> 
>>> This is not very trivial to fix because:
>>> 
>>> 1. Tempest should be backend agnostic. So, adding a conditional in the
>>> tempest test to check whether OVN is being used or not doesn't sound
>>> correct.
>>> 
>>> 2. Creating a port to be used by the metadata agent is a core part of
>>> the design implementation for the metadata functionality [2]
>>> 
>>> So, I'm sending this email to try to figure out what would be the best
>>> approach to deal with this problem and start working towards having
>>> that job to be voting in our gate. Here are some ideas:
>>> 
>>> 1. Simply disable the tests that are affected by the metadata approach.
>>> 
>>> 2. Disable metadata for the tempest API / Scenario tests (here's a
>>> test patch doing it [3])
>>> 
>>> IMHO, we don't want to do this as metadata is likely to be enabled in all 
>>> the
>>> clouds either using ML2/OVS or OVN so it's good to keep exercising
>>> this part.
>>> 
>>> 
>>> 3. Same as 1. but also create similar tempest tests specific for OVN
>>> somewhere else (in the networking-ovn tree?!)
>>> 
>>> As we discussed on IRC I'm keen on doing this instead of getting bits in
>>> tempest to do different things depending on the backend used. Unless
>>> we want to enable DHCP on the subnets that these tests create :)
>>> 
>>> 
>>> What do you think would be the best way to work around this problem? Any
>>> other ideas?
>>> 
>>> As for the "test_router_interface_status" tests that are failing
>>> independent of the metadata, there's a bug reporting the problem here
>>> [4]. So we should just fix it.
>>> 
>>> [0] 
>>> https://github.com/openstack/networking-ovn/blob/f3f5257fc465bbf44d589cc16e9ef7781f6b5b1d/networking_ovn/common/ovn_client.py#L1154
>>> [1] 
>>> https://github.com/openstack/neutron-tempest-plugin/blob/35bf37d1830328d72606f9c790b270d4fda2b854/neutron_tempest_plugin/api/admin/test_quotas_negative.py#L66
>>> [2] 
>>> 

Re: [openstack-dev] [neutron] [OVN] Tempest API / Scenario tests and OVN metadata

2018-04-06 Thread Daniel Alvarez


> On 6 Apr 2018, at 19:04, Sławek Kapłoński  wrote:
> 
> Hi,
> 
> Another idea is to modify the test so that it will:
> 1. Check how many ports are in the tenant,
> 2. Set the quota to the actual number of ports + 1 instead of the
> hardcoded 1 as it is now,
> 3. Try to add 2 ports - exactly as it is now.
> 
> I think that this should be still backend agnostic and should fix this 
> problem.
> 
>> Wiadomość napisana przez Sławek Kapłoński  w dniu 
>> 06.04.2018, o godz. 17:08:
>> 
>> Hi,
>> 
>> I don’t know how networking-ovn is working but I have one question.
>> 
>> 
>>> Wiadomość napisana przez Daniel Alvarez Sanchez  w 
>>> dniu 06.04.2018, o godz. 15:30:
>>> 
>>> Hi,
>>> 
>>> Thanks Lucas for writing this down.
>>> 
>>> On Thu, Apr 5, 2018 at 11:35 AM, Lucas Alvares Gomes 
>>>  wrote:
>>> Hi,
>>> 
>>> The tests below are failing in the tempest API / Scenario job that
>>> runs in the networking-ovn gate (non-voting):
>>> 
>>> neutron_tempest_plugin.api.admin.test_quotas_negative.QuotasAdminNegativeTestJSON.test_create_port_when_quotas_is_full
>>> neutron_tempest_plugin.api.test_routers.RoutersIpV6Test.test_router_interface_status
>>> neutron_tempest_plugin.api.test_routers.RoutersTest.test_router_interface_status
>>> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_prefixlen
>>> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_quota
>>> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_subnet_cidr
>>> 
>>> Digging a bit into it I noticed that, with the exception of the two
>>> "test_router_interface_status" (ipv6 and ipv4), all other tests are
>>> failing because of the way metadata works in networking-ovn.
>>> 
>>> Taking the "test_create_port_when_quotas_is_full" as an example. The
>>> reason why it fails is that when the OVN metadata is enabled,
>>> networking-ovn will create a metadata port at the moment a network is
>>> created [0] and that will already fulfill the quota limit set by that
>>> test [1].
>>> 
>>> That port will also allocate an IP from the subnet which will cause
>>> the rest of the tests to fail with a "No more IP addresses available
>>> on network ..." error.
>>> 
>>> With ML2/OVS we would run into the same quota problem if DHCP were
>>> enabled for the created subnets. This means that if we modify the current
>>> tests to enable DHCP on them and account for this extra port, it would be
>>> valid for networking-ovn as well. Does that sound good, or do we still
>>> want to isolate quotas?
>> 
>> If DHCP is enabled for networking-ovn, will it also use one more port or
>> not? If so, then you will still have the same problem: with DHCP, as in
>> ML2/OVS you will have one port created, while for networking-ovn it will
>> be 2 ports.
>> If it’s not like that, then I think that this solution, with a comment in
>> the test code explaining why DHCP is enabled, should be good IMO.
No, networking-ovn won’t create an extra port when DHCP is enabled, so it
should work fine.
Thanks Slaweq!
>> 
>>> 
>>> This is not very trivial to fix because:
>>> 
>>> 1. Tempest should be backend agnostic. So, adding a conditional in the
>>> tempest test to check whether OVN is being used or not doesn't sound
>>> correct.
>>> 
>>> 2. Creating a port to be used by the metadata agent is a core part of
>>> the design implementation for the metadata functionality [2]
>>> 
>>> So, I'm sending this email to try to figure out what would be the best
>>> approach to deal with this problem and start working towards having
>>> that job to be voting in our gate. Here are some ideas:
>>> 
>>> 1. Simply disable the tests that are affected by the metadata approach.
>>> 
>>> 2. Disable metadata for the tempest API / Scenario tests (here's a
>>> test patch doing it [3])
>>> 
>>> IMHO, we don't want to do this as metadata is likely to be enabled in all 
>>> the
>>> clouds either using ML2/OVS or OVN so it's good to keep exercising
>>> this part.
>>> 
>>> 
>>> 3. Same as 1. but also create similar tempest tests specific for OVN
>>> somewhere else (in the networking-ovn tree?!)
>>> 
>>> As we discussed on IRC I'm keen on doing this instead of getting bits in
>>> tempest to do different things depending on the backend used. Unless
>>> we want to enable DHCP on the subnets that these tests create :)
>>> 
>>> 
>>> What do you think would be the best way to work around this problem? Any
>>> other ideas?
>>> 
>>> As for the "test_router_interface_status" tests that are failing
>>> independent of the metadata, there's a bug reporting the problem here
>>> [4]. So we should just fix it.
>>> 
>>> [0] 
>>> https://github.com/openstack/networking-ovn/blob/f3f5257fc465bbf44d589cc16e9ef7781f6b5b1d/networking_ovn/common/ovn_client.py#L1154
>>> [1] 
>>> 

Re: [openstack-dev] [kolla] [tripleo] On moving start scripts out of Kolla images

2018-04-06 Thread Steven Dake (stdake)
Mark,

TLDR good proposal

I don’t think Paul was proposing what you proposed.  However:

You make a strong case for separately packaging the API (which is mostly 
setcfg.py and the JSON API + docs + samples).  I am super surprised nobody has 
ever proposed this in the past, but now is as good a time as any to propose a 
good model for managing the JSON->setcfg.py API.  We could unit test it with 
extreme clarity, document it with extreme clarity, and provide an easier path 
for people to submit the changes to the API that they require to run the 
OpenStack containers.  Finally, it would provide complete semver semantics for 
managing change and perfect backwards compatibility.

A separate repo for this proposed API split makes sense to me.  I think we 
would initially want to seed it with the kolla core team, but be open to anyone 
who reviews + contributes joining the kolla-api core team (just as happens 
with other kolla deliverables).

This should reduce cross-project developer friction, which was an implied but 
unstated problem in the various threads over the last week, and produce the 
many other beneficial effects APIs bring, along with the benefits you stated 
above.

I’m not sure if this approach is technically sound, but I’d be in favor of it 
if it were not too disruptive, provided full backwards compatibility, and was 
felt to be an improvement by the consumers of kolla images.  I don’t think 
deprecation is all that viable with an API model like the one we have, nor 
with this new repo, and I think we need to set clear boundaries around what 
would/would not be done.

I do know that a change of this magnitude is a lot of work for the community 
to take on, and just like adding or removing any deliverable in kolla, it 
would require a majority vote from the CR team.

Also, repeating myself: I don’t think the current API is either good or 
perfect, and I don’t think perfection is necessarily possible, but this may 
help drive towards that mythical perfection that interested parties seek to 
achieve.

Cheers
-steve

From: Mark Goddard 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, April 6, 2018 at 12:30 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [kolla] [tripleo] On moving start scripts out of 
Kolla images


On Thu, 5 Apr 2018, 20:28 Martin André, 
> wrote:
On Thu, Apr 5, 2018 at 2:16 PM, Paul Bourke 
> wrote:
> Hi all,
>
> This mail is to serve as a follow on to the discussion during yesterday's
> team meeting[4], which was regarding the desire to move start scripts out of
> the kolla images [0]. There's a few factors at play, and it may well be best
> left to discuss in person at the summit in May, but hopefully we can get at
> least some of this hashed out before then.
>
> I'll start by summarising why I think this is a good idea, and then attempt
> to address some of the concerns that have come up since.
>
> First off, to be frank, this effort is driven by wanting to add support
> for loci images[1] in kolla-ansible. I think it would be unreasonable for
> anyone to argue this is a bad objective to have, loci images have very
> obvious benefits over what we have in Kolla today. I'm not looking to drop
> support for Kolla images at all, I simply want to continue decoupling things
> to the point where operators can pick and choose what works best for them.
> Stemming from this, I think moving these scripts out of the images provides
> a clear benefit to our consumers, both users of kolla and third parties such
> as triple-o. Let me explain why.

It's still very obscure to me how removing the scripts from kolla
images will benefit consumers. If the reason is that you want to
re-use them in other, non-kolla images, I believe we should package
the scripts. I've left some comments in your spec review.

+1 to extracting and packaging the kolla API. This will make it easier to test 
and document, allow for versioning, and make it a first class citizen rather 
than a file in the build context of the base image. Plus, if it really is as 
good as some people are arguing, then it should be shared.

For many of the other helper scripts that get bundled into the kolla images, I 
can see an argument for pulling these up to the deployment layer. These could 
easily be moved to kolla-ansible, and added via config.json. I guess it would 
be useful to know whether other deployment tools (tripleo) are using any of 
these - if they are shared then perhaps the images are the best place for them.
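For readers unfamiliar with the contract being discussed: the deployment tool writes a config.json (command to exec plus a list of files to copy) on the host, and the image-side set_config machinery honours it. A minimal sketch of that copy step, assuming the documented shape of config.json — this is not kolla's actual implementation:

```python
import json
import shutil


def copy_service_config(config_path):
    """Copy config files listed in a kolla-style config.json into place.

    Minimal sketch only: the real set_configs.py in kolla images also
    handles owner/perm fields, globbing, and the COPY_ONCE vs COPY_ALWAYS
    strategies discussed elsewhere in this thread.
    """
    with open(config_path) as f:
        cfg = json.load(f)
    for item in cfg.get("config_files", []):
        shutil.copy(item["source"], item["dest"])
    # The deployment tool also specifies what the container should exec.
    return cfg.get("command")
```

Anything honouring this small contract — kolla image, loci image, or otherwise — can be driven by the same deployment tool, which is the decoupling argument in a nutshell.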


> Normally, to run a docker image, a user will do 'docker run
> helloworld:latest'. In any non-trivial application, config needs to be
> provided. In the vast majority of cases this is either provided via a bind
> mount 

Re: [openstack-dev] [neutron] [OVN] Tempest API / Scenario tests and OVN metadata

2018-04-06 Thread Miguel Angel Ajo Pelayo
This issue isn't only for networking-ovn; please note that it happens with
a few other vendor plugins (like NSX). At least this is something we have
found in downstream certifications.

Cheers,

On Sat, Apr 7, 2018, 12:36 AM Daniel Alvarez  wrote:

>
>
> > On 6 Apr 2018, at 19:04, Sławek Kapłoński  wrote:
> >
> > Hi,
> >
> > Another idea is to modify the test so that it will:
> > 1. Check how many ports are in the tenant,
> > 2. Set the quota to the actual number of ports + 1 instead of the
> hardcoded 1 as it is now,
> > 3. Try to add 2 ports - exactly as it is now.
> >
> Cool, I like this one :-)
> Good idea.
>
> > I think that this should be still backend agnostic and should fix this
> problem.
> >
> >> Wiadomość napisana przez Sławek Kapłoński  w dniu
> 06.04.2018, o godz. 17:08:
> >>
> >> Hi,
> >>
> >> I don’t know how networking-ovn is working but I have one question.
> >>
> >>
> >>> Wiadomość napisana przez Daniel Alvarez Sanchez 
> w dniu 06.04.2018, o godz. 15:30:
> >>>
> >>> Hi,
> >>>
> >>> Thanks Lucas for writing this down.
> >>>
> >>> On Thu, Apr 5, 2018 at 11:35 AM, Lucas Alvares Gomes <
> lucasago...@gmail.com> wrote:
> >>> Hi,
> >>>
> >>> The tests below are failing in the tempest API / Scenario job that
> >>> runs in the networking-ovn gate (non-voting):
> >>>
> >>>
> neutron_tempest_plugin.api.admin.test_quotas_negative.QuotasAdminNegativeTestJSON.test_create_port_when_quotas_is_full
> >>>
> neutron_tempest_plugin.api.test_routers.RoutersIpV6Test.test_router_interface_status
> >>>
> neutron_tempest_plugin.api.test_routers.RoutersTest.test_router_interface_status
> >>>
> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_prefixlen
> >>>
> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_quota
> >>>
> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_subnet_cidr
> >>>
> >>> Digging a bit into it I noticed that, with the exception of the two
> >>> "test_router_interface_status" (ipv6 and ipv4), all other tests are
> >>> failing because of the way metadata works in networking-ovn.
> >>>
> >>> Taking the "test_create_port_when_quotas_is_full" as an example. The
> >>> reason why it fails is that when the OVN metadata is enabled,
> >>> networking-ovn will create a metadata port at the moment a network is
> >>> created [0] and that will already fulfill the quota limit set by that
> >>> test [1].
> >>>
> >>> That port will also allocate an IP from the subnet which will cause
> >>> the rest of the tests to fail with a "No more IP addresses available
> >>> on network ..." error.
> >>>
> >>> With ML2/OVS we would run into the same quota problem if DHCP were
> >>> enabled for the created subnets. This means that if we modify the
> current tests
> >>> to enable DHCP on them and account for this extra port, it would be
> valid for
> >>> networking-ovn as well. Does that sound good, or do we still want to
> isolate quotas?
> >>
> >> If DHCP is enabled for networking-ovn, will it also use one more port
> or not? If so, then you will still have the same problem: with DHCP, as
> in ML2/OVS you will have one port created, while for networking-ovn it will
> be 2 ports.
> >> If it’s not like that, then I think that this solution, with a comment
> in the test code explaining why DHCP is enabled, should be good IMO.
> >>
> >>>
> >>> This is not very trivial to fix because:
> >>>
> >>> 1. Tempest should be backend agnostic. So, adding a conditional in the
> >>> tempest test to check whether OVN is being used or not doesn't sound
> >>> correct.
> >>>
> >>> 2. Creating a port to be used by the metadata agent is a core part of
> >>> the design implementation for the metadata functionality [2]
> >>>
> >>> So, I'm sending this email to try to figure out what would be the best
> >>> approach to deal with this problem and start working towards having
> >>> that job to be voting in our gate. Here are some ideas:
> >>>
> >>> 1. Simply disable the tests that are affected by the metadata approach.
> >>>
> >>> 2. Disable metadata for the tempest API / Scenario tests (here's a
> >>> test patch doing it [3])
> >>>
> >>> IMHO, we don't want to do this as metadata is likely to be enabled in
> all the
> >>> clouds either using ML2/OVS or OVN so it's good to keep exercising
> >>> this part.
> >>>
> >>>
> >>> 3. Same as 1. but also create similar tempest tests specific for OVN
> >>> somewhere else (in the networking-ovn tree?!)
> >>>
> >>> As we discussed on IRC I'm keen on doing this instead of getting bits
> in
> >>> tempest to do different things depending on the backend used. Unless
> >>> we want to enable DHCP on the subnets that these tests create :)
> >>>
> >>>
> >>> What do you think would be the best way to work around this problem?
> >>> Any other ideas?
> >>>
> >>> As for the "test_router_interface_status" tests that are failing
> >>> independent of the 

Re: [openstack-dev] Replacing pbr's autodoc feature with sphinxcontrib-apidoc

2018-04-06 Thread Stephen Finucane
On Wed, 2018-03-28 at 15:31 +0100, Stephen Finucane wrote:
> As noted last week [1], we're trying to move away from pbr's autodoc
> feature as part of the new docs PTI. To that end, I've created
> sphinxcontrib-apidoc, which should do what pbr was previously doing for
> us via a Sphinx extension.
> 
>   https://pypi.org/project/sphinxcontrib-apidoc/
> 
> This works by reading some configuration from your documentation's
> 'conf.py' file and using this to call 'sphinx-apidoc'. It means we no
> longer need pbr to do this for us.
> 
> I have pushed version 0.1.0 to PyPi already but before I add this to
> global requirements, I'd like to ensure things are working as expected.
> smcginnis was kind enough to test this out on glance and it seemed to
> work for him but I'd appreciate additional data points. The
> configuration steps for this extension are provided in the above link.
> To test this yourself, you simply need to do the following:
> 
>1. Add 'sphinxcontrib-apidoc' to your test-requirements.txt or
>   doc/requirements.txt file
>2. Configure as noted above and remove the '[pbr]' and '[build_sphinx]'
>   configuration from 'setup.cfg'
>3. Replace 'python setup.py build_sphinx' with a call to 'sphinx-build'
>4. Run 'tox -e docs'
>5. Profit?
> 
> Be sure to let me know if anyone encounters issues. If not, I'll be
> pushing for this to be included in global requirements so we can start
> the migration.
> 
> Cheers,
> Stephen
> 
> [1] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128594.html

'sphinxcontrib.apidoc' has now been added to requirements [1]. The
README [2] provides a far more detailed overview of how one can migrate
from the pbr features than I gave above and I'd advise anyone making
changes to their documentation to follow that guide. Feel free to ping
me here or on IRC (stephenfin) if you've any questions.

Next up: deprecating this feature in pbr.

Stephen

[1] https://review.openstack.org/#/c/557532/
[2] https://github.com/sphinx-contrib/apidoc#migration-from-pbr
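For a quick picture, the conf.py settings the extension reads look roughly like the following. The package name is a placeholder and the exact option set may evolve, so treat the README above as authoritative:

```python
# doc/source/conf.py -- settings read by sphinxcontrib-apidoc.
# 'myproject' is a placeholder package name, not taken from this thread.
extensions = [
    'sphinxcontrib.apidoc',
]
apidoc_module_dir = '../../myproject'   # package to document, relative to conf.py
apidoc_output_dir = 'reference/api'     # where the generated .rst files land
apidoc_excluded_paths = ['tests']       # paths to skip, relative to module_dir
apidoc_separate_modules = True          # one page per module, like pbr's autodoc
```

With that in place, 'sphinx-build' regenerates the API stubs on every docs run, which is what the '[pbr]' section in setup.cfg used to do.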



[openstack-dev] [nova] [placement] placement update 18-14

2018-04-06 Thread Chris Dent


This is a "contract" style update. New stuff will not be added to the
lists.

# Most Important

There doesn't appear to be anything new with regard to most
important. That which was important remains important. At the
scheduler team meeting at the start of the week there was talk of
working out ways to trim the amount of work in progress by using the
nova priorities tracking etherpad to help sort things out:

https://etherpad.openstack.org/p/rocky-nova-priorities-tracking

Update provider tree and nested allocation candidates remain
critical basic functionality on which much else is based. With most
of provider tree done, it's really on nested allocation candidates.

# What's Changed

Quite a bit of provider tree related code has merged.

Some negotiation happened with regard to when/if the fixes for
shared providers are going to happen. I'm not sure how that was resolved;
if someone can follow up on that, it would be most excellent.

Most of the placement-req-filter series merged.

The spec for error codes in the placement API merged (code is in
progress and ready for review, see below).

# Questions

* Eric and I discussed earlier in the week that it might be a good
  time to start an #openstack-placement IRC channel, for two main
  reasons: break things up so as to limit the crosstalk in the often
  very busy #openstack-nova channel and to lend a bit of momentum
  for going in that direction. Is this okay with everyone? If not,
  please say so, otherwise I'll make it happen soon.

* Shared providers status?
  (I really think we need to make this go. It was one of the
  original value propositions of placement: being able to accurately
  manage shared disk.)

# Bugs

* Placement related bugs not yet in progress:  https://goo.gl/TgiPXb
   15, -1 on last week
* In progress placement bugs: https://goo.gl/vzGGDQ
   13, +1 on last week

# Specs

These seem to be divided into three classes:

* Normal stuff
* Old stuff not getting attention or newer stuff that ought to be
  abandoned because of lack of support
* Anything related to the client side of using nested providers
  effectively. This apparently needs a lot of thinking. If there are
  some general sticking points we can extract and resolve, that
  might help move the whole thing forward?

* https://review.openstack.org/#/c/549067/
  VMware: place instances on resource pool
  (using update_provider_tree)

* https://review.openstack.org/#/c/545057/
  mirror nova host aggregates to placement API

* https://review.openstack.org/#/c/552924/
 Proposes NUMA topology with RPs

* https://review.openstack.org/#/c/544683/
 Account for host agg allocation ratio in placement

* https://review.openstack.org/#/c/552927/
 Spec for isolating configuration of placement database
 (This has a strong +2 on it but needs one more.)

* https://review.openstack.org/#/c/552105/
 Support default allocation ratios

* https://review.openstack.org/#/c/438640/
 Spec on preemptible servers

* https://review.openstack.org/#/c/556873/
   Handle nested providers for allocation candidates

* https://review.openstack.org/#/c/556971/
   Add Generation to Consumers

* https://review.openstack.org/#/c/557065/
   Proposes Multiple GPU types

* https://review.openstack.org/#/c/555081/
   Standardize CPU resource tracking

* https://review.openstack.org/#/c/502306/
   Network bandwidth resource provider

* https://review.openstack.org/#/c/509042/
   Propose counting quota usage from placement

# Main Themes

## Update Provider Tree

Most of the main guts of this have merged (huzzah!). What's left are
some loose end details, and clean handling of aggregates:

https://review.openstack.org/#/q/topic:bp/update-provider-tree

## Nested providers in allocation candidates

Representing nested providers in the response to GET
/allocation_candidates is required to actually make use of all the
topology that update provider tree will report. That work is in
progress at:

https://review.openstack.org/#/q/topic:bp/nested-resource-providers

https://review.openstack.org/#/q/topic:bp/nested-resource-providers-allocation-candidates

Note that some of this includes the up-for-debate shared handling.

## Request Filters

As far as I can tell this is mostly done (yay!) but there is a loose
end: We merged an updated spec to support multiple member_of
parameters, but it's not clear anybody is currently owning that:

 https://review.openstack.org/#/c/555413/

## Mirror nova host aggregates to placement

This makes it so some kinds of aggregate filtering can be done
"placement side" by mirroring nova host aggregates into placement
aggregates.


https://review.openstack.org/#/q/topic:bp/placement-mirror-host-aggregates

It's part of what will make the req filters above useful.

## Forbidden Traits

A way of expressing "I'd like resources that do _not_ have trait X".
This is ready for review:

  

Re: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Stein" release

2018-04-06 Thread Kashyap Chamarthy
On Thu, Apr 05, 2018 at 06:11:26PM -0500, Matt Riedemann wrote:
> On 4/5/2018 3:32 PM, Thomas Goirand wrote:
> > If you don't absolutely need new features from libvirt 3.2.0 and 3.0.0
> > is fine, please choose 3.0.0 as minimum.
> > 
> > If you don't absolutely need new features from qemu 2.9.0 and 2.8.0 is
> > fine, please choose 2.8.0 as minimum.
> > 
> > If you don't absolutely need new features from libguestfs 1.36 and 1.34
> > is fine, please choose 1.34 as minimum.
> 
> New features in the libvirt driver which depend on minimum versions of
> libvirt/qemu/libguestfs (or arch for that matter) are always conditional, so
> I think it's reasonable to go with the lower bound for Debian. We can still
> support the features for the newer versions if you're running a system with
> those versions, but not penalize people with slightly older versions if not.

Yep, we can trivially set the lower bound to the versions in 'Stretch'.  The
intention was never to "penalize" distributions w/ older versions.  I
was just checking if Debian 'Stretch' users can be spared from the
myriad of CPU-modelling related issues (see my other reply for
specifics) that are all fixed with 3.2.0 (and QEMU 2.9.0) by default --
without spending inordinate amounts of time on messy backporting
procedures, since all the other stable distributions are using those
versions.

I'll wait a day to hear from Zigo, then I'll just rewrite the patch[*] to
use what's currently in 'Stretch'.

[*] https://review.openstack.org/#/c/558171/

-- 
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Stein" release

2018-04-06 Thread Marcin Juszkiewicz
On 06.04.2018 at 12:07, Kashyap Chamarthy wrote:
>> - libvirt 4.1.0 compiled without issue, though the dh_install phase
>> failed with this error:
>>
>> dh_install: Cannot find (any matches for) "/usr/lib/*/wireshark/" (tried
>> in "." and "debian/tmp")
>> dh_install: libvirt-wireshark missing files: /usr/lib/*/wireshark/
>> dh_install: missing files, aborting
> That seems like a problem in the Debian packaging system, not in
> libvirt.  I double-checked with the upstream folks, and the install
> rules for the Wireshark plugin don't have /*/ in there.

Known bug in wireshark package:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=880428

Status: maybe one day...

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-04-06 Thread Sean McGinnis
> > 
> > How can we enable warning_is_error in the gate with the new PTI? It's 
> > easy enough to add the -W flag in tox.ini for local builds, but as you 
> > say the tox job is never called in the gate. In the gate zuul checks for 
> > it in the [build_sphinx] section of setup.cfg:
> > 
> > https://git.openstack.org/cgit/openstack-infra/zuul-jobs/tree/roles/sphinx/library/sphinx_check_warning_is_error.py#n23
> > 
> > [...]
> 
> I'd be more in favour of changing the zuul job to build with the '-W'
> flag. To be honest, there is no good reason to not have this flag
> enabled. I'm not sure that will be a popular opinion though as it may
> break some projects' builds (correctly, but still).
> 
> I'll propose a patch against zuul-jobs and see what happens :)
> 
> Stephen
> 

I am in favor of this too. We will probably need to give some teams some time
to get warnings fixed though. I haven't done any kind of extensive audit of
projects, but from a few I looked through, there are definitely a few that are
not erroring on warnings and are likely to be blocked if we suddenly flipped
the switch and errored on those.

This is a legitimate issue though. In Cinder we had -W in the tox docs job, but
since that is no longer being enforced in the gate, running "tox -e docs" from
a fresh clone of master was failing. We really do need some way to enforce this
so things like that do not happen.
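For reference, the gate-side check linked earlier in the thread essentially boils down to parsing `setup.cfg`; a standalone sketch of that idea (not the actual zuul role code) could look like:

```python
import configparser

def warning_is_error_enabled(setup_cfg_text):
    """Mirror the gate check: is [build_sphinx] warning-is-error set?"""
    parser = configparser.ConfigParser()
    parser.read_string(setup_cfg_text)
    # fallback covers both a missing option and a missing section.
    value = parser.get('build_sphinx', 'warning-is-error', fallback='0')
    return value.strip().lower() in ('1', 'true', 'yes')

with_flag = "[build_sphinx]\nwarning-is-error = 1\n"
without_flag = "[metadata]\nname = example\n"
```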

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] Adding a SDK to developer.openstack.org pages

2018-04-06 Thread Jeremy Stanley
On 2018-04-06 12:00:24 +1000 (+1000), Gilles Dubreuil wrote:
> I'd like to update the developer.openstack.org to add details about a new
> SDK.
> 
> What would be the corresponding repo? My searches landed me into
> https://docs.openstack.org/doc-contrib-guide/ which is about updating the
> docs.openstack.org but not developer.openstack.org. Is the developer section
> inside the docs section?

Looks like we could do a better job of linking to the relevant git
repositories from some documents.

I think the file you're looking for is probably:

https://git.openstack.org/cgit/openstack/api-site/tree/www/index.html

Happy hacking!
-- 
Jeremy Stanley




Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-04-06 Thread Stephen Finucane
On Thu, 2018-04-05 at 16:36 -0400, Zane Bitter wrote:
> On 21/03/18 06:49, Stephen Finucane wrote:
> > As noted by Monty in a prior openstack-dev post [2], some projects rely
> > on a pbr extension to the 'build_sphinx' setuptools command which can
> > automatically run the 'sphinx-apidoc' tool before building docs. This
> > is enabled by configuring some settings in the '[pbr]' section of the
> > 'setup.cfg' file [3]. To ensure this continued working, the zuul jobs
> > definitions [4] check for the presence of these settings and build docs
> > using the legacy 'build_sphinx' command if found. **At no point do the
> > jobs call the tox job**. As a result, if you convert a project to use
> > 'sphinx-build' in 'tox.ini' without resolving the autodoc issues, you
> > lose the ability to build docs locally.
> > 
> > I've gone through and proposed a couple of reverts to fix projects
> > we've already broken. However, going forward, there are two things
> > people should do to prevent issues like this popping up.
> > 
> >   * Firstly, you should remove the '[build_sphinx]' and '[pbr]' sections
> > from 'setup.cfg' in any patches that aim to convert a project to use
> > the new PTI. This will ensure the gate catches any potential
> > issues.
> 
> How can we enable warning_is_error in the gate with the new PTI? It's 
> easy enough to add the -W flag in tox.ini for local builds, but as you 
> say the tox job is never called in the gate. In the gate zuul checks for 
> it in the [build_sphinx] section of setup.cfg:
> 
> https://git.openstack.org/cgit/openstack-infra/zuul-jobs/tree/roles/sphinx/library/sphinx_check_warning_is_error.py#n23
> 
> So I think it makes more sense to remove the [pbr] section, but leave 
> the [build_sphinx] section?
> 
> thanks,
> Zane.

I'd be more in favour of changing the zuul job to build with the '-W'
flag. To be honest, there is no good reason to not have this flag
enabled. I'm not sure that will be a popular opinion though as it may
break some projects' builds (correctly, but still).

I'll propose a patch against zuul-jobs and see what happens :)

Stephen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL][PTLs] [Community goal] Toggle the debug option at runtime

2018-04-06 Thread Sławek Kapłoński
Hi,

One more question about the implementation of this goal. Should we take care of (and 
add to the story board [1]) projects like:

openstack/neutron-lbaas
openstack/networking-cisco
openstack/networking-dpm
openstack/networking-infoblox
openstack/networking-l2gw
openstack/networking-lagopus
openstack/neutron-dynamic-routing

which look like they should probably also be changed in some way. Or maybe the list of 
affected projects in [1] is „closed”, and if some project is not there it 
shouldn’t be changed to accomplish this community goal?

[1] https://storyboard.openstack.org/#!/story/2001545

—
Best regards
Slawek Kaplonski
sla...@kaplonski.pl




> Message written by ChangBo Guo on 26.03.2018 at 14:15:
> 
> 
> 2018-03-22 16:12 GMT+08:00 Sławomir Kapłoński :
> Hi,
> 
> I took care of the implementation of [1] in Neutron and I have a couple of 
> questions about this goal.
> 
> 1. Should we only change "restart_method" to mutate, as described in [2]? 
> I already did something like that in [3] - is that what is expected?
> 
>  Yes, let's do only that one thing. We need to test whether it works.
> 
> 2. How can I check that this change is fine and that config options are actually 
> mutable? For now, when I change any config option for any of the neutron agents 
> and send SIGHUP to it, it is in fact "restarted" and the config is reloaded even 
> with this old restart method.
> 
> Good question; we did indeed consider this when we proposed the 
> goal. But it seems difficult to test that automatically in consuming 
> projects like Neutron.
> 
> 3. Should we also add any automatic tests for such a change? Any examples of 
> such tests in other projects maybe?
>  There is no example of such tests for now; we only have some unit tests in 
> oslo.service.
> 
> [1] 
> https://governance.openstack.org/tc/goals/rocky/enable-mutable-configuration.html
> [2] https://docs.openstack.org/oslo.config/latest/reference/mutable.html
> [3] https://review.openstack.org/#/c/554259/
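The mutate behaviour referenced above ([2]) works by reloading mutable options on SIGHUP instead of restarting the service. A minimal hand-rolled sketch of that idea (this stands in for oslo.service's restart_method='mutate'; it is not the actual oslo implementation, and it assumes a POSIX platform):

```python
import os
import signal

CONFIG = {'debug': False}   # currently loaded options
PENDING = {'debug': True}   # e.g. a newly written config file on disk

def mutate_config(signum, frame):
    # On SIGHUP, reload mutable options in place instead of restarting.
    CONFIG.update(PENDING)

signal.signal(signal.SIGHUP, mutate_config)

# Simulate an operator editing the config and sending SIGHUP to the agent:
os.kill(os.getpid(), signal.SIGHUP)
```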
> 
> —
> Best regards
> Slawek Kaplonski
> sla...@kaplonski.pl
> 
> 
> 
> 
> 
> --
> ChangBo Guo(gcb)
> Community Director @EasyStack





[openstack-dev] [openstack-infra][openstack-zuul-jobs]Questions about playbook copy module

2018-04-06 Thread Xinni Ge
Hello there,

I have some questions about the value of parameter `dest` of copy module in
this file.

openstack-zuul-jobs/playbooks/xstatic/check-version.yaml
Line 6:    dest: xstatic_check_version.py

Ansible documents describe `dest` as  "Remote absolute path where the file
should be copied to".
(http://docs.ansible.com/ansible/devel/modules/copy_module.html#id2)

I am not quite familiar with Ansible, but maybe it could be `{{
zuul.executor.log_root
}}/openstack-zuul-jobs/playbooks/xstatic/check-version.yaml` or something
similar?

Actually, I ran into this problem while trying to release a new xstatic package.
The release patch was merged, but the release job failed to execute. I am just
wondering whether or not this could be the reason for the failure.

I am not sure how to debug this, or how to re-launch the release job.
I would very much appreciate it if anybody could kindly help me.
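A quick way to catch this class of problem (a relative `dest` on a copy task, where the Ansible docs call for a remote absolute path) is a small lint pass over the parsed tasks; a sketch, using a hypothetical task list:

```python
import os

def find_relative_dests(tasks):
    """Return names of tasks whose copy 'dest' is not an absolute path.

    The Ansible copy module documents 'dest' as a *remote absolute path*;
    a bare filename like 'xstatic_check_version.py' lands relative to the
    task's working directory, which is rarely what was intended.
    """
    offenders = []
    for task in tasks:
        copy_args = task.get('copy')
        if copy_args and not os.path.isabs(copy_args.get('dest', '')):
            offenders.append(task.get('name', '<unnamed>'))
    return offenders

tasks = [  # parsed form of the kind of task in question
    {'name': 'Copy check script',
     'copy': {'src': 'xstatic_check_version.py',
              'dest': 'xstatic_check_version.py'}},
]
```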


Best Regards,

Xinni Ge
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra][openstack-zuul-jobs]Questions about playbook copy module

2018-04-06 Thread Andreas Jaeger
On 2018-04-06 10:53, Xinni Ge wrote:
> Hello there,
> 
> I have some questions about the value of parameter `dest` of copy module
> in this file.
> 
> openstack-zuul-jobs/playbooks/xstatic/check-version.yaml
> Line 6:        dest: xstatic_check_version.py
> 
> Ansible documents describe `dest` as  "Remote absolute path where the
> file should be copied to".
> (http://docs.ansible.com/ansible/devel/modules/copy_module.html#id2)
>  
> I am not quite familiar with ansible but maybe it could be `{{
> zuul.executor.log_root
> }}/openstack-zuul-jobs/playbooks/xstatic/check-version.yaml` or
> something similar ?
> 
> Actually I ran into the problem trying to release a new xstatic package.
> The release patch was merged but fail to execute the release job. Just
> wondering whether or not it could be the reason of the failure.

Could you share a link to the logs for the job that failed, please?

> I am not sure about how to debug this, or how to re-launch the release job.
> I am very appreciate of it if anybody could kindly help me.

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra][openstack-zuul-jobs]Questions about playbook copy module

2018-04-06 Thread Xinni Ge
Sorry, I forgot to reply to the mailing list.

On Fri, Apr 6, 2018 at 6:18 PM, Xinni Ge  wrote:

> Hi, Andreas.
>
> Thanks for the reply. This is the link to the log I am seeing.
> http://logs.openstack.org/39/39067dbc1dee99d227f8001595633b
> 5cc98cfc53/release/xstatic-check-version/9172297/ara-report/
>
>
> On Fri, Apr 6, 2018 at 5:59 PM, Andreas Jaeger  wrote:
>
>> On 2018-04-06 10:53, Xinni Ge wrote:
>> > Hello there,
>> >
>> > I have some questions about the value of parameter `dest` of copy module
>> > in this file.
>> >
>> > openstack-zuul-jobs/playbooks/xstatic/check-version.yaml
>> > Line 6:    dest: xstatic_check_version.py
>> >
>> > Ansible documents describe `dest` as  "Remote absolute path where the
>> > file should be copied to".
>> > (http://docs.ansible.com/ansible/devel/modules/copy_module.html#id2)
>> >
>> > I am not quite familiar with ansible but maybe it could be `{{
>> > zuul.executor.log_root
>> > }}/openstack-zuul-jobs/playbooks/xstatic/check-version.yaml` or
>> > something similar ?
>> >
>> > Actually I ran into the problem trying to release a new xstatic package.
>> > The release patch was merged but fail to execute the release job. Just
>> > wondering whether or not it could be the reason of the failure.
>>
>> Could you share a link to the logs for the job that failed, please?
>>
>> > I am not sure about how to debug this, or how to re-launch the release
>> job.
>> > I am very appreciate of it if anybody could kindly help me.
>>
>> Andreas
>> --
>>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>>HRB 21284 (AG Nürnberg)
>> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>>
>>
>
>
> --
> 葛馨霓 Xinni Ge
>



-- 
葛馨霓 Xinni Ge
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra][openstack-zuul-jobs]Questions about playbook copy module

2018-04-06 Thread Andreas Jaeger
On 2018-04-06 11:20, Xinni Ge wrote:
> Sorry, I forgot to reply to the mailing list.
> 
> On Fri, Apr 6, 2018 at 6:18 PM, Xinni Ge  > wrote:
> 
> Hi, Andreas.
> 
> Thanks for the reply. This is the link to the log I am seeing.
> 
> http://logs.openstack.org/39/39067dbc1dee99d227f8001595633b5cc98cfc53/release/xstatic-check-version/9172297/ara-report/
> 
> 
> 

Thanks, your analysis is correct; it seems we seldom release xstatic packages ;(

fix is at https://review.openstack.org/559300

Once that is merged, an infra-root can re-run the release job - please
ask on the #openstack-infra IRC channel.

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Stein" release

2018-04-06 Thread Kashyap Chamarthy
On Thu, Apr 05, 2018 at 10:32:13PM +0200, Thomas Goirand wrote:

Hey Zigo, thanks for the detailed response; a couple of comments below.

[...]

> backport of libvirt/QEMU/libguestfs more in details
> ---
> 
> I already attempted the backports from Debian Buster to Stretch.
>
> All of the 3 components (libvirt, qemu & libguestfs) could be built
> without extra dependency, which is a very good thing.
> 
> - libvirt 4.1.0 compiled without issue, though the dh_install phase
> failed with this error:
> 
> dh_install: Cannot find (any matches for) "/usr/lib/*/wireshark/" (tried
> in "." and "debian/tmp")
> dh_install: libvirt-wireshark missing files: /usr/lib/*/wireshark/
> dh_install: missing files, aborting

That seems like a problem in the Debian packaging system, not in
libvirt.  I double-checked with the upstream folks, and the install
rules for the Wireshark plugin don't have /*/ in there.

> - qemu 2.11 built perfectly with zero change.
> 
> - libguestfs 1.36.13 only needed to have fdisk replaced by util-linux as
> build-depends (fdisk is now a separate package in Buster).

Great.

Note: You don't even have to build the versions from 'Buster', which are
quite new.  Just the slightly more conservative libvirt 3.2.0 and QEMU
2.9.0 -- only if it's possible.

[...]

> Conclusion:
> ---
> 
> If you don't absolutely need new features from libvirt 3.2.0 and 3.0.0
> is fine, please choose 3.0.0 as minimum.
> 
> If you don't absolutely need new features from qemu 2.9.0 and 2.8.0 is
> fine, please choose 2.8.0 as minimum.
> 
> If you don't absolutely need new features from libguestfs 1.36 and 1.34
> is fine, please choose 1.34 as minimum.
> 
> If you do need these new features, I'll do my best adapt. :)

Sure, can use the 3.0.0 (& QEMU 2.8.0), instead of 3.2.0, as we don't
want to "penalize" (that was never the intention) distros with slightly
older versions.

That said ... I just spent some time comparing the release notes of libvirt 3.0.0
and libvirt 3.2.0 [1][2].  By using libvirt 3.2.0 and QEMU 2.9.0, Debian users
would be spared a lot of critical bugs (see the full list in [3]) in the
CPU comparison area.

[1] https://www.redhat.com/archives/libvirt-announce/2017-April/msg0.html
-- Release of libvirt-3.2.0
[2] https://www.redhat.com/archives/libvirt-announce/2017-January/msg3.html
--  Release of libvirt-3.0.0
[3] https://www.redhat.com/archives/libvir-list/2017-February/msg01295.html


[...]

-- 
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALL][PTLs] [Community goal] Toggle the debug option at runtime

2018-04-06 Thread Sławek Kapłoński
Hi,

Thanks, Akihiro, for the help. I added a „neutron-dynamic-routing” task to this story 
and I will push a patch for it soon.
There are still so many things that I need to learn about OpenStack and Neutron 
:)

—
Best regards
Slawek Kaplonski
sla...@kaplonski.pl




> Message written by Akihiro Motoki on 06.04.2018 at 11:34:
> 
> 
> Hi Slawek,
> 
> 2018-04-06 17:38 GMT+09:00 Sławek Kapłoński :
> Hi,
> 
> One more question about the implementation of this goal. Should we take care of (and 
> add to the story board [1]) projects like:
> 
> In my understanding, tasks in the storyboard story are prepared per project 
> team listed in the governance.
> IMHO, repositories which belong to a project team should be handled as a 
> single task.
> 
> The situations vary across repositories.
> 
> 
> openstack/neutron-lbaas
> 
> This should be covered by octavia team.
> 
> openstack/networking-cisco
> openstack/networking-dpm
> openstack/networking-infoblox
> openstack/networking-l2gw
> openstack/networking-lagopus
> 
> The above repos are not official repos.
> Maintainers of each repo can follow the community goal, but there is no need 
> for them to be tracked by the neutron team.
> 
> openstack/neutron-dynamic-routing
> 
> This repo is part of the neutron team. We, the neutron team, need to cover 
> this.
> 
> FYI: The list of official repositories covered by the neutron team is available here.
> https://governance.openstack.org/tc/reference/projects/neutron.html
> 
> Thanks,
> Akihiro
> 
> 
> which look like they should probably also be changed in some way. Or maybe the list 
> of affected projects in [1] is „closed”, and if some project is not there it 
> shouldn’t be changed to accomplish this community goal?
> 
> [1] https://storyboard.openstack.org/#!/story/2001545
> 
> —
> Best regards
> Slawek Kaplonski
> sla...@kaplonski.pl
> 
> 
> 
> 
> > Message written by ChangBo Guo on 26.03.2018 at 14:15:
> >
> >
> > 2018-03-22 16:12 GMT+08:00 Sławomir Kapłoński :
> > Hi,
> >
> > I took care of the implementation of [1] in Neutron and I have a couple of 
> > questions about this goal.
> >
> > 1. Should we only change "restart_method" to mutate, as described in [2]? 
> > I already did something like that in [3] - is that what is expected?
> >
> >  Yes, let's do only that one thing. We need to test whether it works.
> >
> > 2. How can I check that this change is fine and that config options are actually 
> > mutable? For now, when I change any config option for any of the neutron agents 
> > and send SIGHUP to it, it is in fact "restarted" and the config is reloaded even 
> > with this old restart method.
> >
> > Good question; we did indeed consider this when we proposed the 
> > goal. But it seems difficult to test that automatically in consuming 
> > projects like Neutron.
> >
> > 3. Should we also add any automatic tests for such a change? Any examples of 
> > such tests in other projects maybe?
> >  There is no example of such tests for now; we only have some unit tests in 
> > oslo.service.
> >
> > [1] 
> > https://governance.openstack.org/tc/goals/rocky/enable-mutable-configuration.html
> > [2] https://docs.openstack.org/oslo.config/latest/reference/mutable.html
> > [3] https://review.openstack.org/#/c/554259/
> >
> > —
> > Best regards
> > Slawek Kaplonski
> > sla...@kaplonski.pl
> >
> >
> >
> >
> >
> > --
> > ChangBo Guo(gcb)
> > Community Director @EasyStack
> 
> 
> 
> 





Re: [openstack-dev] [infra][qa][requirements] Pip 10 is on the way

2018-04-06 Thread Monty Taylor

On 04/05/2018 07:39 PM, Paul Belanger wrote:

On Thu, Apr 05, 2018 at 01:27:13PM -0700, Clark Boylan wrote:

On Mon, Apr 2, 2018, at 9:13 AM, Clark Boylan wrote:

On Mon, Apr 2, 2018, at 8:06 AM, Matthew Thode wrote:

On 18-03-31 15:00:27, Jeremy Stanley wrote:

According to a notice[1] posted to the pypa-announce and
distutils-sig mailing lists, pip 10.0.0.b1 is on PyPI now and 10.0.0
is expected to be released in two weeks (over the April 14/15
weekend). We know it's at least going to start breaking[2] DevStack
and we need to come up with a plan for addressing that, but we don't
know how much more widespread the problem might end up being so
encourage everyone to try it out now where they can.



I'd like to suggest locking down pip/setuptools/wheel like openstack
ansible is doing in
https://github.com/openstack/openstack-ansible/blob/master/global-requirement-pins.txt

We could maintain it as a separate constraints file (or infra could
maintain it, doesn't matter).  The file would only be used for the
initial get-pip install.
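Such a pins file is just 'name==version' lines; for illustration, a tiny parser (the file contents below are made up, not the actual OSA pins):

```python
def parse_pins(text):
    """Parse 'name==version' pin lines; ignore comments and blanks."""
    pins = {}
    for line in text.splitlines():
        line = line.split('#')[0].strip()
        if not line:
            continue
        name, _, version = line.partition('==')
        pins[name.strip().lower()] = version.strip()
    return pins

pins = parse_pins("""
# Bootstrap tool pins (illustrative versions, not the real OSA file)
pip==9.0.3
setuptools==39.0.1
wheel==0.30.0
""")
```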


In the past we've done our best to avoid pinning these tools because 1)
we've told people they should use latest for openstack to work and 2) it
is really difficult to actually control what versions of these tools end
up on your systems if not latest.

I would strongly push towards addressing the distutils package deletion
problem that we've run into with pip10 instead. One of the approaches
thrown out that pabelanger is working on is to use a common virtualenv
for devstack and avoid the system package conflict entirely.


I was mistaken and pabelanger was working to get devstack's USE_VENV option working which 
installs each service (if the service supports it) into its own virtualenv. There are two 
big drawbacks to this. The first is that we would lose co-installation of all the 
openstack services which is one way we ensure they all work together at the end of the 
day. The second is that not all services in "base" devstack support USE_VENV 
and I doubt many plugins do either (neutron apparently doesn't?).
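For reference, the per-service virtualenv idea behind USE_VENV can be sketched in a few lines with the stdlib (paths and the service name here are hypothetical):

```python
import tempfile
import venv
from pathlib import Path

def make_service_venv(base_dir, service):
    """Create an isolated virtualenv for one service (what USE_VENV does
    per service), trading co-installation for conflict-free installs."""
    env_dir = Path(base_dir) / service
    # with_pip=False keeps the sketch fast; a real setup would install pip.
    venv.EnvBuilder(with_pip=False).create(env_dir)
    return env_dir

base = tempfile.mkdtemp()
env = make_service_venv(base, 'neutron')
```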


Yeah, I agree your approach is the better one; I just wanted to toggle what was
supported by default. However, it is pretty broken today.  I can't imagine
anybody actually using it; if so, they must be carrying downstream patches.

If we think USE_VENV is a valid use case for per-project venvs, I suggest we
continue to fix it and update neutron to support it.  Otherwise, we should
maybe rip it out and replace it.


I'd vote for ripping it out.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-04-06 Thread Matthew Booth
On 6 April 2018 at 09:31, Gorka Eguileor  wrote:
> On 05/04, Matt Riedemann wrote:
>> On 4/5/2018 3:15 AM, Gorka Eguileor wrote:
>> > But just to be clear, Nova will have to initialize the connection with
>> > the re-imaged volume and attach it again to the node, as in all cases
>> > (except when defaulting to downloading the image and dd-ing it to the
>> > volume) the result will be a new volume in the backend.
>>
>> Yeah I think I pointed this out earlier in this thread on what I thought the
>> steps would be on the nova side with respect to creating a new empty
>> attachment to keep the volume 'reserved' while we delete the old attachment,
>> re-image the volume, and then update the volume attachment for the new
>> connection. I think that would be similar to how shelve and unshelve works
>> in nova.
>>
>> Would this really require a swap volume call from Cinder? I'd hope not since
>> swap volume in itself is a pretty gross operation on the nova side.
>>
>> --
>>
>> Thanks,
>>
>> Matt
>>
>
> Hi Matt,
>
> Yes, it will require a volume swap, with the worst case scenario
> exception where we dd the image into the volume.

I think you're talking at cross purposes here: this won't require a
swap volume. Apart from anything else, swap volume only works on an
attached volume, and as previously discussed Nova will detach and
re-attach.

Gorka, the Nova API Matt is referring to is called volume update
externally. It's the operation required for live migrating an attached
volume between backends. It's called swap volume internally in Nova.

Matt
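The reserve → detach → re-image → re-attach flow described earlier in the thread could be sketched like this; every client method name below is a hypothetical stand-in, not a real novaclient/cinderclient API:

```python
def reimage_attached_volume(client, server_id, volume_id, image_id, connector):
    """Sketch of the nova-side flow: keep the volume reserved with a blank
    attachment while it is re-imaged, then complete the new attachment.
    All client method names are hypothetical stand-ins."""
    new_attachment = client.attachment_create(volume_id, server_id)  # reserve
    client.attachment_delete(client.current_attachment(volume_id, server_id))
    client.volume_reimage(volume_id, image_id)
    client.attachment_update(new_attachment, connector)  # new connection info
    return new_attachment

class FakeClient:
    """Records the call order so the sequencing can be checked."""
    def __init__(self):
        self.calls = []
    def attachment_create(self, vol, srv):
        self.calls.append('create')
        return 'att-new'
    def current_attachment(self, vol, srv):
        return 'att-old'
    def attachment_delete(self, att):
        self.calls.append('delete:%s' % att)
    def volume_reimage(self, vol, img):
        self.calls.append('reimage')
    def attachment_update(self, att, conn):
        self.calls.append('update:%s' % att)

client = FakeClient()
reimage_attached_volume(client, 'srv1', 'vol1', 'img1', {'host': 'cn1'})
```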

>
> In the same way that anyone would expect a re-imaging preserving the
> volume id, one would also expect it to behave like creating a new volume
> from the same image: be as fast and take up as much space on the
> backend.
>
> And to do so we have to use existing optimized mechanisms that will only
> work when creating a new volume.
>
> The alternative would be to have the worst case scenario as the default
> (attach and dd the image) and make *ALL* Cinder drivers implement the
> optimized mechanism where they can efficiently re-image a volume.  I
> can't speak for the Cinder team, but I for one would oppose this
> alternative.
>
> Cheers,
> Gorka.
>



-- 
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG

Phone: +442070094448 (UK)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects

2018-04-06 Thread Doug Hellmann
Excerpts from super user's message of 2018-04-06 17:10:32 +0900:
> I hope you fix this soon; there are many patches blocked by the 'match the
> minimum version' problem, which causes requirements-check to fail.

The problem is with *those patches* and not the check.

I've been trying to update some, but my time has been limited this week
for personal reasons. I encourage project teams to run the script I
provided or edit their lower-constraints.txt file by hand to fix the
issues.
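The kind of fix-up involved can be hand-rolled as well; a sketch of the idea (not Doug's actual script) that copies requirements lower bounds into lower-constraints pins:

```python
import re

def lower_bounds(requirements_text):
    """Extract 'pkg>=X' lower bounds from a requirements file."""
    bounds = {}
    for line in requirements_text.splitlines():
        line = line.split('#')[0].strip()
        m = re.match(r'([A-Za-z0-9._-]+).*?>=\s*([0-9][^,;\s]*)', line)
        if m:
            bounds[m.group(1).lower()] = m.group(2)
    return bounds

def fix_lower_constraints(constraints_text, bounds):
    """Rewrite 'pkg==Y' pins so they match the requirements lower bounds."""
    out = []
    for line in constraints_text.splitlines():
        m = re.match(r'([A-Za-z0-9._-]+)==(.+)', line.strip())
        if m and m.group(1).lower() in bounds:
            out.append('%s==%s' % (m.group(1), bounds[m.group(1).lower()]))
        else:
            out.append(line)
    return '\n'.join(out)

reqs = "oslo.config>=5.2.0 # Apache-2.0\npbr!=2.1.0,>=2.0.0\n"
fixed = fix_lower_constraints("oslo.config==5.1.0\npbr==2.0.0\n",
                              lower_bounds(reqs))
```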

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects

2018-04-06 Thread super user
I hope you fix this soon; there are many patches blocked by the 'match the
minimum version' problem, which causes requirements-check to fail.

On Wed, Apr 4, 2018 at 10:58 PM, Doug Hellmann 
wrote:

> Excerpts from Doug Hellmann's message of 2018-03-28 18:53:03 -0400:
> >
> > Because we had some communication issues and did a few steps out
> > of order, when this patch lands projects that have approved
> > bot-proposed requirements updates may find that their requirements
> > and lower-constraints files no longer match, which may lead to job
> > failures. It should be easy enough to fix the problems by making
> > the values in the constraints files match the values in the
> > requirements files (by editing either set of files, depending on
> > what is appropriate). I apologize for any inconvenience this causes.
>
> In part because of this, and in part because of some issues calculating
> the initial set of lower-constraints, we have several projects where
> their lower-constraints don't match the lower bounds in the requirements
> file(s). Now that the check job has been updated with the new rules,
> this is preventing us from landing the patches to add the
> lower-constraints test job (so those rules are working!).
>
> I've prepared a script to help fix up the lower-constraints.txt
> based on values in requirements.txt and test-requirements.txt.
> That's not everything, but it should make it easier to fix the rest.
>
> See https://review.openstack.org/#/c/558610/ for the script. I'll work
> on those pep8 errors later today so we can hopefully land it soon, but
> in the mean time you'll need to check out that commit and follow the
> instructions for setting up a virtualenv to run the script.
>
> Doug
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ci] use of tags in launchpad bugs

2018-04-06 Thread Jiří Stránský

On 5.4.2018 21:04, Alex Schultz wrote:

On Thu, Apr 5, 2018 at 12:55 PM, Wesley Hayutin  wrote:

FYI...

This is news to me, so thanks to Emilien for pointing it out [1].
There are official tags for tripleo launchpad bugs.  Personally, I like what
I've seen recently with some extra tags, as they could be helpful in finding
the history of particular issues.
So, hypothetically, would it be "wrong" to create an official tag for each
featureset config number upstream? I ask because that would add a lot of
tags, but it also serves as a good test case for what is good/bad use of tags.



We list the official tags over in the specs repo [0].  That being said, as
we investigate switching over to storyboard, we'll probably want to
revisit tags, as they will have to be used more to replace some of the
functionality we had with launchpad (e.g. milestones).  You could
always add the tags without their being official tags. I'm not sure I
would really want all the featuresets as tags.  I'd rather see us
actually figure out which component is actually failing than rely on
a featureset (and the Rosetta stone for decoding featuresets to
functionality [1]).


We could also use both alongside each other. Component-based tags better relate to 
the actual root cause of the bug, while featureset-based tags are useful 
in relation to CI.


E.g. "I see fs037 failing, I wonder if anyone already reported a bug for 
it" -- if the reporter tagged the bug, it would be really easy to figure 
out the answer.


This might also again bring up the question of better job names to allow 
easier mapping to featuresets. IMO:


tripleo-ci-centos-7-containers-multinode  -- not great
tripleo-ci-centos-7-featureset010  -- not great
tripleo-ci-centos-7-containers-mn-fs010  -- *happy face*

Jirka




Thanks,
-Alex


[0] 
http://git.openstack.org/cgit/openstack/tripleo-specs/tree/specs/policy/bug-tagging.rst#n30
[1] 
https://git.openstack.org/cgit/openstack/tripleo-quickstart/tree/doc/source/feature-configuration.rst#n21

Thanks

[1] https://bugs.launchpad.net/tripleo/+manage-official-tags










Re: [openstack-dev] [ALL][PTLs] [Community goal] Toggle the debug option at runtime

2018-04-06 Thread Akihiro Motoki
Hi Slawek,

2018-04-06 17:38 GMT+09:00 Sławek Kapłoński :

> Hi,
>
> One more question about the implementation of this goal. Should we take
> care of (and add to the story board [1]) projects like:
>

In my understanding, tasks in the storyboard story are prepared per project
team listed in the governance.
IMHO, repositories which belong to a project team should be handled as a
single task.

The situations vary across repositories.


> openstack/neutron-lbaas
>

This should be covered by octavia team.


> openstack/networking-cisco
> openstack/networking-dpm
> openstack/networking-infoblox
> openstack/networking-l2gw
> openstack/networking-lagopus
>

The above repos are not official repos.
Maintainers of each repo can follow the community goal, but there is no
need for them to be tracked under the neutron team.


> openstack/neutron-dynamic-routing
>

This repo is part of the neutron team. We, the neutron team, need to cover
this.

FYI: the list of official repositories covered by the neutron team is
available here.
https://governance.openstack.org/tc/reference/projects/neutron.html

Thanks,
Akihiro


>
> These look like they should probably also be changed in some way. Or maybe
> the list of affected projects in [1] is „closed” and, if some project is not
> there, it shouldn't be changed to accomplish this community goal?
>

> [1] https://storyboard.openstack.org/#!/story/2001545
>
> —
> Best regards
> Slawek Kaplonski
> sla...@kaplonski.pl
>
>
>
>
> > Message written by ChangBo Guo on
> 26.03.2018 at 14:15:
> >
> >
> > 2018-03-22 16:12 GMT+08:00 Sławomir Kapłoński :
> > Hi,
> >
> > I took care of implementation of [1] in Neutron and I have couple
> questions to about this goal.
> >
> > 1. Should we only change "restart_method" to mutate, as described in
> [2]? I already did something like that in [3] - is that what is expected?
> >
> >  Yes, let's do only that one thing. We need to test whether it works.
> >
> > 2. How can I check that this change is fine and the config options are
> mutable? For now, when I change any config option for any of the neutron
> agents and send SIGHUP to it, it is in fact "restarted" and the config is
> reloaded even with the old restart method.
> >
> > Good question; we indeed considered this when we proposed the
> goal. But it seems difficult to test that automatically in consuming
> projects like Neutron.
> >
> > 3. Should we add any automatic tests for such a change also? Any examples
> of such tests in other projects maybe?
> >  There are no example tests now; we only have some unit tests in
> oslo.service.
> >
> > [1] https://governance.openstack.org/tc/goals/rocky/enable-
> mutable-configuration.html
> > [2] https://docs.openstack.org/oslo.config/latest/reference/mutable.html
> > [3] https://review.openstack.org/#/c/554259/
> >
> > —
> > Best regards
> > Slawek Kaplonski
> > sla...@kaplonski.pl
> >
> >
> > 
> >
> >
> >
> > --
> > ChangBo Guo(gcb)
> > Community Director @EasyStack
> > 
>
>
>
>
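For readers unfamiliar with the mechanism discussed in this thread: the "mutate" restart method re-reads configuration on SIGHUP instead of restarting the service. A minimal stdlib-only sketch of that signal-driven reload idea follows; it illustrates the concept only and is not oslo.service's actual implementation.

```python
import os
import signal

CONFIG = {"debug": False}

def mutate_config(signum, frame):
    # A real service would re-read its configuration file here and apply
    # only the options registered as mutable; we just flip one option to
    # show the effect.
    CONFIG["debug"] = True

# Register the handler, then simulate an operator sending SIGHUP.
signal.signal(signal.SIGHUP, mutate_config)
os.kill(os.getpid(), signal.SIGHUP)
# At this point CONFIG has been updated without restarting the process.
```

The open question in the thread, how to verify this automatically in a consuming project, amounts to asserting on state like CONFIG after delivering the signal.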


Re: [openstack-dev] Asking for ask.openstack.org

2018-04-06 Thread Thierry Carrez
Zane Bitter wrote:
> On 05/04/18 00:12, Ian Wienand wrote:
>> On 04/05/2018 10:23 AM, Zane Bitter wrote:
>>> On 04/04/18 17:26, Jimmy McArthur wrote:
>>> Here's the thing: email alerts. They're broken.
>>
>> This is the type of thing we can fix if we know about it ... I will
>> contact you off-list because the last email to what I presume is you
>> went to an address that isn't what you've sent from here, but it was
>> accepted by the remote end.
> 
> Yeah, my mails get proxied through a fedora project address. I am
> getting them now though (since the SW update in January 2017 - and even
> before that I did get notifications if somebody @'d me). The issue is
> the content is not filtered by subscribed tags according to the
> preferences I have set, so they're useless for keeping up with my areas
> of interest.
> 
> It's not just a mail delivery problem, and I guarantee it's not just me.
> It's a bug somewhere in StackExchange itself.

Yes I can confirm email alerts are broken. I currently receive a weekly
digest about "ceilometer", "vip", "api", "nova", "openstack" tags while
I'm subscribed to "release" and "rootwrap". It's as if I received
someone else's email alerts...

(The software is not StackExchange; it's AskBot.)

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [openstack-infra][openstack-zuul-jobs]Questions about playbook copy module

2018-04-06 Thread Xinni Ge
Thank you very much! I will follow up via irc.

On Fri, Apr 6, 2018 at 18:34 Andreas Jaeger :

> On 2018-04-06 11:20, Xinni Ge wrote:
> > Sorry, forgot to reply to the mail list.
> >
> > On Fri, Apr 6, 2018 at 6:18 PM, Xinni Ge  > > wrote:
> >
> > Hi, Andreas.
> >
> > Thanks for reply. This is the link of log I am seeing.
> >
> http://logs.openstack.org/39/39067dbc1dee99d227f8001595633b5cc98cfc53/release/xstatic-check-version/9172297/ara-report/
> > <
> http://logs.openstack.org/39/39067dbc1dee99d227f8001595633b5cc98cfc53/release/xstatic-check-version/9172297/ara-report/
> >
> >
>
> thanks, your analysis is correct; it seems we seldom release xstatic packages
> ;(
>
> fix is at https://review.openstack.org/559300
>
> Once that is merged, an infra-root can rerun the release job - please
> ask on #openstack-infra IRC channel,
>
> Andreas
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
> --
Best Regards,
Xinni Ge


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-04-06 Thread Gorka Eguileor
On 05/04, Matt Riedemann wrote:
> On 4/5/2018 3:15 AM, Gorka Eguileor wrote:
> > But just to be clear, Nova will have to initialize the connection with
> > the re-imaged volume and attach it again to the node, as in all cases
> > (except when defaulting to downloading the image and dd-ing it to the
> > volume) the result will be a new volume in the backend.
>
> Yeah I think I pointed this out earlier in this thread on what I thought the
> steps would be on the nova side with respect to creating a new empty
> attachment to keep the volume 'reserved' while we delete the old attachment,
> re-image the volume, and then update the volume attachment for the new
> connection. I think that would be similar to how shelve and unshelve works
> in nova.
>
> Would this really require a swap volume call from Cinder? I'd hope not since
> swap volume in itself is a pretty gross operation on the nova side.
>
> --
>
> Thanks,
>
> Matt
>

Hi Matt,

Yes, it will require a volume swap, with the worst case scenario
exception where we dd the image into the volume.

In the same way that anyone would expect re-imaging to preserve the
volume id, one would also expect it to behave like creating a new volume
from the same image: be just as fast and take up just as much space on
the backend.

And to do so we have to use existing optimized mechanisms that will only
work when creating a new volume.

The alternative would be to have the worst case scenario as the default
(attach and dd the image) and make *ALL* Cinder drivers implement the
optimized mechanism where they can efficiently re-image a volume.  I
can't talk for the Cinder team, but I for one would oppose this
alternative.

Cheers,
Gorka.
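The nova-side sequence Matt sketches earlier in the thread (reserve the volume with a blank attachment, drop the old attachment, re-image, then update the attachment with the new connection) could be outlined as below. All names here are illustrative stand-ins, not the real nova/cinder client APIs; the fake client only records calls so the ordering is visible.

```python
class FakeCinder:
    """Records calls so the ordering of the 'attachment dance' is visible."""

    def __init__(self):
        self.calls = []

    def attachment_create(self, volume_id):
        self.calls.append(("create", volume_id))
        return "reserve-attachment"

    def attachment_delete(self, attachment_id):
        self.calls.append(("delete", attachment_id))

    def reimage(self, volume_id, image_id):
        self.calls.append(("reimage", volume_id, image_id))

    def attachment_update(self, attachment_id, connector):
        self.calls.append(("update", attachment_id))


def reimage_volume(cinder, volume_id, old_attachment, image_id, connector):
    # 1. A blank attachment keeps the volume 'reserved' during the operation.
    reserve = cinder.attachment_create(volume_id)
    # 2. Drop the old attachment/connection.
    cinder.attachment_delete(old_attachment)
    # 3. Let the backend re-image, possibly by swapping in a new volume.
    cinder.reimage(volume_id, image_id)
    # 4. Update the reserved attachment with the new connection info.
    cinder.attachment_update(reserve, connector)
    return reserve
```

This mirrors the shelve/unshelve-style flow described above, with the swap-volume step hidden behind the backend's re-image call.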



[openstack-dev] [tc] Technical Committee Status update, April 6th

2018-04-06 Thread Thierry Carrez
Hi!

This is the weekly summary of Technical Committee initiatives. You can
find the full list of currently-considered changes at:
https://wiki.openstack.org/wiki/Technical_Committee_Tracker

We also track TC objectives for the cycle using StoryBoard at:
https://storyboard.openstack.org/#!/project/923


== Recently-approved changes ==

* Removed repository: puppet-ganesha


== Voting in progress ==

We are still missing a couple of votes on the proposal to set the
expectation early on that official projects will have to drop direct
tagging (or branching) rights in their Gerrit ACLs once they are made
official, as those actions will be handled by the Release Management
team through the openstack/releases repository. This will likely be
approved early next week, so please post your concerns on the review if
you have any:

https://review.openstack.org/557737


== Under discussion ==

We got lots of replies and comments on the thread and the review
proposing the split of the kolla-kubernetes deliverable out of the Kolla
team. Discussion has now moved to reviewing the deliverables currently
regrouped under the Kolla team, and considering whether the current
grouping is a feature or a bug. If you have an opinion on that, please
chime in on the review or the ML thread:

https://review.openstack.org/#/c/552531/
http://lists.openstack.org/pipermail/openstack-dev/2018-March/128822.html

Discussion is also still ongoing on the Adjutant project team
addition. The main concerns raised are: (1) presenting Adjutant as an
"API factory" around any business logic raises interoperability fears.
We don't really want an official "OpenStack service" with an open-ended
API or set of APIs; and (2) there is concern that some of the "core
plugins" could actually be implemented in existing projects, and that
Adjutant is working around the pain of landing those features in the
(set of) projects where they belong by creating a whole-new project to
land them faster. You can jump in the discussion here:

https://review.openstack.org/#/c/553643/

The last open discussion is around a proposed tag to track which
deliverables implemented a lower dependency bounds check voting test
job. After discussion at the last TC office hour, it might be abandoned
in favor of making it a community goal for the Stein cycle and then a
general expectation for projects using global requirements. Please see:

https://review.openstack.org/557501


== TC member actions/focus/discussions for the coming week(s) ==

For the coming week we'll confirm which topics we want to propose for
the Forum in Vancouver, and file them on forumtopics.openstack.org
before the April 15 deadline. There is still time to propose some at:

https://etherpad.openstack.org/p/YVR-forum-TC-sessions

The election season will start on April 10 with nominations for
candidates for the 7 open seats.

I also expect debate to continue around the three proposals under
discussion.


== Office hours ==

To be more inclusive of all timezones and more mindful of people for
whom English is not the primary language, the Technical Committee
dropped its dependency on weekly meetings. So that you can still get
hold of TC members on IRC, we instituted a series of office hours on
#openstack-tc:

* 09:00 UTC on Tuesdays
* 01:00 UTC on Wednesdays
* 15:00 UTC on Thursdays

Feel free to add your own office hour conversation starter at:
https://etherpad.openstack.org/p/tc-office-hour-conversation-starters

Cheers,

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] Proposal: The OpenStack Client Library Guide

2018-04-06 Thread Dmitry Tantsur

Hi Adrian,

Thanks for starting this discussion. I'm adding openstack-sigs ML, please keep 
it in the loop. We in API SIG are interested in providing guidance on not only 
writing OpenStack APIs, but also consuming them. For example, we have merged a 
guideline on consuming API versions: 
http://specs.openstack.org/openstack/api-wg/guidelines/sdk-exposing-microversions.html


More inline.

On 04/06/2018 05:55 AM, Adrian Turjak wrote:

Hello fellow OpenStackers,

As some of you have probably heard me rant, I've been thinking about how
to better solve the problem with various tools that support OpenStack or
are meant to be OpenStack clients/tools which don't always work as
expected by those of us directly in the community.

Mostly around things like auth and variable name conventions: areas where
there should really be consistency and overlap.

The example that most recently triggered this discussion was how
OpenStackClient (and os-client-config) supports certain elements of
clouds.yaml and ENVVAR config, while Terraform supports it differently.
You'd often run both on the CLI, often in the same terminal, so
it is always weird when certain auth and scoping values don't work the
same. This is being worked on, but little problems like this are an
ongoing problem.

The proposal, write an authoritative guide/spec on the basics of
implementing a client library or tool for any given language that talks
to OpenStack.

Elements we ought to cover:
- How all the various auth methods in Keystone work, how the whole authn
and authz process works with Keystone, and how to actually use it to do
what you want.


Yes please!


- What common client configuration options exist and how they work
(common variable names, ENVVARs, clouds.yaml), with something like
common ENVVARs documented and a list maintained so there is one
definitive source for what to expect people to be using.


Even bigger YES


- Per project guides on how the API might act that helps facilitate
starting to write code against it beyond just the API reference, and
examples of what to expect. Not exactly a duplicate of the API ref, but
more a 'common pitfalls and confusing elements to be ware of' section
that builds on the API ref of each project.


Oh yeah, esp. what to be mindful of when writing an SDK in a statically typed 
language (I had quite some fun with rust-openstack, I guess Terraform had 
similar issues).




There are likely other things we want to include, and we need to work
out what those are, but ideally this should be a new documentation-focused
project which will result in a useful guide on what someone needs in order
to take any programming language and write a library that works as we
expect it should against OpenStack.
existing libraries ensure they themselves do fully understand and use
the OpenStack auth and service APIs as expected. It should also help to
ensure programmers working across multiple languages and systems have a
much easier time interacting with all the various libraries they might
touch.

A lot of this knowledge exists, but it's hard to parse and not well
documented. We have reference implementations of it all in the likes of
OpenStackClient, Keystoneauth1, and the OpenStackSDK itself (which
os-client-config is now a part of), but what we need is a language
agnostic guide rather than the assumption that people will read the code
of our official projects. Even the API ref itself isn't entirely helpful
since in a lot of cases it only covers the most basic of examples for
each API.

There appears to be interest in something like this, so lets start with
a mailing list discussion, and potentially turn it into something more
official if this leads anywhere useful. :)


Count me in :)



Cheers,
Adrian
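As a concrete illustration of the kind of behaviour such a guide could pin down, here is a hypothetical resolver showing the precedence order most tools are expected to share: explicit arguments win over OS_* environment variables, which win over clouds.yaml values. The function and key names are invented for the example; real tooling (os-client-config / openstacksdk) handles far more options.

```python
import os

def resolve_auth(explicit=None, cloud_config=None):
    """Resolve auth settings: explicit args > OS_* env vars > clouds.yaml."""
    explicit = explicit or {}
    cloud_config = cloud_config or {}
    env = {
        "auth_url": os.environ.get("OS_AUTH_URL"),
        "username": os.environ.get("OS_USERNAME"),
        "password": os.environ.get("OS_PASSWORD"),
        "project_name": os.environ.get("OS_PROJECT_NAME"),
    }
    resolved = {}
    for key in env:
        # First non-empty source wins, in precedence order.
        resolved[key] = explicit.get(key) or env[key] or cloud_config.get(key)
    return resolved
```

If every client and tool agreed on this precedence (and on the variable names), the OpenStackClient-vs-Terraform mismatch described above would not arise.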








Re: [openstack-dev] [all][requirements] uncapping eventlet

2018-04-06 Thread Jens Harbott
2018-04-05 19:26 GMT+00:00 Matthew Thode :
> On 18-04-05 20:11:04, Graham Hayes wrote:
>> On 05/04/18 16:47, Matthew Thode wrote:
>> > eventlet-0.22.1 has been out for a while now, we should try and use it.
>> > Going to be fun times.
>> >
>> > I have a review projects can depend upon if they wish to test.
>> > https://review.openstack.org/533021
>>
>> It looks like we may have an issue with oslo.service -
>> https://review.openstack.org/#/c/559144/ is failing gates.
>>
>> Also - what is the dance for this to get merged? It doesn't look like we
>> can merge this while oslo.service has the old requirement restrictions.
>>
>
> The dance is as follows.
>
> 0. provide review for projects to test new eventlet version
>projects using eventlet should make backwards compat code changes at
>this time.

But this step is currently failing. Keystone doesn't even start when
eventlet-0.22.1 is installed, because loading oslo.service fails with
its pkg definition still requiring the capped eventlet:

http://logs.openstack.org/21/533021/4/check/legacy-requirements-integration-dsvm/7f7c3a8/logs/screen-keystone.txt.gz#_Apr_05_16_11_27_748482

So it looks like we need to have an uncapped release of oslo.service
before we can proceed here.



Re: [openstack-dev] [Vitrage] New proposal for analysis.

2018-04-06 Thread MinWookKim
Hello Ifat,

 

If possible, could I write a blueprint based on what we discussed
(architecture, specs)?

 

After checking the blueprint, it would be better to proceed with specific
updates on the various issues.

What do you think?

 

Thanks.

 

Best regards,

Minwook.

From: MinWookKim [mailto:delightw...@ssu.ac.kr] 
Sent: Thursday, April 5, 2018 10:53 AM
To: 'OpenStack Development Mailing List (not for usage questions)'
Subject: RE: [openstack-dev] [Vitrage] New proposal for analysis.

 

Hello Ifat,

 

Thanks for the good comments.

 

It was very helpful.

 

As you said, I tested for std.ssh, and I was able to get much better
results.

 

I am confident that this is what I want.

 

We can use std.ssh to provide convenience to users with a much more
efficient way to configure shell scripts / monitoring agent automation(for
Zabbix history,etc) / other commands.

 

In addition, std_actions.py contained a number of features that could be
used for this proposal (such as HTTP).

 

So if we actively use the actions in std_actions.py, we might
be able to construct neat code without the duplicated functionality that you
were worried about.

 

It has been a great help.

 

In addition, I also agree that Vitrage action is required for Mistral.

 

If possible, I might be able to do that in the future.(ASAP)

 

Thank you.

 

Best regards,

Minwook.

 

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.a...@nokia.com] 
Sent: Wednesday, April 4, 2018 4:21 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

 

Hi Minwook,

 

I discussed this issue with a Mistral contributor. 

Mistral has a long list of actions that can be used. Specifically, you can
use the std.ssh action to execute shell scripts.

 

Some useful commands:

 

mistral action-list

mistral action-get 

 

I’m not sure about the output of the std.ssh, and whether you can get it
from the action. I suggest you try it and see how it works.

The action is implemented here:
https://github.com/openstack/mistral/blob/master/mistral/actions/std_actions.py

 

If std.ssh does not suit your needs, you also have an option to implement
and run your own action in Mistral (either as an ssh action or as python
code).
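For a sense of what such a custom check action would do, here is a local stand-in sketched with subprocess. A real Mistral std.ssh action would run the command over ssh on the target host; the function name and result shape here are invented for illustration.

```python
import subprocess

def run_check(command):
    # Execute a shell-script check and capture what a UI would display:
    # return code plus stdout/stderr. A std.ssh action would do the same
    # remotely on the monitored host.
    proc = subprocess.run(command, shell=True, capture_output=True, text=True)
    return {
        "rc": proc.returncode,
        "output": proc.stdout.strip(),
        "error": proc.stderr.strip(),
    }
```

Whichever way the check runs, the interesting part for Vitrage is this small result structure that the UI can render.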

And BTW, it is not related to your current use case, but we can also add
Vitrage actions to Mistral, so the user can access Vitrage information (get
topology, get alarms) from Mistral workflows. 

 

Best regards,

Ifat

 

 

From: MinWookKim  >
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
 >
Date: Tuesday, 3 April 2018 at 15:19
To: "'OpenStack Development Mailing List (not for usage questions)'"
 >
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

 

Hello Ifat,

 

Thanks for your reply.

 

Your comments have been a great help to the proposal.  (sorry, I did not
think we could use Mistral).

 

If we use the Mistral workflow for the proposal, we can get better results
(we can get good results on performance and code conciseness).

 

Also, if we use the Mistral workflow, we do not need to write any
unnecessary code.

 

Since I don't know Mistral yet, I think it would be better to work out the
most efficient design, including Mistral, once I have grasped it.

 

If we run a check through a Mistral workflow, how about providing users
with a choice of tools that have the capability to perform checks?

 

We can get the results of the check through Mistral and the tools, but I
think we need at least some minimal functionality to manage them. What do you think?

 

I attached a picture of the actual UI that I simply implemented. I hope it
helps you understand. (The parameter and content have no meaning and are a
simple example.) : )

 

Thanks.

 

Best regards,

Minwook.

 

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.a...@nokia.com] 
Sent: Tuesday, April 3, 2018 8:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

 

Hi Minwook,

 

Thanks for the explanation, I understand the reasons for not running these
checks on a regular basis in Zabbix or other monitoring tools. It makes
sense. However, I don’t want to re-invent the wheel and add to Vitrage
functionality that already exists in other projects. 

 

How about using Mistral for the purpose of manually running these extra
checks? If you prepare the script/agent in advance, as well as the Mistral
workflow, I believe that Mistral can successfully execute the check and
return the results. I’m not so sure about the UI part, we will have to
figure out how and where the user can see the output. But it will save a lot
of effort around managing the checks, running a new 

Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-04-06 Thread Petr Kovar
On Fri, 06 Apr 2018 15:52:46 +0100
Stephen Finucane  wrote:

> On Fri, 2018-04-06 at 08:02 -0500, Sean McGinnis wrote:
> > > > 
> > > > How can we enable warning_is_error in the gate with the new PTI? It's 
> > > > easy enough to add the -W flag in tox.ini for local builds, but as you 
> > > > say the tox job is never called in the gate. In the gate zuul checks 
> > > > for 
> > > > it in the [build_sphinx] section of setup.cfg:
> > > > 
> > > > https://git.openstack.org/cgit/openstack-infra/zuul-jobs/tree/roles/sphinx/library/sphinx_check_warning_is_error.py#n23
> > > > 
> > > > [...]
> > > 
> > > I'd be more in favour of changing the zuul job to build with the '-W'
> > > flag. To be honest, there is no good reason to not have this flag
> > > enabled. I'm not sure that will be a popular opinion though as it may
> > > break some projects' builds (correctly, but still).
> > > 
> > > I'll propose a patch against zuul-jobs and see what happens :)
> > > 
> > > Stephen
> > > 
> > 
> > I am in favor of this too. We will probably need to give some teams some 
> > time
> > to get warnings fixed though. I haven't done any kind of extensive audit of
> > projects, but from a few I looked through, there are definitely a few that 
> > are
> > not erroring on warnings and are likely to be blocked if we suddenly flipped
> > the switch and errored on those.
> >
> > This is a legitimate issue though. In Cinder we had -W in the tox docs job, 
> > but
> > since that is no longer being enforced in the gate, running "tox -e docs" 
> > from
> > a fresh clone of master was failing. We really do need some way to enforce 
> > this
> > so things like that do not happen.
> 
> This. While forcing teams to do busywork is undeniably A Very
> Bad Thing (TM), I do think the longer we leave this, the worse it'll
> get. The zuul-jobs [1] patch will probably introduce some pain for
> projects but it seems like inevitable pain and we're in the right part
> of the cycle in which to do something like this. I'd be willing to help
> projects fix issues they encounter, which I expect will be minimal for
> most projects.

I too think enforcing -W is the way to go, so count me in for the
broken docs build help.

Thanks for pushing this forward!

Cheers,
pk
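The gate-side check being discussed reads setup.cfg; a rough stdlib approximation of that logic follows. It is loosely modeled on the zuul role linked earlier in the thread, not a copy of it.

```python
import configparser

def warning_is_error(setup_cfg_text):
    # True if [build_sphinx] warning-is-error is set truthily in setup.cfg,
    # which is the signal the gate uses to decide whether doc-build
    # warnings should fail the job.
    parser = configparser.ConfigParser()
    parser.read_string(setup_cfg_text)
    return parser.getboolean("build_sphinx", "warning-is-error", fallback=False)
```

Note the gap described above: a project can pass -W in its local tox docs env while this setup.cfg flag is absent, so the gate silently stops enforcing warnings.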



Re: [openstack-dev] [neutron] [OVN] Tempest API / Scenario tests and OVN metadata

2018-04-06 Thread Daniel Alvarez Sanchez
Hi,

Thanks Lucas for writing this down.

On Thu, Apr 5, 2018 at 11:35 AM, Lucas Alvares Gomes 
wrote:

> Hi,
>
> The tests below are failing in the tempest API / Scenario job that
> runs in the networking-ovn gate (non-voting):
>
> neutron_tempest_plugin.api.admin.test_quotas_negative.
> QuotasAdminNegativeTestJSON.test_create_port_when_quotas_is_full
> neutron_tempest_plugin.api.test_routers.RoutersIpV6Test.
> test_router_interface_status
> neutron_tempest_plugin.api.test_routers.RoutersTest.test_
> router_interface_status
> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_
> subnet_from_pool_with_prefixlen
> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_
> subnet_from_pool_with_quota
> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_
> subnet_from_pool_with_subnet_cidr
>
> Digging a bit into it I noticed that with the exception of the two
> "test_router_interface_status" (ipv6 and ipv4) all other tests are
> failing because of the way metadata works in networking-ovn.
>
> Taking the "test_create_port_when_quotas_is_full" as an example. The
> reason why it fails is because when the OVN metadata is enabled,
> networking-ovn will create a metadata port at the moment a network is created
> [0] and that will already fulfill the quota limit set by that test
> [1].
>
> That port will also allocate an IP from the subnet which will cause
> the rest of the tests to fail with a "No more IP addresses available
> on network ..." error.
>

With ML2/OVS we would run into the same quota problem if DHCP were
enabled for the created subnets. This means that if we modify the current
tests to enable DHCP and account for this extra port, they would be valid
for networking-ovn as well. Does that sound good, or do we still want to
isolate quotas?
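To make the failure mode concrete, here is a toy model of the quota accounting described above. It is purely illustrative, not tempest or neutron code; the class and method names are invented.

```python
class FakeNetworkBackend:
    """Toy model: some backends allocate a metadata port per network."""

    def __init__(self, creates_metadata_port):
        self.creates_metadata_port = creates_metadata_port
        self.ports_used = 0

    def create_network(self):
        if self.creates_metadata_port:
            # OVN-style metadata: a port is created along with the network
            # and counts against the tenant's port quota.
            self.ports_used += 1

    def create_port(self, port_quota):
        if self.ports_used >= port_quota:
            raise RuntimeError("Quota exceeded for port")
        self.ports_used += 1
```

With a port quota of 1, an ML2/OVS-style backend lets the test create exactly one port, while the OVN-style backend hits the quota immediately, which is exactly the mismatch the thread describes.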

>
> This is not very trivial to fix because:
>
> 1. Tempest should be backend agnostic. So, adding a conditional in the
> tempest test to check whether OVN is being used or not doesn't sound
> correct.
>
> 2. Creating a port to be used by the metadata agent is a core part of
> the design implementation for the metadata functionality [2]
>
> So, I'm sending this email to try to figure out what would be the best
> approach to deal with this problem and start working towards having
> that job to be voting in our gate. Here are some ideas:
>
> 1. Simply disable the tests that are affected by the metadata approach.
>
> 2. Disable metadata for the tempest API / Scenario tests (here's a
> test patch doing it [3])
>

IMHO, we don't want to do this, as metadata is likely to be enabled in all
clouds, whether using ML2/OVS or OVN, so it's good to keep exercising
this part.


>
> 3. Same as 1. but also create similar tempest tests specific for OVN
> somewhere else (in the networking-ovn tree?!)
>

As we discussed on IRC, I'm keen on doing this instead of getting bits in
tempest to do different things depending on the backend used. Unless
we want to enable DHCP on the subnets that these tests create :)


> What you think would be the best way to workaround this problem, any
> other ideas ?
>
> As for the "test_router_interface_status" tests that are failing
> independent of the metadata, there's a bug reporting the problem here
> [4]. So we should just fix it.
>
> [0] https://github.com/openstack/networking-ovn/blob/
> f3f5257fc465bbf44d589cc16e9ef7781f6b5b1d/networking_ovn/
> common/ovn_client.py#L1154
> [1] https://github.com/openstack/neutron-tempest-plugin/blob/
> 35bf37d1830328d72606f9c790b270d4fda2b854/neutron_tempest_
> plugin/api/admin/test_quotas_negative.py#L66
> [2] https://docs.openstack.org/networking-ovn/latest/
> contributor/design/metadata_api.html#overview-of-proposed-approach
> [3] https://review.openstack.org/#/c/558792/
> [4] https://bugs.launchpad.net/networking-ovn/+bug/1713835
>
> Cheers,
> Lucas
>

Thanks,
Daniel

>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [tripleo][ci] use of tags in launchpad bugs

2018-04-06 Thread Rafael Folco
Thanks for the clarifications about official tags. I was the one creating
random/non-official tags for tripleo bugs.
Although this may be annoying for some people, it helped me while
ruckering/rovering CI to open unique bugs and avoid dups for the first
time(s).
There isn't a standard way of filing a bug. People open bugs using
different/non-standard wording in summary and description.
I just thought it was a good idea to tag featuresetXXX, ovb, branch, etc.,
so when somebody asks me if there is a bug for the job XYZ, the bug could
be found more easily.

Since sprint 10 ruck/rover started recording notes [1] and this helps to
keep track of the issues.
Perhaps the CI team could implement something in CI monitoring that links a
bug to the failing job(s), e.g. [LP XX].

I'm doing a cleanup for the open bugs removing the non-official tags.

Thanks,

--Folco

[1] https://review.rdoproject.org/etherpad/p/ruckrover-sprint11


On Fri, Apr 6, 2018 at 6:09 AM, Jiří Stránský  wrote:

> On 5.4.2018 21:04, Alex Schultz wrote:
>
>> On Thu, Apr 5, 2018 at 12:55 PM, Wesley Hayutin 
>> wrote:
>>
>>> FYI...
>>>
>>> This is news to me so thanks to Emilien for pointing it out [1].
>>> There are official tags for tripleo launchpad bugs.  Personally, I like
>>> what
>>> I've seen recently with some extra tags as they could be helpful in
>>> finding
>>> the history of particular issues.
>>> So hypothetically would it be "wrong" to create an official tag for each
>>> featureset config number upstream?  I ask because that is adding a lot of
>>> tags but also serves as a good test case for what is good/bad use of
>>> tags.
>>>
>>>
>> We list official tags over in the specs repo[0].   That being said as
>> we investigate switching over to storyboard, we'll probably want to
>> revisit tags as they will have to be used more to replace some of the
>> functionality we had with launchpad (e.g. milestones).  You could
>> always add the tags without being an official tag. I'm not sure I
>> would really want all the featuresets as tags.  I'd rather see us
>> figure out what component is actually failing than rely on
>> a featureset (and the Rosetta stone for decoding featuresets to
>> functionality[1]).
>>
>
> We could also use both alongside. Component-based tags better relate to
> the actual root cause of the bug, while featureset-based tags are useful in
> relation to CI.
>
> E.g. "I see fs037 failing, i wonder if anyone already reported a bug for
> it" -- if the reporter tagged the bug, it would be really easy to figure
> out the answer.
>
> This might also again bring up the question of better job names to allow
> easier mapping to featuresets. IMO:
>
> tripleo-ci-centos-7-containers-multinode  -- not great
> tripleo-ci-centos-7-featureset010  -- not great
> tripleo-ci-centos-7-containers-mn-fs010  -- *happy face*
>
> Jirka
>
>
>
>>
>> Thanks,
>> -Alex
>>
>>
>> [0] http://git.openstack.org/cgit/openstack/tripleo-specs/tree/s
>> pecs/policy/bug-tagging.rst#n30
>> [1] https://git.openstack.org/cgit/openstack/tripleo-quickstart/
>> tree/doc/source/feature-configuration.rst#n21
>>
>>> Thanks
>>>
>>> [1] https://bugs.launchpad.net/tripleo/+manage-official-tags
>>>
>>> 
>> 
>>
>>
>
>



-- 
Rafael Folco
Senior Software Engineer


Re: [openstack-dev] [cinder][nova] about re-image the volume

2018-04-06 Thread Matt Riedemann

On 4/6/2018 5:09 AM, Matthew Booth wrote:

I think you're talking at cross purposes here: this won't require a
swap volume. Apart from anything else, swap volume only works on an
attached volume, and as previously discussed Nova will detach and
re-attach.

Gorka, the Nova api Matt is referring to is called volume update
externally. It's the operation required for live migrating an attached
volume between backends. It's called swap volume internally in Nova.


Yeah I was hoping we were just having a misunderstanding of what 'swap 
volume' in nova is, which is the blockRebase for an already attached 
volume to the guest, called from cinder during a volume retype or migration.


As for the re-image thing, nova would be detaching the volume from the 
guest prior to calling the new cinder re-image API, and then re-attach 
to the guest afterward - similar to how shelve and unshelve work, and 
for that matter how rebuild works today with non-root volumes.


--

Thanks,

Matt



Re: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects

2018-04-06 Thread super user
I will help to update some.

On Fri, Apr 6, 2018 at 8:42 PM, Doug Hellmann  wrote:

> Excerpts from super user's message of 2018-04-06 17:10:32 +0900:
> > Hope you fix this soon; there are many patches that depend on the 'match the
> > minimum version' problem, which causes requirements-check to fail.
>
> The problem is with *those patches* and not the check.
>
> I've been trying to update some, but my time has been limited this week
> for personal reasons. I encourage project teams to run the script I
> provided or edit their lower-constraints.txt file by hand to fix the
> issues.
>
> Doug
>
>


Re: [openstack-dev] [neutron] [OVN] Tempest API / Scenario tests and OVN metadata

2018-04-06 Thread Sławek Kapłoński
Hi,

I don’t know how networking-ovn is working but I have one question.


> Wiadomość napisana przez Daniel Alvarez Sanchez  w dniu 
> 06.04.2018, o godz. 15:30:
> 
> Hi,
> 
> Thanks Lucas for writing this down.
> 
> On Thu, Apr 5, 2018 at 11:35 AM, Lucas Alvares Gomes  
> wrote:
> Hi,
> 
> The tests below are failing in the tempest API / Scenario job that
> runs in the networking-ovn gate (non-voting):
> 
> neutron_tempest_plugin.api.admin.test_quotas_negative.QuotasAdminNegativeTestJSON.test_create_port_when_quotas_is_full
> neutron_tempest_plugin.api.test_routers.RoutersIpV6Test.test_router_interface_status
> neutron_tempest_plugin.api.test_routers.RoutersTest.test_router_interface_status
> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_prefixlen
> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_quota
> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_subnet_cidr
> 
> Digging a bit into it I noticed that with the exception of the two
> "test_router_interface_status" (ipv6 and ipv4) all other tests are
> failing because of the way metadata works in networking-ovn.
> 
> Taking the "test_create_port_when_quotas_is_full" as an example. The
> reason why it fails is because when the OVN metadata is enabled,
> networking-ovn will create a metadata port at the moment a network is created
> [0] and that will already fulfill the quota limit set by that test
> [1].
> 
> That port will also allocate an IP from the subnet which will cause
> the rest of the tests to fail with a "No more IP addresses available
> on network ..." error.
> 
> With ML2/OVS we would run into the same quota problem if DHCP were
> enabled for the created subnets. This means that if we modify the current
> tests to enable DHCP on them and account for this extra port, they would
> be valid for networking-ovn as well. Does that sound good, or do we still
> want to isolate quotas?

If DHCP is enabled for networking-ovn, will it also use one more port or
not? If so, then you will still have the same problem: with DHCP, ML2/OVS
will have one port created, while networking-ovn will have two ports.
If it's not like that, then I think this solution, with a comment in the
test code explaining why DHCP is enabled, should be good IMO.

> 
> This is not very trivial to fix because:
> 
> 1. Tempest should be backend agnostic. So, adding a conditional in the
> tempest test to check whether OVN is being used or not doesn't sound
> correct.
> 
> 2. Creating a port to be used by the metadata agent is a core part of
> the design implementation for the metadata functionality [2]
> 
> So, I'm sending this email to try to figure out what would be the best
> approach to deal with this problem and start working towards having
> that job to be voting in our gate. Here are some ideas:
> 
> 1. Simply disable the tests that are affected by the metadata approach.
> 
> 2. Disable metadata for the tempest API / Scenario tests (here's a
> test patch doing it [3])
> 
> IMHO, we don't want to do this as metadata is likely to be enabled in all the
> clouds either using ML2/OVS or OVN so it's good to keep exercising
> this part.
> 
> 
> 3. Same as 1. but also create similar tempest tests specific for OVN
> somewhere else (in the networking-ovn tree?!)
> 
> As we discussed on IRC I'm keen on doing this instead of getting bits in
> tempest to do different things depending on the backend used. Unless
> we want to enable DHCP on the subnets that these tests create :)
> 
> 
> What do you think would be the best way to work around this problem? Any
> other ideas?
> 
> As for the "test_router_interface_status" tests that are failing
> independent of the metadata, there's a bug reporting the problem here
> [4]. So we should just fix it.
> 
> [0] 
> https://github.com/openstack/networking-ovn/blob/f3f5257fc465bbf44d589cc16e9ef7781f6b5b1d/networking_ovn/common/ovn_client.py#L1154
> [1] 
> https://github.com/openstack/neutron-tempest-plugin/blob/35bf37d1830328d72606f9c790b270d4fda2b854/neutron_tempest_plugin/api/admin/test_quotas_negative.py#L66
> [2] 
> https://docs.openstack.org/networking-ovn/latest/contributor/design/metadata_api.html#overview-of-proposed-approach
> [3] https://review.openstack.org/#/c/558792/
> [4] https://bugs.launchpad.net/networking-ovn/+bug/1713835
> 
> Cheers,
> Lucas
> 
> Thanks,
> Daniel
> 

Re: [openstack-dev] Proposal: The OpenStack Client Library Guide

2018-04-06 Thread Ben Nemec
As someone who has dealt with this in the past[1], I can appreciate the 
complexities of trying to write software that can handle all the various 
auth options in OpenStack.  In my case I gave up and passed all of that 
off to os-client-config (which, to be fair, is probably what I should 
have done in the first place, but isn't an option for non-Python 
projects).  I'm not sure I can actually help with this since it didn't 
make any sense to me, but a big +1 to bringing some sanity to this whole 
thing.


1: 
http://blog.nemebean.com/content/creating-openstack-client-instances-python


On 04/05/2018 10:55 PM, Adrian Turjak wrote:

Hello fellow OpenStackers,

As some of you have probably heard me rant, I've been thinking about how
to better solve the problem with various tools that support OpenStack or
are meant to be OpenStack clients/tools which don't always work as
expected by those of us directly in the community.

Mostly around things like auth and variable name conventions, things
where there should really be consistency and overlap.

The example that most recently triggered this discussion was how
OpenStackClient (and os-client-config) supports certain elements of
clouds.yaml and ENVVAR config, while Terraform supports it differently.
Both you'd often run on the cli and often both in the same terminal, so
it is always weird when certain auth and scoping values don't work the
same. This is being worked on, but little problems like this an an
ongoing problem.

The proposal, write an authoritative guide/spec on the basics of
implementing a client library or tool for any given language that talks
to OpenStack.

Elements we ought to cover:
- How all the various auth methods in Keystone work, how the whole authn
and authz process works with Keystone, and how to actually use it to do
what you want.
- What common client configuration options exist and how they work
(common variable names, ENVVARs, clouds.yaml), with something like
common ENVVARs documented and a list maintained so there is one
definitive source for what to expect people to be using.
- Per project guides on how the API might act that helps facilitate
starting to write code against it beyond just the API reference, and
examples of what to expect. Not exactly a duplicate of the API ref, but
more a 'common pitfalls and confusing elements to be wary of' section
that builds on the API ref of each project.
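For example, here is a minimal clouds.yaml entry, one of the client configuration sources such a guide would pin down. All values below are made up for illustration:

```yaml
clouds:
  mycloud:
    auth:
      auth_url: https://keystone.example.com:5000/v3
      username: demo
      password: secret
      project_name: demo
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
```

A tool that follows the same conventions as os-client-config would then select this entry via `OS_CLOUD=mycloud` or a `--os-cloud mycloud` flag; documenting exactly that mapping is the kind of thing the guide should cover.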

There are likely other things we want to include, and we need to work
out what those are, but ideally this should be a new documentation
focused project which will result in useful guide on what someone needs
to take any programming language, and write a library that works as we
expect it should against OpenStack. Such a guide would also help any
existing libraries ensure they themselves do fully understand and use
the OpenStack auth and service APIs as expected. It should also help to
ensure programmers working across multiple languages and systems have a
much easier time interacting with all the various libraries they might
touch.

A lot of this knowledge exists, but it's hard to parse and not well
documented. We have reference implementations of it all in the likes of
OpenStackClient, Keystoneauth1, and the OpenStackSDK itself (which
os-client-config is now a part of), but what we need is a language
agnostic guide rather than the assumption that people will read the code
of our official projects. Even the API ref itself isn't entirely helpful
since in a lot of cases it only covers the most basic of examples for
each API.
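As a concrete example of the kind of language-agnostic detail the guide could document: the JSON body for a scoped Keystone v3 password authentication (POST /v3/auth/tokens), which every client library in every language has to construct one way or another. A minimal sketch in Python, with made-up credential values:

```python
import json


def v3_password_auth_body(username, password, user_domain,
                          project_name, project_domain):
    """Build the request body for POST /v3/auth/tokens: password
    authentication scoped to a project."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": user_domain},
                        "password": password,
                    }
                },
            },
            "scope": {
                "project": {
                    "name": project_name,
                    "domain": {"name": project_domain},
                }
            },
        }
    }


body = v3_password_auth_body("demo", "secret", "Default", "demo", "Default")
print(json.dumps(body, indent=2))
```

The token then comes back in the X-Subject-Token response header and the service catalog in the response body; spelling out details like that, rather than leaving them to be reverse-engineered from keystoneauth1, is exactly what the guide would be for.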

There appears to be interest in something like this, so lets start with
a mailing list discussion, and potentially turn it into something more
official if this leads anywhere useful. :)

Cheers,
Adrian







[openstack-dev] [zun] zun-api error

2018-04-06 Thread Hongbin Lu
Hi Murali,

It looks like your zunclient was sending API requests to
http://10.11.142.2:9511/v1/services , which doesn't seem to be the right
API endpoint. According to the Keystone endpoint you configured, the API
endpoint of Zun should be http://10.11.142.2:9517/v1/services
 (it is on port 9517 instead of 9511).

What confused the zunclient is the endpoint's type you configured in
Keystone. Zun expects an endpoint of type "container" but it was configured
to be "zun-container" in your setup. I believe the error will be resolved
if you can update the Zun endpoint from type "zun-container" to type
"container". Please give it a try and let us know.

Best regards,
Hongbin

On Thu, Apr 5, 2018 at 7:27 PM, Murali B  wrote:

> Hi Hongbin,
>
> Thank you for your help
>
> As per our discussion, here is the output for my current API on pike. I
> am not sure which version of zun client I should use for pike.
>
> root@cluster3-2:~/python-zunclient# zun service-list
> ERROR: Not Acceptable (HTTP 406) (Request-ID: req-be69266e-b641-44b9-9739-
> 0c2d050f18b3)
> root@cluster3-2:~/python-zunclient# zun --debug service-list
> DEBUG (extension:180) found extension EntryPoint.parse('vitrage-keycloak
> = vitrageclient.auth:VitrageKeycloakLoader')
> DEBUG (extension:180) found extension EntryPoint.parse('vitrage-noauth =
> vitrageclient.auth:VitrageNoAuthLoader')
> DEBUG (extension:180) found extension EntryPoint.parse('noauth =
> cinderclient.contrib.noauth:CinderNoAuthLoader')
> DEBUG (extension:180) found extension EntryPoint.parse('v2token =
> keystoneauth1.loading._plugins.identity.v2:Token')
> DEBUG (extension:180) found extension EntryPoint.parse('none =
> keystoneauth1.loading._plugins.noauth:NoAuth')
> DEBUG (extension:180) found extension EntryPoint.parse('v3oauth1 =
> keystoneauth1.extras.oauth1._loading:V3OAuth1')
> DEBUG (extension:180) found extension EntryPoint.parse('admin_token =
> keystoneauth1.loading._plugins.admin_token:AdminToken')
> DEBUG (extension:180) found extension EntryPoint.parse('v3oidcauthcode =
> keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAuthorizationCode
> ')
> DEBUG (extension:180) found extension EntryPoint.parse('v2password =
> keystoneauth1.loading._plugins.identity.v2:Password')
> DEBUG (extension:180) found extension EntryPoint.parse('v3samlpassword =
> keystoneauth1.extras._saml2._loading:Saml2Password')
> DEBUG (extension:180) found extension EntryPoint.parse('v3password =
> keystoneauth1.loading._plugins.identity.v3:Password')
> DEBUG (extension:180) found extension EntryPoint.parse('v3adfspassword =
> keystoneauth1.extras._saml2._loading:ADFSPassword')
> DEBUG (extension:180) found extension EntryPoint.parse('v3oidcaccesstoken
> = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAccessToken')
> DEBUG (extension:180) found extension EntryPoint.parse('v3oidcpassword =
> keystoneauth1.loading._plugins.identity.v3:OpenIDConnectPassword')
> DEBUG (extension:180) found extension EntryPoint.parse('v3kerberos =
> keystoneauth1.extras.kerberos._loading:Kerberos')
> DEBUG (extension:180) found extension EntryPoint.parse('token =
> keystoneauth1.loading._plugins.identity.generic:Token')
> DEBUG (extension:180) found extension 
> EntryPoint.parse('v3oidcclientcredentials
> = keystoneauth1.loading._plugins.identity.v3:
> OpenIDConnectClientCredentials')
> DEBUG (extension:180) found extension EntryPoint.parse('v3tokenlessauth =
> keystoneauth1.loading._plugins.identity.v3:TokenlessAuth')
> DEBUG (extension:180) found extension EntryPoint.parse('v3token =
> keystoneauth1.loading._plugins.identity.v3:Token')
> DEBUG (extension:180) found extension EntryPoint.parse('v3totp =
> keystoneauth1.loading._plugins.identity.v3:TOTP')
> DEBUG (extension:180) found extension 
> EntryPoint.parse('v3applicationcredential
> = keystoneauth1.loading._plugins.identity.v3:ApplicationCredential')
> DEBUG (extension:180) found extension EntryPoint.parse('password =
> keystoneauth1.loading._plugins.identity.generic:Password')
> DEBUG (extension:180) found extension EntryPoint.parse('v3fedkerb =
> keystoneauth1.extras.kerberos._loading:MappedKerberos')
> DEBUG (extension:180) found extension EntryPoint.parse('v1password =
> swiftclient.authv1:PasswordLoader')
> DEBUG (extension:180) found extension EntryPoint.parse('token_endpoint =
> openstackclient.api.auth_plugin:TokenEndpoint')
> DEBUG (extension:180) found extension EntryPoint.parse('gnocchi-basic =
> gnocchiclient.auth:GnocchiBasicLoader')
> DEBUG (extension:180) found extension EntryPoint.parse('gnocchi-noauth =
> gnocchiclient.auth:GnocchiNoAuthLoader')
> DEBUG (extension:180) found extension EntryPoint.parse('aodh-noauth =
> aodhclient.noauth:AodhNoAuthLoader')
> DEBUG (session:372) REQ: curl -g -i -X GET http://ubuntu16:35357/v3 -H
> "Accept: application/json" -H "User-Agent: zun keystoneauth1/3.4.0
> python-requests/2.18.1 CPython/2.7.12"
> DEBUG (connectionpool:207) Starting 

Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-04-06 Thread Stephen Finucane
On Fri, 2018-04-06 at 08:02 -0500, Sean McGinnis wrote:
> > > 
> > > How can we enable warning_is_error in the gate with the new PTI? It's 
> > > easy enough to add the -W flag in tox.ini for local builds, but as you 
> > > say the tox job is never called in the gate. In the gate zuul checks for 
> > > it in the [build_sphinx] section of setup.cfg:
> > > 
> > > https://git.openstack.org/cgit/openstack-infra/zuul-jobs/tree/roles/sphinx/library/sphinx_check_warning_is_error.py#n23
> > > 
> > > [...]
> > 
> > I'd be more in favour of changing the zuul job to build with the '-W'
> > flag. To be honest, there is no good reason to not have this flag
> > enabled. I'm not sure that will be a popular opinion though as it may
> > break some projects' builds (correctly, but still).
> > 
> > I'll propose a patch against zuul-jobs and see what happens :)
> > 
> > Stephen
> > 
> 
> I am in favor of this too. We will probably need to give some teams some time
> to get warnings fixed though. I haven't done any kind of extensive audit of
> projects, but from a few I looked through, there are definitely a few that are
> not erroring on warnings and are likely to be blocked if we suddenly flipped
> the switch and errored on those.
>
> This is a legitimate issue though. In Cinder we had -W in the tox docs job, 
> but
> since that is no longer being enforced in the gate, running "tox -e docs" from
> a fresh clone of master was failing. We really do need some way to enforce 
> this
> so things like that do not happen.

This. While forcing busywork on teams is undeniably A Very
Bad Thing (TM), I do think the longer we leave this, the worse it'll
get. The zuul-jobs [1] patch will probably introduce some pain for
projects but it seems like inevitable pain and we're in the right part
of the cycle in which to do something like this. I'd be willing to help
projects fix issues they encounter, which I expect will be minimal for
most projects.

Stephen

[1] https://review.openstack.org/559348



Re: [openstack-dev] [all][requirements] uncapping eventlet

2018-04-06 Thread Matthew Thode
On 18-04-06 09:41:07, Clark Boylan wrote:
> On Fri, Apr 6, 2018, at 9:34 AM, Matthew Thode wrote:
> > On 18-04-06 09:02:29, Jens Harbott wrote:
> > > 2018-04-05 19:26 GMT+00:00 Matthew Thode :
> > > > On 18-04-05 20:11:04, Graham Hayes wrote:
> > > >> On 05/04/18 16:47, Matthew Thode wrote:
> > > >> > eventlet-0.22.1 has been out for a while now, we should try and use 
> > > >> > it.
> > > >> > Going to be fun times.
> > > >> >
> > > >> > I have a review projects can depend upon if they wish to test.
> > > >> > https://review.openstack.org/533021
> > > >>
> > > >> It looks like we may have an issue with oslo.service -
> > > >> https://review.openstack.org/#/c/559144/ is failing gates.
> > > >>
> > > >> Also - what is the dance for this to get merged? It doesn't look like 
> > > >> we
> > > >> can merge this while oslo.service has the old requirement restrictions.
> > > >>
> > > >
> > > > The dance is as follows.
> > > >
> > > > 0. provide review for projects to test new eventlet version
> > > >projects using eventlet should make backwards compat code changes at
> > > >this time.
> > > 
> > > But this step is currently failing. Keystone doesn't even start when
> > > eventlet-0.22.1 is installed, because loading oslo.service fails with
> > > its pkg definition still requiring the capped eventlet:
> > > 
> > > http://logs.openstack.org/21/533021/4/check/legacy-requirements-integration-dsvm/7f7c3a8/logs/screen-keystone.txt.gz#_Apr_05_16_11_27_748482
> > > 
> > > So it looks like we need to have an uncapped release of oslo.service
> > > before we can proceed here.
> > > 
> > 
> > Ya, we may have to uncap and rely on upper-constraints to keep openstack
> > gate from falling over.  The new steps would be the following:
> 
> My understanding of our use of upper constraints was that this should 
> (almost) always be the case for (almost) all dependencies.  We should rely on 
> constraints instead of requirements caps. Capping libs like pbr or eventlet 
> and any other that is in use globally is incredibly difficult to work with 
> when you want to uncap it because you have to coordinate globally. Instead if 
> using constraints you just bump the constraint and are done.
> 
> It is probably worthwhile examining if we have any other deps in the 
> situation and proactively addressing them rather than waiting for when we 
> really need to fix them.
> 

That's constantly on our list of things to do.  In the past the only
time we've capped is when we know upstream is releasing breaking
versions and we want to hold off for a cycle or until it's fixed.  It
also has the benefit of telling consumers/packagers about something
'hard breaking'.

networkx is next on the list...
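To make the mechanics concrete (versions below are illustrative): a cap lives in every consuming project's requirements.txt, while a constraint is a single line in the global upper-constraints.txt:

```
# requirements.txt in each project -- capped; uncapping needs a global dance:
eventlet!=0.18.3,>=0.18.2,<0.21.0

# requirements.txt -- uncapped; the ceiling moves to constraints:
eventlet!=0.18.3,>=0.18.2

# upper-constraints.txt (one global file) -- bump one line to move everyone:
eventlet===0.22.1
```

This is why bumping a constraint is cheap while removing a cap from dozens of projects plus released libraries like oslo.service is not.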

-- 
Matthew Thode (prometheanfire)




Re: [openstack-dev] [openstack-infra][openstack-zuul-jobs]Questions about playbook copy module

2018-04-06 Thread Clark Boylan
On Fri, Apr 6, 2018, at 2:32 AM, Andreas Jaeger wrote:
> On 2018-04-06 11:20, Xinni Ge wrote:
> > Sorry, forgot to reply to the mail list.
> > 
> > On Fri, Apr 6, 2018 at 6:18 PM, Xinni Ge  > > wrote:
> > 
> > Hi, Andreas.
> > 
> > Thanks for reply. This is the link of log I am seeing.
> > 
> > http://logs.openstack.org/39/39067dbc1dee99d227f8001595633b5cc98cfc53/release/xstatic-check-version/9172297/ara-report/
> > 
> > 
> > 
> 
> thanks, your analysis is correct, seems we seldom release xstatic packages ;(
> 
> fix is at https://review.openstack.org/559300
> 
> Once that is merged, an infra-root can rerun the release job - please
> ask on #openstack-infra IRC channel,

I've re-enqueued the tag ref and we now have a new failure: 
http://logs.openstack.org/39/39067dbc1dee99d227f8001595633b5cc98cfc53/release/xstatic-check-version/c5baf7e/ara-report/result/09433617-44dd-4ffd-9c57-d62e04dfd75e/.

Reading into that we appear to be running the script from the wrong local 
directory so relative paths don't work as expected. I have proposed 
https://review.openstack.org/559373 to fix this.

Clark



Re: [openstack-dev] [neutron] [OVN] Tempest API / Scenario tests and OVN metadata

2018-04-06 Thread Sławek Kapłoński
Hi,

Another idea is to modify the test so that it will:
1. Check how many ports are already in the tenant,
2. Set the quota to the actual number of ports + 1, instead of the hardcoded 1 as it is now,
3. Try to add 2 ports - exactly as it is now.

I think that this should still be backend agnostic and should fix this problem.
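The steps above can be sketched roughly like this; the client here is a self-contained stand-in for the tempest admin client, not real tempest code:

```python
class OverQuota(Exception):
    """Raised when creating a port would exceed the tenant quota."""


class FakeClient:
    """Stand-in for the tempest admin client. An OVN-style backend starts
    with one hidden metadata port already created for the tenant."""

    def __init__(self, preexisting_ports):
        self._ports = preexisting_ports
        self._quota = None

    def list_ports(self, project_id):
        return list(range(self._ports))

    def set_port_quota(self, project_id, limit):
        self._quota = limit

    def create_port(self, project_id):
        if self._quota is not None and self._ports >= self._quota:
            raise OverQuota()
        self._ports += 1


def run_port_quota_test(client, project_id):
    # 1. Check how many ports are already in the tenant (metadata/DHCP ports).
    existing = len(client.list_ports(project_id))
    # 2. Set the quota to the actual number of ports + 1, not a hardcoded 1.
    client.set_port_quota(project_id, existing + 1)
    # 3. Try to add 2 ports: the first fits in the quota, the second must
    #    fail with an over-quota error, regardless of backend.
    client.create_port(project_id)
    try:
        client.create_port(project_id)
    except OverQuota:
        return True
    return False


# An OVN-style deployment (one hidden metadata port) passes...
print(run_port_quota_test(FakeClient(preexisting_ports=1), "demo"))
# ...and so does an ML2/OVS deployment with no extra ports.
print(run_port_quota_test(FakeClient(preexisting_ports=0), "demo"))
```

Because the quota is computed from the observed port count, the test no longer cares whether the backend pre-creates metadata or DHCP ports.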

> Wiadomość napisana przez Sławek Kapłoński  w dniu 
> 06.04.2018, o godz. 17:08:
> 
> Hi,
> 
> I don’t know how networking-ovn is working but I have one question.
> 
> 
>> Wiadomość napisana przez Daniel Alvarez Sanchez  w dniu 
>> 06.04.2018, o godz. 15:30:
>> 
>> Hi,
>> 
>> Thanks Lucas for writing this down.
>> 
>> On Thu, Apr 5, 2018 at 11:35 AM, Lucas Alvares Gomes  
>> wrote:
>> Hi,
>> 
>> The tests below are failing in the tempest API / Scenario job that
>> runs in the networking-ovn gate (non-voting):
>> 
>> neutron_tempest_plugin.api.admin.test_quotas_negative.QuotasAdminNegativeTestJSON.test_create_port_when_quotas_is_full
>> neutron_tempest_plugin.api.test_routers.RoutersIpV6Test.test_router_interface_status
>> neutron_tempest_plugin.api.test_routers.RoutersTest.test_router_interface_status
>> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_prefixlen
>> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_quota
>> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_subnet_cidr
>> 
>> Digging a bit into it I noticed that with the exception of the two
>> "test_router_interface_status" (ipv6 and ipv4) all other tests are
>> failing because of the way metadata works in networking-ovn.
>> 
>> Taking the "test_create_port_when_quotas_is_full" as an example. The
>> reason why it fails is because when the OVN metadata is enabled,
>> networking-ovn will create a metadata port at the moment a network is created
>> [0] and that will already fulfill the quota limit set by that test
>> [1].
>> 
>> That port will also allocate an IP from the subnet which will cause
>> the rest of the tests to fail with a "No more IP addresses available
>> on network ..." error.
>> 
>> With ML2/OVS we would run into the same quota problem if DHCP were
>> enabled for the created subnets. This means that if we modify the current
>> tests to enable DHCP on them and account for this extra port, they would
>> be valid for networking-ovn as well. Does that sound good, or do we still
>> want to isolate quotas?
> 
> If DHCP is enabled for networking-ovn, will it also use one more port or
> not? If so, then you will still have the same problem: with DHCP, ML2/OVS
> will have one port created, while networking-ovn will have two ports.
> If it's not like that, then I think this solution, with a comment in the
> test code explaining why DHCP is enabled, should be good IMO.
> 
>> 
>> This is not very trivial to fix because:
>> 
>> 1. Tempest should be backend agnostic. So, adding a conditional in the
>> tempest test to check whether OVN is being used or not doesn't sound
>> correct.
>> 
>> 2. Creating a port to be used by the metadata agent is a core part of
>> the design implementation for the metadata functionality [2]
>> 
>> So, I'm sending this email to try to figure out what would be the best
>> approach to deal with this problem and start working towards having
>> that job to be voting in our gate. Here are some ideas:
>> 
>> 1. Simply disable the tests that are affected by the metadata approach.
>> 
>> 2. Disable metadata for the tempest API / Scenario tests (here's a
>> test patch doing it [3])
>> 
>> IMHO, we don't want to do this, as metadata is likely to be enabled in all
>> clouds, whether using ML2/OVS or OVN, so it's good to keep exercising
>> this part.
>> 
>> 
>> 3. Same as 1. but also create similar tempest tests specific for OVN
>> somewhere else (in the networking-ovn tree?!)
>> 
>> As we discussed on IRC I'm keen on doing this instead of getting bits in
>> tempest to do different things depending on the backend used. Unless
>> we want to enable DHCP on the subnets that these tests create :)
>> 
>> 
>> What do you think would be the best way to work around this problem?
>> Any other ideas?
>> 
>> As for the "test_router_interface_status" tests that are failing
>> independently of metadata, there's a bug reporting the problem here
>> [4]. So we should just fix it.
>> 
>> [0] 
>> https://github.com/openstack/networking-ovn/blob/f3f5257fc465bbf44d589cc16e9ef7781f6b5b1d/networking_ovn/common/ovn_client.py#L1154
>> [1] 
>> https://github.com/openstack/neutron-tempest-plugin/blob/35bf37d1830328d72606f9c790b270d4fda2b854/neutron_tempest_plugin/api/admin/test_quotas_negative.py#L66
>> [2] 
>> https://docs.openstack.org/networking-ovn/latest/contributor/design/metadata_api.html#overview-of-proposed-approach
>> [3] https://review.openstack.org/#/c/558792/
>> [4] https://bugs.launchpad.net/networking-ovn/+bug/1713835
>> 
>> Cheers,
>> Lucas
>> 
>> 

Re: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Stein" release

2018-04-06 Thread Kashyap Chamarthy
On Fri, Apr 06, 2018 at 06:07:18PM +0200, Thomas Goirand wrote:
> On 04/06/2018 12:07 PM, Kashyap Chamarthy wrote:

[...]

> > Note: You don't even have to build the versions from 'Buster', which are
> > quite new.  Just the slightly more conservative libvirt 3.2.0 and QEMU
> > 2.9.0 -- only if it's possible.
> 
> Actually, for *official* backports, it's the policy to always update to
> whatever is in testing until testing is frozen.

I see.  Sure, that's fine, too (as "Queens" UCA also has it).  Whatever
is efficient and least painful from a maintenance POV.

> I could maintain an unofficial backport in stretch-stein.debian.net
> though.
>
> > That said ... I just spent some time comparing the release notes of
> > libvirt 3.0.0 and libvirt 3.2.0 [1][2].  By using libvirt 3.2.0 and
> > QEMU 2.9.0, Debian users will be spared a lot of critical bugs (see
> > the full list in [3]) in the CPU comparison area.
> > 
> > [1] 
> > https://www.redhat.com/archives/libvirt-announce/2017-April/msg0.html
> > -- Release of libvirt-3.2.0
> > [2] 
> > https://www.redhat.com/archives/libvirt-announce/2017-January/msg3.html
> > --  Release of libvirt-3.0.0
> > [3] https://www.redhat.com/archives/libvir-list/2017-February/msg01295.html
> 
> So, because of these bugs, would you already advise Nova users to use
> libvirt 3.2.0 for Queens?

FWIW, I'd suggest so, if it's not too much maintenance.  It'll just
spare you additional bug reports in that area, and the overall default
experience when dealing with CPU models would be relatively much better.
(Another way to look at it is, multiple other "conservative" long-term
stable distributions also provide libvirt 3.2.0 and QEMU 2.9.0, so that
should give you confidence.)

Again, I don't want to push too hard on this.  If that'll be messy from
a package maintenance POV for you / Debian maintainers, then we could
settle for whatever is in 'Stretch'.

Thanks for looking into it.

-- 
/kashyap

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Stein" release

2018-04-06 Thread Matt Riedemann

On 4/6/2018 12:07 PM, Kashyap Chamarthy wrote:

FWIW, I'd suggest so, if it's not too much maintenance.  It'll just
spare you additional bug reports in that area, and the overall default
experience when dealing with CPU models would be relatively much better.
(Another way to look at it is, multiple other "conservative" long-term
stable distributions also provide libvirt 3.2.0 and QEMU 2.9.0, so that
should give you confidence.)

Again, I don't want to push too hard on this.  If that'll be messy from
a package maintenance POV for you / Debian maintainers, then we could
settle for whatever is in 'Stretch'.


Keep in mind that Kashyap has a tendency to want the latest and greatest 
of libvirt and qemu at all times for all of those delicious bug fixes. 
But we also know that new code brings new, not-yet-fixed bugs.


Keep in mind the big picture here, we're talking about bumping from 
minimum required (in Rocky) libvirt 1.3.1 to at least 3.0.0 (in Stein) 
and qemu 2.5.0 to at least 2.8.0, so I think that's already covering 
some good ground. Let's not get greedy. :)


--

Thanks,

Matt



Re: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Stein" release

2018-04-06 Thread Thomas Goirand
On 04/06/2018 12:07 PM, Kashyap Chamarthy wrote:
>> dh_install: Cannot find (any matches for) "/usr/lib/*/wireshark/" (tried
>> in "." and "debian/tmp")
>> dh_install: libvirt-wireshark missing files: /usr/lib/*/wireshark/
>> dh_install: missing files, aborting
> 
> That seems like a problem in the Debian packaging system, not in
> libvirt.

It sure is. As I wrote, it should be a minor packaging issue.

>  I double-checked with the upstream folks, and the install
> rules for the Wireshark plugin don't have /*/ in there.

That part (i.e. the path with *) isn't a mistake; it's because Debian has
multiarch support, so we get paths like this (just a random example from
my laptop):

/usr/lib/i386-linux-gnu/pulseaudio
/usr/lib/x86_64-linux-gnu/pulseaudio

> Note: You don't even have to build the versions from 'Buster', which are
> quite new.  Just the slightly more conservative libvirt 3.2.0 and QEMU
> 2.9.0 -- only if it's possible.

Actually, for *official* backports, it's the policy to always update to
whatever is in testing until testing is frozen. I could maintain an
unofficial backport in stretch-stein.debian.net though.

> That said ... I just spent some time comparing the release notes of
> libvirt 3.0.0 and libvirt 3.2.0 [1][2].  By using libvirt 3.2.0 and QEMU
> 2.9.0, Debian users will be spared a lot of critical bugs (see the full
> list in [3]) in the CPU comparison area.
> 
> [1] https://www.redhat.com/archives/libvirt-announce/2017-April/msg0.html
> -- Release of libvirt-3.2.0
> [2] 
> https://www.redhat.com/archives/libvirt-announce/2017-January/msg3.html
> --  Release of libvirt-3.0.0
> [3] https://www.redhat.com/archives/libvir-list/2017-February/msg01295.html

So, because of these bugs, would you already advise Nova users to use
libvirt 3.2.0 for Queens?

Cheers,

Thomas Goirand (zigo)



Re: [openstack-dev] [all][requirements] uncapping eventlet

2018-04-06 Thread Matthew Thode
On 18-04-06 09:02:29, Jens Harbott wrote:
> 2018-04-05 19:26 GMT+00:00 Matthew Thode :
> > On 18-04-05 20:11:04, Graham Hayes wrote:
> >> On 05/04/18 16:47, Matthew Thode wrote:
> >> > eventlet-0.22.1 has been out for a while now, we should try and use it.
> >> > Going to be fun times.
> >> >
> >> > I have a review projects can depend upon if they wish to test.
> >> > https://review.openstack.org/533021
> >>
> >> It looks like we may have an issue with oslo.service -
> >> https://review.openstack.org/#/c/559144/ is failing gates.
> >>
> >> Also - what is the dance for this to get merged? It doesn't look like we
> >> can merge this while oslo.service has the old requirement restrictions.
> >>
> >
> > The dance is as follows.
> >
> > 0. provide review for projects to test new eventlet version
> >projects using eventlet should make backwards compat code changes at
> >this time.
> 
> But this step is currently failing. Keystone doesn't even start when
> eventlet-0.22.1 is installed, because loading oslo.service fails with
> its pkg definition still requiring the capped eventlet:
> 
> http://logs.openstack.org/21/533021/4/check/legacy-requirements-integration-dsvm/7f7c3a8/logs/screen-keystone.txt.gz#_Apr_05_16_11_27_748482
> 
> So it looks like we need to have an uncapped release of oslo.service
> before we can proceed here.
> 

Ya, we may have to uncap and rely on upper-constraints to keep the openstack
gate from falling over.  The new steps would be the following:

1. uncap eventlet https://review.openstack.org/559367
2. push uncapped eventlet out via requirements updates to all consumers
3. make review in requirements changing upper-constraints.txt for
   eventlet
4. projects depend on the requirements change to do work on the new eventlet
   the generated patch should merge into the project without the
   requirements change merged (this means the change should pass in the
   dependent review (to test 0.22.1) AND in a separate non-dependent
   review (to test the current constraint)).  You would merge the
   non-dependent one once both reviews are passing.
5. Once some yet-to-be-determined set of projects works with the new eventlet,
   we'd merge the updated upper-constraint into requirements.

Steps 2 and 3 can happen in parallel; projects can move to step 4 after
step 3 is done (step 2 is only needed for their own project and their
project's dependencies).

There are bound to be projects that will break because they didn't take the
opportunity to fix themselves, but this should help reduce breakage.  I
suggest a one-month deadline after steps 2/3 are considered complete before
step 5 is performed.

-- 
Matthew Thode (prometheanfire)




Re: [openstack-dev] [all][requirements] uncapping eventlet

2018-04-06 Thread Clark Boylan
On Fri, Apr 6, 2018, at 9:34 AM, Matthew Thode wrote:
> On 18-04-06 09:02:29, Jens Harbott wrote:
> > 2018-04-05 19:26 GMT+00:00 Matthew Thode :
> > > On 18-04-05 20:11:04, Graham Hayes wrote:
> > >> On 05/04/18 16:47, Matthew Thode wrote:
> > >> > eventlet-0.22.1 has been out for a while now, we should try and use it.
> > >> > Going to be fun times.
> > >> >
> > >> > I have a review projects can depend upon if they wish to test.
> > >> > https://review.openstack.org/533021
> > >>
> > >> It looks like we may have an issue with oslo.service -
> > >> https://review.openstack.org/#/c/559144/ is failing gates.
> > >>
> > >> Also - what is the dance for this to get merged? It doesn't look like we
> > >> can merge this while oslo.service has the old requirement restrictions.
> > >>
> > >
> > > The dance is as follows.
> > >
> > > 0. provide review for projects to test new eventlet version
> > >projects using eventlet should make backwards compat code changes at
> > >this time.
> > 
> > But this step is currently failing. Keystone doesn't even start when
> > eventlet-0.22.1 is installed, because loading oslo.service fails with
> > its pkg definition still requiring the capped eventlet:
> > 
> > http://logs.openstack.org/21/533021/4/check/legacy-requirements-integration-dsvm/7f7c3a8/logs/screen-keystone.txt.gz#_Apr_05_16_11_27_748482
> > 
> > So it looks like we need to have an uncapped release of oslo.service
> > before we can proceed here.
> > 
> 
> Ya, we may have to uncap and rely on upper-constraints to keep openstack
> gate from falling over.  The new steps would be the following:

My understanding of our use of upper constraints was that this should (almost) 
always be the case for (almost) all dependencies.  We should rely on 
constraints instead of requirements caps. Capping libs like pbr or eventlet, or 
any other that is used globally, is incredibly difficult to work with when you 
want to uncap them, because you have to coordinate globally. Instead, if using 
constraints, you just bump the constraint and are done.
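The difference can be sketched with a toy resolver (tuples stand in for version numbers; this is an illustration, not pip's actual algorithm):

```python
# Available releases of a hypothetical dependency, newest last.
AVAILABLE = [(0, 20, 0), (0, 21, 0), (0, 22, 1)]

def pick(available, cap=None, constraint=None):
    """Newest version below an in-package cap and at or below a constraint."""
    candidates = [v for v in available
                  if (cap is None or v < cap)
                  and (constraint is None or v <= constraint)]
    return max(candidates)

# A cap baked into every project's own requirements (e.g. eventlet<0.21)
# pins everyone; nothing can even test 0.22.1 until *every* project
# ships new metadata -- a global coordination problem.
assert pick(AVAILABLE, cap=(0, 21, 0)) == (0, 20, 0)

# With uncapped requirements plus one central upper-constraints entry,
# a single edit to the constraint moves the whole gate forward:
assert pick(AVAILABLE, constraint=(0, 21, 0)) == (0, 21, 0)
assert pick(AVAILABLE, constraint=(0, 22, 1)) == (0, 22, 1)
```

The point is that the constraint lives in one place, while caps are scattered across every consumer's package metadata.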

It is probably worthwhile examining whether we have any other deps in this 
situation and proactively addressing them, rather than waiting until we really 
need to fix them.

Clark



Re: [openstack-dev] [ALL][PTLs] [Community goal] Toggle the debug option at runtime

2018-04-06 Thread Michael Johnson
Yeah, neutron-lbaas runs in the context of the neutron service (it is
a neutron extension), so would be covered by neutron completing the
goal.

Michael

On Fri, Apr 6, 2018 at 3:37 AM, Sławek Kapłoński  wrote:
> Hi,
>
> Thanks Akihiro for the help. I added the „neutron-dynamic-routing” task to this story 
> and I will push a patch for it soon.
> There are still so many things that I need to learn about OpenStack and 
> Neutron :)
>
> —
> Best regards
> Slawek Kaplonski
> sla...@kaplonski.pl
>
>
>
>
>> Wiadomość napisana przez Akihiro Motoki  w dniu 
>> 06.04.2018, o godz. 11:34:
>>
>>
>> Hi Slawek,
>>
>> 2018-04-06 17:38 GMT+09:00 Sławek Kapłoński :
>> Hi,
>>
>> One more question about the implementation of this goal. Should we take care 
>> of (and add to the story board [1]) projects like:
>>
>> In my understanding, tasks in the storyboard story are prepared per project 
>> team listed in the governance.
>> IMHO, repositories which belong to a project team should be handled as a 
>> single task.
>>
>> The situations vary across repositories.
>>
>>
>> openstack/neutron-lbaas
>>
>> This should be covered by octavia team.
>>
>> openstack/networking-cisco
>> openstack/networking-dpm
>> openstack/networking-infoblox
>> openstack/networking-l2gw
>> openstack/networking-lagopus
>>
>> The above repos are not official repos.
>> Maintainers of each repo can follow the community goal, but there is no need 
>> for them to be tracked by the neutron team.
>>
>> openstack/neutron-dynamic-routing
>>
>> This repo is part of the neutron team. We, the neutron team need to cover 
>> this.
>>
>> FYI: The official repositories covered by the neutron team are listed here.
>> https://governance.openstack.org/tc/reference/projects/neutron.html
>>
>> Thanks,
>> Akihiro
>>
>>
>> These look like they should probably also be changed in some way. Or maybe the 
>> list of affected projects in [1] is „closed”, and if a project is not there it 
>> shouldn't be changed to accomplish this community goal?
>>
>> [1] https://storyboard.openstack.org/#!/story/2001545
>>
>> —
>> Best regards
>> Slawek Kaplonski
>> sla...@kaplonski.pl
>>
>>
>>
>>
>> > Wiadomość napisana przez ChangBo Guo  w dniu 
>> > 26.03.2018, o godz. 14:15:
>> >
>> >
>> > 2018-03-22 16:12 GMT+08:00 Sławomir Kapłoński :
>> > Hi,
>> >
>> > I took care of implementation of [1] in Neutron and I have couple 
>> > questions to about this goal.
>> >
>> > 1. Should we only change "restart_method" to mutate, as described in [2]?
>> > I already did something like that in [3] - is that what is expected?
>> >
>> >  Yes, that's the only thing.  We need to test whether it works.
>> >
>> > 2. How can I check that this change is fine and that config options are 
>> > actually mutable? For now, when I change any config option for any of the 
>> > neutron agents and send SIGHUP to it, it is in fact "restarted" and the 
>> > config is reloaded even with the old restart method.
>> >
>> > Good question; we indeed considered this when we proposed the 
>> > goal.  But it seems difficult to test that automatically in consuming 
>> > projects like Neutron.
>> >
>> > 3. Should we also add automatic tests for such a change? Any examples of 
>> > such tests in other projects, maybe?
>> >  There are no examples of tests for now; we only have some unit tests in 
>> > oslo.service.
>> >
>> > [1] 
>> > https://governance.openstack.org/tc/goals/rocky/enable-mutable-configuration.html
>> > [2] https://docs.openstack.org/oslo.config/latest/reference/mutable.html
>> > [3] https://review.openstack.org/#/c/554259/
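The "mutate" behaviour being discussed can be sketched with nothing but the stdlib: on SIGHUP, re-read the config file and update mutable options in place instead of restarting the process. This is only an illustration of the idea in [2], not oslo.service's actual implementation, and the file format and option names are made up:

```python
import os
import signal
import tempfile

CONFIG = {"debug": "False"}   # current in-process options
MUTABLE = {"debug"}           # options allowed to change at runtime

def load(path):
    """Parse a trivial key=value config file."""
    opts = {}
    with open(path) as f:
        for line in f:
            if "=" in line:
                key, value = line.strip().split("=", 1)
                opts[key] = value
    return opts

def make_handler(path):
    def on_sighup(signum, frame):
        new = load(path)
        for key in MUTABLE & set(new):
            CONFIG[key] = new[key]   # mutate in place; no restart
    return on_sighup

# Simulate an operator editing the file and sending SIGHUP to the service.
with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as f:
    f.write("debug=True\n")
    path = f.name

signal.signal(signal.SIGHUP, make_handler(path))
os.kill(os.getpid(), signal.SIGHUP)

assert CONFIG["debug"] == "True"
os.remove(path)
```

This also shows why question 2 above is subtle: with the old restart method a SIGHUP reloads the config too, just via a full restart, so from the outside the observable difference is only that in-flight state survives the mutate path.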
>> >
>> > —
>> > Best regards
>> > Slawek Kaplonski
>> > sla...@kaplonski.pl
>> >
>> >
>> >
>> >
>> >
>> > --
>> > ChangBo Guo(gcb)
>> > Community Director @EasyStack
>>
>>
>>
>>
>

Re: [openstack-dev] [zun] zun-api error

2018-04-06 Thread Murali B
Hi Hongbin Lu,

Thank you. After changing the endpoint it worked. Actually, I was also using
the magnum service. I had registered the service type as "container" for
magnum, which is why requests were going to port 9511 instead of 9517.
After I corrected it, everything worked.

Thanks
-Murali

On Fri, Apr 6, 2018 at 8:45 AM, Hongbin Lu  wrote:

> Hi Murali,
>
> It looks like your zunclient was sending API requests to
> http://10.11.142.2:9511/v1/services , which doesn't seem to be the right
> API endpoint. According to the Keystone endpoint you configured, the API
> endpoint of Zun should be http://10.11.142.2:9517/v1/services (it is on
> port 9517 instead of 9511).
>
> What confused the zunclient is the endpoint's type you configured in
> Keystone. Zun expects an endpoint of type "container" but it was configured
> to be "zun-container" in your setup. I believe the error will be resolved
> if you can update the Zun endpoint from type "zun-container" to type
> "container". Please give it a try and let us know.
>
> Best regards,
> Hongbin
>
> On Thu, Apr 5, 2018 at 7:27 PM, Murali B  wrote:
>
>> Hi Hongbin,
>>
>> Thank you for your help
>>
>> As per the our discussion here is the output for my current api on pike.
>> I am not sure which version of zun client  client  I should use for pike
>>
>> root@cluster3-2:~/python-zunclient# zun service-list
>> ERROR: Not Acceptable (HTTP 406) (Request-ID:
>> req-be69266e-b641-44b9-9739-0c2d050f18b3)
>> root@cluster3-2:~/python-zunclient# zun --debug service-list
>> DEBUG (extension:180) found extension EntryPoint.parse('vitrage-keycloak
>> = vitrageclient.auth:VitrageKeycloakLoader')
>> DEBUG (extension:180) found extension EntryPoint.parse('vitrage-noauth =
>> vitrageclient.auth:VitrageNoAuthLoader')
>> DEBUG (extension:180) found extension EntryPoint.parse('noauth =
>> cinderclient.contrib.noauth:CinderNoAuthLoader')
>> DEBUG (extension:180) found extension EntryPoint.parse('v2token =
>> keystoneauth1.loading._plugins.identity.v2:Token')
>> DEBUG (extension:180) found extension EntryPoint.parse('none =
>> keystoneauth1.loading._plugins.noauth:NoAuth')
>> DEBUG (extension:180) found extension EntryPoint.parse('v3oauth1 =
>> keystoneauth1.extras.oauth1._loading:V3OAuth1')
>> DEBUG (extension:180) found extension EntryPoint.parse('admin_token =
>> keystoneauth1.loading._plugins.admin_token:AdminToken')
>> DEBUG (extension:180) found extension EntryPoint.parse('v3oidcauthcode =
>> keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAuthorizationCode')
>> DEBUG (extension:180) found extension EntryPoint.parse('v2password =
>> keystoneauth1.loading._plugins.identity.v2:Password')
>> DEBUG (extension:180) found extension EntryPoint.parse('v3samlpassword =
>> keystoneauth1.extras._saml2._loading:Saml2Password')
>> DEBUG (extension:180) found extension EntryPoint.parse('v3password =
>> keystoneauth1.loading._plugins.identity.v3:Password')
>> DEBUG (extension:180) found extension EntryPoint.parse('v3adfspassword =
>> keystoneauth1.extras._saml2._loading:ADFSPassword')
>> DEBUG (extension:180) found extension EntryPoint.parse('v3oidcaccesstoken
>> = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAccessToken')
>> DEBUG (extension:180) found extension EntryPoint.parse('v3oidcpassword =
>> keystoneauth1.loading._plugins.identity.v3:OpenIDConnectPassword')
>> DEBUG (extension:180) found extension EntryPoint.parse('v3kerberos =
>> keystoneauth1.extras.kerberos._loading:Kerberos')
>> DEBUG (extension:180) found extension EntryPoint.parse('token =
>> keystoneauth1.loading._plugins.identity.generic:Token')
>> DEBUG (extension:180) found extension EntryPoint.parse('v3oidcclientcredentials
>> = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectClientCredentials')
>> DEBUG (extension:180) found extension EntryPoint.parse('v3tokenlessauth
>> = keystoneauth1.loading._plugins.identity.v3:TokenlessAuth')
>> DEBUG (extension:180) found extension EntryPoint.parse('v3token =
>> keystoneauth1.loading._plugins.identity.v3:Token')
>> DEBUG (extension:180) found extension EntryPoint.parse('v3totp =
>> keystoneauth1.loading._plugins.identity.v3:TOTP')
>> DEBUG (extension:180) found extension 
>> EntryPoint.parse('v3applicationcredential
>> = keystoneauth1.loading._plugins.identity.v3:ApplicationCredential')
>> DEBUG (extension:180) found extension EntryPoint.parse('password =
>> keystoneauth1.loading._plugins.identity.generic:Password')
>> DEBUG (extension:180) found extension EntryPoint.parse('v3fedkerb =
>> keystoneauth1.extras.kerberos._loading:MappedKerberos')
>> DEBUG (extension:180) found extension EntryPoint.parse('v1password =
>> swiftclient.authv1:PasswordLoader')
>> DEBUG (extension:180) found extension EntryPoint.parse('token_endpoint =
>> openstackclient.api.auth_plugin:TokenEndpoint')
>> DEBUG (extension:180) found extension EntryPoint.parse('gnocchi-basic =
>> gnocchiclient.auth:GnocchiBasicLoader')
>> DEBUG (extension:180)