Re: [openstack-dev] [nova] [placement] resource providers update 43

2017-12-01 Thread Matt Riedemann

On 12/1/2017 10:42 AM, Chris Dent wrote:


December? Wherever does the time go? This is resource providers and
placement update 43. The first one of these was more than a year ago

 
http://lists.openstack.org/pipermail/openstack-dev/2016-November/107171.html 



I like to think they've been pretty useful. I know they've helped me
keep track of stuff, and have a bit of focus. I'll carry on doing them
but I'm starting to worry that they are getting too big, both to read
and to create, and that this means something, not sure what, for the
volume of work we're trying to accomplish. There's so much work going
on all the time related to placement that writing it down in one place is
rather challenging, so surely creating and reviewing it all is also
challenging? And that's not taking into consideration the vast volume
of all the other stuff within the nova umbrella. Not sure what to do
about it, but something to start thinking about.



Thanks for continuing to do these. I don't read every one, but when I 
do, like tonight (read the whole damn thing), I end up clicking on a lot 
of the review links and going through a lot of them, which moves the ball 
forward on some simple but important patches.


--

Thanks,

Matt



Re: [openstack-dev] [TripleO] Proposing Ronelle Landy for Tripleo-Quickstart/Extras/CI core

2017-12-01 Thread Matt Young
+1


On Thu, Nov 30, 2017 at 10:34 AM, Dan Prince  wrote:

> +1
>
> On Wed, Nov 29, 2017 at 2:34 PM, John Trowbridge  wrote:
>
>> I would like to propose that Ronelle be given +2 for the above repos. She has
>> been a solid contributor to tripleo-quickstart and extras almost since the
>> beginning. She has solid review numbers, but more importantly has always
>> done quality reviews. She has also been working in the very intense rover
>> role on the CI squad during the past CI sprint, and has done very well in that
>> role.
>>


Re: [openstack-dev] [Openstack-operators] MessagingTimeout in block live-migration due to long image fetch operation

2017-12-01 Thread Matt Riedemann

On 11/28/2017 9:13 AM, Gustavo Randich wrote:

(running Mitaka)

When doing block live migration, if the image / backing file is not 
present on the destination host, the pre-live-migration step sometimes 
fails after 60 seconds as shown below. Retrying the migration to the same 
destination host succeeds.


It seems that an rpc_response_timeout of 60 seconds is not enough for 
this scenario, in which fetching the image takes about 90 seconds. We don't 
want to increase rpc_response_timeout to, say, 120 seconds only for 
this reason (because for other kinds of errors we prefer to fail fast).


Given that migrations are usually long, shouldn't this operation be 
under the scope of a configurable timeout such as 
live_migration_progress_timeout or live_migration_completion_timeout, 
which overrides the default RPC timeout?


I think we've talked about adding a config option or somehow doing rpc 
timeouts differently for operations that we know are prone to timeouts, 
so I don't think people would be against a config option for this. I 
know there is at least one place in nova where we specify an rpc 
response timeout which is not the default.
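
For illustration only (this is not the actual nova code path; the
transport setup and method name here are placeholders), overriding the
response timeout for a single RPC call with oslo.messaging looks roughly
like this:

    import oslo_messaging as messaging
    from oslo_config import cfg

    # Placeholder transport/target setup; real code would reuse the
    # existing RPC client rather than building one like this.
    transport = messaging.get_rpc_transport(cfg.CONF)
    target = messaging.Target(topic='compute', version='4.0')
    client = messaging.RPCClient(transport, target)

    def pre_live_migration(ctxt, host, instance, timeout=120):
        # prepare(timeout=...) overrides rpc_response_timeout for just
        # this call, instead of raising the timeout globally.
        cctxt = client.prepare(server=host, timeout=timeout)
        return cctxt.call(ctxt, 'pre_live_migration', instance=instance)

Something along those lines, keyed off a new config option, would let the
pre-live-migration call wait longer without making every other RPC slower
to fail.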


--

Thanks,

Matt



[openstack-dev] [glance] priorities for the week (12/01-12/07)

2017-12-01 Thread Brian Rosmaita
Hello Glancers,

As discussed at yesterday's Glance meeting, the priority for this week
is getting ready for the release of the Q-2 milestone, so:

1. the scrubber refactor
2. bugs scheduled for Q-2
3. enhanced tests for interoperable image import ("IIR")

I've put a list of patches and their current status on an etherpad:
  https://etherpad.openstack.org/p/glance-queens-Q2

Please keep it updated as you work through the items.  Several of the
bugs impact the same file, so there may be a need to rebase and
re-approve a few of these patches.

All changes must be approved by 12:00 UTC on Wednesday 6 December to
make it into the Q-2 milestone release.

cheers,
brian



[openstack-dev] [all] [tc] Tempest Plugin Split Goal Queens-2 Update

2017-12-01 Thread Chandan kumar
Hello,

As Queens Milestone 2 approaches its end, here is the second iteration
of updates on the Queens Tempest Plugin Split community goal [1].

**Not Started**
Congress
ec2-api
freezer
mistral
monasca
senlin
tacker
Telemetry
Trove
Vitrage

** In Progress **
Cinder
Heat
Ironic
magnum
manila
Neutron
murano
networking-l2gw
octavia

** Completed **
Barbican
CloudKitty
Designate
Horizon
Keystone
Kuryr
Sahara
Solum
Tripleo
Watcher
Winstackers
Zaqar
Zun

Here is the list of open reviews:
https://review.openstack.org/#/q/topic:goal-split-tempest-plugins+status:open

Here is the detailed report on Tempest Plugin split goal status for
different projects:
https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html#project-teams

If you are willing to help with the projects in the **Not Started** list, that would be a great help.

Links:
[1]. https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html

Thanks,

Chandan Kumar



Re: [openstack-dev] [nova] Super fun unshelve image_ref bugs

2017-12-01 Thread Dean Troyer
On Fri, Dec 1, 2017 at 2:47 PM, Matt Riedemann  wrote:
> Andrew Laski also mentioned in IRC that we didn't replace the original
> instance.image_ref with the shelved image id because the shelve operation
> should be transparent to the end user, they have the same image (not
> really), same volumes, same IPs, etc once they unshelve. And he mentioned
> that if you rebuild, for example, you'd then rebuild to the original image
> instead of the shelved snapshot image.

I was wondering about exactly this.  As a cloud user I would expect
rebuild without explicitly specifying an image to give me the same
thing it did on the initial build.  I suppose it would depend on whether
shelve/unshelve is supposed to be transparent, like Andrew mentioned.
Absent reading otherwise, I would assume it is.

dt

-- 

Dean Troyer
dtro...@gmail.com



Re: [openstack-dev] [nova] Super fun unshelve image_ref bugs

2017-12-01 Thread Matt Riedemann

On 12/1/2017 1:25 PM, Mathieu Gagné wrote:

Hi,

On Fri, Dec 1, 2017 at 12:24 PM, Matt Riedemann  wrote:


I think we can assert as follows:

2. If we're going to point the instance at an image_ref, we shouldn't delete
that image. I don't have a good reason why besides deleting things which
nova has a reference to in an active instance generally causes problems
(volumes, ports, etc).



Not 100% related to your initial problem but...

Does it mean that you still shouldn't delete any images in Glance
until you are 100% sure that NO instances use them?
If you shouldn't ever delete an image, are there any plans to address
it in the future? Or is it a known "limitation" that people just have
to live with?
If you can delete an image used by one or more instances, how is
shelving affected or different? Should it not be affected?



I'm not sure, honestly. I know a snapshot image has a reference to the 
instance uuid that created it, but I don't know if there are any 
restrictions on the glance side against deleting those images before the 
image is fully uploaded and active, or something like that.
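
As a practical aside for anyone worried about that, one way to at least
check whether any instances still reference a given image before deleting
it is to list servers filtered by image. A rough sketch with
python-novaclient (the auth details and image id are placeholders):

    from keystoneauth1 import loading, session
    from novaclient import client as nova_client

    # Placeholder credentials; use your own cloud's auth settings.
    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://keystone.example.com:5000/v3',
        username='admin', password='secret', project_name='admin',
        user_domain_id='default', project_domain_id='default')
    nova = nova_client.Client('2.1', session=session.Session(auth=auth))

    image_id = 'IMAGE-UUID-PLACEHOLDER'
    # The 'image' filter matches on instance.image_ref; all_tenants
    # needs admin rights.
    servers = nova.servers.list(search_opts={'image': image_id,
                                             'all_tenants': 1})
    print('%d instance(s) still reference image %s'
          % (len(servers), image_id))

Of course that filter only looks at instance.image_ref, which is exactly
the field this thread says can be misleading after an unshelve, so it is
not a complete guarantee.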


Andrew Laski also mentioned in IRC that we didn't replace the original 
instance.image_ref with the shelved image id because the shelve 
operation should be transparent to the end user, they have the same 
image (not really), same volumes, same IPs, etc once they unshelve. And 
he mentioned that if you rebuild, for example, you'd then rebuild to the 
original image instead of the shelved snapshot image.


I'm not sure how much I agree with that rebuild argument. I understand 
it, but I'm not sure I agree with it. I think it's much easier to just 
track things for what they are, which means that if you create a guest 
from a given image id, we track that in the instances table and don't lie 
about it being something else.


--

Thanks,

Matt



[openstack-dev] [all] [policy] [rbac] RBAC status & progress

2017-12-01 Thread Lance Bragstad
Hi all,

I'm following up on a thread we started earlier this year with proposals
for fixing RBAC [0]. Just wanted to give a quick update that the
specification has merged [1] and the implementation is underway [2]. I
will have a few more patches up shortly to handle the token scoping bits.

Adding the operator list so that they can start playing with the
proposed implementation if they want to.

Let me know if you have any questions,

Lance

[0] http://lists.openstack.org/pipermail/openstack-dev/2017-June/118047.html
[1]
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/queens/system-scope.html
[2]
https://review.openstack.org/#/q/status:open+branch:master+topic:bp/system-scope
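
For anyone who wants to poke at it early, the spec [1] describes scoping a
token to the system rather than to a project. A rough sketch of what such
an authentication request is expected to look like (the endpoint and
credentials are placeholders, and the exact payload may still shift as the
patches in [2] land):

    import requests

    KEYSTONE = 'http://keystone.example.com:5000/v3'  # placeholder

    auth_request = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": "admin",
                        "domain": {"id": "default"},
                        "password": "secret",
                    }
                },
            },
            # System scope per the spec, instead of a project scope.
            "scope": {"system": {"all": True}},
        }
    }

    resp = requests.post(KEYSTONE + '/auth/tokens', json=auth_request)
    print(resp.status_code, resp.headers.get('X-Subject-Token'))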






Re: [openstack-dev] [nova] Super fun unshelve image_ref bugs

2017-12-01 Thread Mathieu Gagné
Hi,

On Fri, Dec 1, 2017 at 12:24 PM, Matt Riedemann  wrote:
>
> I think we can assert as follows:
>
> 2. If we're going to point the instance at an image_ref, we shouldn't delete
> that image. I don't have a good reason why besides deleting things which
> nova has a reference to in an active instance generally causes problems
> (volumes, ports, etc).
>

Not 100% related to your initial problem but...

Does it mean that you still shouldn't delete any images in Glance
until you are 100% sure that NO instances use them?
If you shouldn't ever delete an image, are there any plans to address
it in the future? Or is it a known "limitation" that people just have
to live with?
If you can delete an image used by one or more instances, how is
shelving affected or different? Should it not be affected?

--
Mathieu



[openstack-dev] [os-brick][cinder][nova] Reintroduce the StorPool driver

2017-12-01 Thread Peter Penchev
We have just restored and submitted new patchsets for the StorPool
block storage driver in the three components:
- os-brick: https://review.openstack.org/#/c/192639/
- cinder: https://review.openstack.org/#/c/220155/
- nova: https://review.openstack.org/#/c/140733/

Now, while we do realize that the milestone 1 deadline is almost upon
us, the situation is somewhat complicated: there is a large downstream
provider of managed OpenStack installations that works with a customer
that wants StorPool installed.  The managed OpenStack provider would
strongly prefer that the StorPool drivers be upstreamed before they
include them in their own distribution, so here we are with a bit of
unfortunate timing indeed.

The Cinder driver is virtually the same as the one submitted
previously (and almost the same as the one that was included and then
removed from Cinder due to our failure to resolve some problems in our
CI system).  The Nova and os-brick drivers are also virtually the same
as the ones submitted previously.  Part of the reason that we are
submitting this so close to the milestone 1 deadline is our hope that,
at least for Cinder, this may still count as a reintroduction of a
driver that was once present and not necessarily treated as a new
driver.

Our CI system is almost ready and will be up in a matter of days if
not hours (and, yes, we shall need to talk to the third-party CI list
about that, too).

Thanks in advance to everyone for their time and their work on
OpenStack, and here's hoping that there is at least a chance for the
StorPool driver to make it into the Queens release.

Best regards,
Peter



Re: [openstack-dev] [all][api] POST /api-sig/news

2017-12-01 Thread Chris Dent

On Fri, 1 Dec 2017, Gilles Dubreuil wrote:


Hi Chris,

Thank you for those precious details.

I just added https://review.openstack.org/#/c/524467/ to augment the existing 
guidelines [2] and to get started with the API Schema (consumption) topic.


Cool, thanks for doing that. I suspect comments should start rolling in
next week.

It would be great if that topic could be added to the agenda; can you please 
help?


Feel free to add any topics that you (or anyone else) wants to the
API-SIG agenda: https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda

If you don't have and can't get editing rights on the wiki, let me
know and I can make the addition.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent  tw: @anticdent


[openstack-dev] [nova] Super fun unshelve image_ref bugs

2017-12-01 Thread Matt Riedemann

I came across this bug during triage today:

https://bugs.launchpad.net/nova/+bug/1732428

It essentially says that unshelving an instance and then resizing that 
instance later, depending on the type of image backend, can fail.


It's pointed out that when we complete the unshelve procedure, we set 
the instance.image_ref back to the original image_ref used to create the 
instance, rather than leaving it pointing at the shelved instance snapshot 
image id.


I thought, "well that's crazy, the instance isn't backed by the original 
image anymore, it's backed by the snapshot image, so instance.image_ref 
should point at the snapshot image id now." But lo, in true 
shelve-tastic form, it turns out that would cause more bugs.


Because after we successfully spawn the guest during unshelve, we delete 
the snapshot image:


https://github.com/openstack/nova/blob/b6a245f0425a07be3871a976952646d2bdd44533/nova/compute/manager.py#L4797

So at this point you've unshelved your instance, but the 
instance.image_ref points at image A even though the guest was really 
created from image B.


Does anyone have ANY idea why we do this? Even if we delete the snapshot 
image, why would we change the image_ref back to the original image?


I think we can assert as follows:

1. After you've unshelved an instance, its image_ref (unless 
volume-backed, because that's different crazy, not discussed here) 
should point at the image used to create the guest. Agree?


2. If we're going to point the instance at an image_ref, we shouldn't 
delete that image. I don't have a good reason why besides deleting 
things which nova has a reference to in an active instance generally 
causes problems (volumes, ports, etc).


Am I missing something that everyone else knew about way back in 2013?

--

Thanks,

Matt



[openstack-dev] [keystone] office hours report 2017-11-28

2017-12-01 Thread Lance Bragstad
Hey all,

Here is the weekly report for what was accomplished during office hours
this week. Full logs are available [0].

Bug #1734871 in OpenStack Identity (keystone): "overcloud deployment
fails on mistral action DeployStackAction"
https://bugs.launchpad.net/keystone/+bug/1734871
Triaged, reviewed, and merged a fix

Bug #1524030 in OpenStack Identity (keystone): "Reduce revocation events
for performance improvement"
https://bugs.launchpad.net/keystone/+bug/1524030
Rebased and merged the last fix needed to close this issue

Bug #1727099 in OpenStack Identity (keystone): "Change password error
history message count is wrong"
https://bugs.launchpad.net/keystone/+bug/1727099
Reviewed and merged a fix to close this issue

Bug #1734549 in OpenStack Identity (keystone): "keystone-manage db_sync
docs missing release"
https://bugs.launchpad.net/keystone/+bug/1734549
Reviewed and merged fix to close this issue

Bug #1662623 in OpenStack Identity (keystone): "Testing keystone docs
are outdated"
https://bugs.launchpad.net/keystone/+bug/1662623
Patch in review

Bug #1733836 in OpenStack Identity (keystone): "Support LDAP server
discovery via DNS SRV records"
https://bugs.launchpad.net/keystone/+bug/1733836
Discussed and triaged

Bug #1724686 in OpenStack Identity (keystone): "authentication code
hangs when there are three or more admin keystone endpoints"
https://bugs.launchpad.net/keystone/+bug/1724686
Discussed and asked for more information to recreate

Bug #1734244 in OpenStack Identity (keystone): "keystone raise 500 error
when create trust with invalid role key  "
https://bugs.launchpad.net/keystone/+bug/1734244
Reviewed and merged fix to close this issue

[0]
http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2017-11-28.log.html#t2017-11-28T19:07:08






Re: [openstack-dev] [Neutron][ovn] networking-ovn core team update

2017-12-01 Thread Anil Venkata
Congrats Daniel

On 01-Dec-2017 10:22 PM, "Jakub Libosvar"  wrote:

> Congratulations! Very well deserved! :)
>
> On 01/12/2017 17:45, Lucas Alvares Gomes wrote:
> > Hi all,
> >
> > I would like to welcome Daniel Alvarez to the networking-ovn core team!
> >
> > Daniel has been contributing to the project for quite some time already
> > and helping *a lot* with reviews and code.
> >
> > Welcome onboard man!
> >
> > Cheers,
> > Lucas
> >


Re: [openstack-dev] [Neutron][ovn] networking-ovn core team update

2017-12-01 Thread Jakub Libosvar
Congratulations! Very well deserved! :)

On 01/12/2017 17:45, Lucas Alvares Gomes wrote:
> Hi all,
> 
> I would like to welcome Daniel Alvarez to the networking-ovn core team!
> 
> Daniel has been contributing to the project for quite some time already
> and helping *a lot* with reviews and code.
> 
> Welcome onboard man!
> 
> Cheers,
> Lucas
> 


Re: [openstack-dev] [Neutron][ovn] networking-ovn core team update

2017-12-01 Thread Daniel Alvarez Sanchez
Thanks a lot guys!
It's a pleasure to work with you all :)

Cheers,
Daniel

On Fri, Dec 1, 2017 at 5:48 PM, Miguel Angel Ajo Pelayo  wrote:

> Welcome Daniel! :)
>
> On Fri, Dec 1, 2017 at 5:45 PM, Lucas Alvares Gomes wrote:
>
>> Hi all,
>>
>> I would like to welcome Daniel Alvarez to the networking-ovn core team!
>>
>> Daniel has been contributing to the project for quite some time already
>> and helping *a lot* with reviews and code.
>>
>> Welcome onboard man!
>>
>> Cheers,
>> Lucas
>>


Re: [openstack-dev] [Neutron][ovn] networking-ovn core team update

2017-12-01 Thread Miguel Angel Ajo Pelayo
Welcome Daniel! :)

On Fri, Dec 1, 2017 at 5:45 PM, Lucas Alvares Gomes 
wrote:

> Hi all,
>
> I would like to welcome Daniel Alvarez to the networking-ovn core team!
>
> Daniel has been contributing to the project for quite some time already
> and helping *a lot* with reviews and code.
>
> Welcome onboard man!
>
> Cheers,
> Lucas
>


[openstack-dev] [Neutron][ovn] networking-ovn core team update

2017-12-01 Thread Lucas Alvares Gomes
Hi all,

I would like to welcome Daniel Alvarez to the networking-ovn core team!

Daniel has been contributing to the project for quite some time already
and helping *a lot* with reviews and code.

Welcome onboard man!

Cheers,
Lucas



[openstack-dev] [nova] [placement] resource providers update 43

2017-12-01 Thread Chris Dent


December? Wherever does the time go? This is resource providers and
placement update 43. The first one of these was more than a year ago

http://lists.openstack.org/pipermail/openstack-dev/2016-November/107171.html

I like to think they've been pretty useful. I know they've helped me
keep track of stuff, and have a bit of focus. I'll carry on doing them
but I'm starting to worry that they are getting too big, both to read
and to create, and that this means something, not sure what, for the
volume of work we're trying to accomplish. There's so much work going
on all the time related to placement that writing it down in one place is
rather challenging, so surely creating and reviewing it all is also
challenging? And that's not taking into consideration the vast volume
of all the other stuff within the nova umbrella. Not sure what to do
about it, but something to start thinking about.

# Most Important

The vast and intertwingled mass which is nested resource providers, a
variety of database cleanups, traits handling, assorted bugfixes and
test additions is probably the best place to focus some attention.
Many things hinge on that work. One entry point is the topic for the
n-r-p blueprint:

https://review.openstack.org/#/q/topic:bp/nested-resource-providers

# What's Changed

Microversions 1.12 and 1.13 of the placement API have merged. These add
a new dict-like representation for PUTting
/allocations/{consumer_uuid} (for sake of symmetry with GET) and allow
POSTting multiple allocations (including clearing them) to
/allocations.

https://docs.openstack.org/nova/latest/user/placement.html#put-dict-format-to-allocations-consumer-uuid
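
As a sketch (the UUIDs below are placeholders), the 1.12 dict format for
PUT /allocations/{consumer_uuid} and the 1.13 multi-consumer POST
/allocations body look roughly like this:

    # Microversion 1.12: PUT /allocations/{consumer_uuid}
    put_body = {
        "allocations": {
            "4e061c03-611e-4caa-bf26-999dcff4284e": {  # resource provider
                "resources": {"VCPU": 1, "MEMORY_MB": 1024},
            },
        },
        "project_id": "PROJECT-UUID",
        "user_id": "USER-UUID",
    }

    # Microversion 1.13: POST /allocations, keyed by consumer uuid. An
    # empty "allocations" dict clears that consumer's allocations.
    post_body = {
        "CONSUMER-UUID-1": {
            "allocations": {
                "4e061c03-611e-4caa-bf26-999dcff4284e": {
                    "resources": {"DISK_GB": 20},
                },
            },
            "project_id": "PROJECT-UUID",
            "user_id": "USER-UUID",
        },
        "CONSUMER-UUID-2": {
            "allocations": {},
            "project_id": "PROJECT-UUID",
            "user_id": "USER-UUID",
        },
    }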

The backend for nested resource providers is in place, but the HTTP
API for that has not yet been exposed (it's very close). This, along
with other changes, has meant some changes to all the database
handling, so there's some potential for bugs to be exposed; be on the
lookout.

Matt learned that when some people are migrating to newer versions of
Nova that require placement, their internal policy handling is not
aligning with the very simple (intentionally so) way that placement
handles policy. If people don't have an 'admin' role, placement won't
work, and patches like this are required:

https://gist.github.com/mgagne/b43c1e085c1f1d50bebc054a7d387688

To start thinking about dealing with this, Matt's posted a
policy-in-code change that generates a sample policy:

https://review.openstack.org/#/c/524425/

One of the goals of placement has been to minimize config files and
configurability, but apparently this is something that is going to
need some degree of flexibility.

# Help Wanted

A takeaway from summit is that we need, where possible, benchmarking
info from people who are making the transition from old methods of
scheduling to the newer allocation_candidate driven modes.  While
detailed numbers will be most useful, even anecdotal summaries of
"woot it's way better" or "hmmm, no it seems worse" are useful.

# Docs

There's an effort in progress to enhance the placement docs:

 https://review.openstack.org/#/q/topic:bp/placement-doc-enhancement-queens

This is great to see. Docs need continuous refactoring; they are
pretty much impossible to get perfect in one go. Additional
docs-related changes:

* https://review.openstack.org/#/c/512215/
  Add create inventories doc for placement

* https://review.openstack.org/#/c/523007/
  Add x-openstack-request-id in API ref

* https://review.openstack.org/#/c/521502/
  Add aggregate link note in API ref

* https://review.openstack.org/#/c/521541/
  Add 'Location' parameters in API ref

* https://review.openstack.org/#/c/511342/
  add API reference for create inventory

## Nested Providers

As mentioned above there's a lot of code on this topic

 https://review.openstack.org/#/q/topic:bp/nested-resource-providers

on both sides of the HTTP divide.

## Alternate Hosts

Having the scheduler request and use alternate hosts is getting close:

   https://review.openstack.org/#/q/topic:bp/return-alternate-hosts

## Migration allocations

Do allocation "doubling" using the migration uuid for the consumer for
one half. This is also very close:

 https://review.openstack.org/#/c/507638/

The concept of migration allocations is what drove the work to enable
the POST /allocations handling now at microversion 1.13, so we have
the option to start using that power. Dan helpfully left comments in
the code to indicate where it could be done.

## Misc Traits, Shared, Etc Cleanups

There's a stack of code that's not attached to a blueprint, starting
at

   https://review.openstack.org/#/c/517119/

that fixes up a lot of things related to traits, sharing providers,
test additions and fixes to those tests. At the moment they are grouped
under a bug topic:

https://review.openstack.org/#/q/topic:bug/1702420

But that is not the only bug they are addressing. Some of the above
probably appear in the list below too.

# Other

This list starts with in 

Re: [openstack-dev] [OpenStack-Dev] [Nova][Neutron][Horizon][Cinder][Keystone][Glance][Ironic][Swift] Fault Classification Input Request

2017-12-01 Thread Matt Riedemann

On 11/30/2017 6:05 PM, Nematollah Bidokhti wrote:

Hi,

Our [Fault-Genes WG] has been working on defining the fault 
classifications for key OpenStack projects in an effort to support 
OpenStack fault management & self-healing.


We have been using machine learning (unsupervised data) as a method to 
look into all bugs and issues submitted by the community, and it has been 
very challenging to have the machine define the classification completely.


We have decided to go with a supervised data set. In order to do this, we 
need to come up with our training data.


We need your help to generate the training data set. *Basically, we only 
need 2 or 3 unique fault classifications with a short description and 
the associated mitigations _from each member who is familiar with 
OpenStack design & operation_. This way we can build a focused library 
of faults & mitigations for each project.*


Once this data is accumulated, we will develop our own specific 
algorithms that can be applied to all future OpenStack issues.


Thanks in advance for your support.

No. | Project | Fault Classification | Description | Root Cause | Mitigation
----|---------|----------------------|-------------|------------|-----------
1   |         |                      |             |            |
2   |         |                      |             |            |
3   |         |                      |             |            |

Below are examples of what a couple of developers in Neutron have 
provided. I am sure there are other types of fault classifications in 
Neutron that have not been captured in this table.


Fault Classification | Root Cause | Mitigation
---------------------|------------|-----------
Network Connectivity Issues | Virtual interface in the VM admin down | Un-shut the virtual interface
Network Connectivity Issues | Virtual interface does not have an IP address via DHCP | Depends on lower-level root cause
Network Connectivity Issues | Virtual network does not have an interface to the router | Add the virtual network as one of the router interfaces
Network Connectivity Issues | vNIC port of VM not active (stuck in build) | Depends on lower-level root cause
Network Connectivity Issues | Security group lock in traffic | Fix the security group to allow relevant traffic
Unable to Add Port to Bridge | Libvirtd in AppArmor is blocking | Allow the Libvirtd profile in AppArmor
No Valid Host Found / insufficient hypervisor resources | Compute nodes do not have sufficient resources | Free up required compute storage and memory resources on the compute node
No Resource | Configuration issues | Change config setting
Authentication/permissions error | Configuration error such as port # or password | Make sure endpoints are properly configured
Gateway access not reachable | | Use custom keep-alive health-check
Gateway access not reachable | Design issue of OpenStack network node | Out-of-band health checking mechanism
Security Group Mis-configuration | The security group | Change security rules / programming the security group
DNS Attack | | Implement CERT alerts updates
Network design issue | Network storm | Reduce L2 broadcast domain

Nemat






I'm not entirely sure how you classify some of this stuff.

For example, here is a nova/neutron bug in triage:

https://bugs.launchpad.net/nova/+bug/1730637

In this case, the user tries to attach a port to an instance and it 
fails with a port binding failure.


From the nova side, we have no idea if this is a user error or a 
problem in the networking backend. Therefore I wouldn't know how to 
classify this, or describe the root cause or how to mitigate it.


--

Thanks,

Matt



Re: [openstack-dev] [tripleo] Updates on the TripleO on Kubernetes work

2017-12-01 Thread Bogdan Dobrelya

On 12/1/17 5:11 PM, Jiří Stránský wrote:

On 21.11.2017 12:01, Jiří Stránský wrote:

Kubernetes on the overcloud
===

The work on this front started with two patches [0][1] that some of you
might have seen and then evolved into using the config download mechanism
to execute these tasks as part of the undercloud tasks [2][3] (Thanks a
bunch, Jiri, for your work here). Note that [0] needs to be refactored to
use the same mechanism used in [2].


For those interested in trying the work we've done on deploying vanilla
Kubernetes, i put together a post showing how to deploy it with OOOQ,
and also briefly explaining the new external_deploy_tasks in service
templates:

https://www.jistr.com/blog/2017-11-21-kubernetes-in-tripleo/


And we've had a first Kubespray deployment success in CI; the job is 
ready to move from experimental to a non-voting check [1]. The job doesn't 
yet deploy any pods on that Kubernetes cluster, but it's a step ;)


Well done.
Note that when deployed with the netchecker app [0], it puts some pods on 
that cluster and runs free connectivity (DNS) checks as a bonus. It works 
even better multinode, as it checks an N-to-N connectivity mesh, IIRC.


[0] 
https://github.com/kubernetes-incubator/kubespray/blob/master/docs/netcheck.md




[1] https://review.openstack.org/#/c/524547/





There are quite a few things to improve here:

- How to configure/manage the loadbalancer/vips on the overcloud
- Kubespray is currently being cloned and we need to build a package for it
- More CI is likely needed for this work

[0] https://review.openstack.org/494470
[1] https://review.openstack.org/471759
[2] https://review.openstack.org/#/c/511272/
[3] https://review.openstack.org/#/c/514730/





--
Best regards,
Bogdan Dobrelya,
Irc #bogdando



Re: [openstack-dev] [keystone] Keystone weekly update - Week of 27 November 2017

2017-12-01 Thread Harry Rybacki
On Fri, Dec 1, 2017 at 11:09 AM, Colleen Murphy  wrote:
> In the "Making OpenStack More Palatable to Part-Time Contributors"
> Forum session in Sydney, one barrier to contribution that came up was
> keeping up with everything happening in OpenStack. The dev mailing
> list is a firehose and IRC can be just as daunting, especially for
> contributors in non-Americas timezones. The current time of the weekly
> team meeting basically excludes a third of the world from
> participating. I don't propose we stop having them, but it would be
> good to try to be a little more inclusive. Following the lead of some
> of the other folks in our community, I propose we consolidate the
> mailing list discussions, IRC meetings, and general discussions in a
> weekly update, just to share what we've been up to and what's
> important to know.
>
+2

> I don't guarantee I'll get to this every week but I'll make an effort.
> Please feel free to provide feedback on what you think would be useful
> to see in a newsletter like this. If you want to help out, I created
> an etherpad - feel free to help fill in the sections or edit the
> template itself.
>
With a nice template (based on this email?) I'm sure other Keystone folk
can help out when you find yourself too busy for a given week.

> https://etherpad.openstack.org/p/keystone-team-newsletter
>
> Without further ado, here's what's been going on this week from my 
> perspective:
>
> # Keystone Team Update - Week of 27 November 2017
>
> ## News
>
> Next week we'll use the meeting time to have a video conference to do
> a milestone retrospective for Queens-2:
>
> http://lists.openstack.org/pipermail/openstack-dev/2017-November/124997.html
>
> We abandoned some very old patches in gerrit. If we abandoned one that
> we shouldn't have, come talk to us:
>
> http://lists.openstack.org/pipermail/openstack-dev/2017-November/124910.html
>
> We used the last weekly keystone meeting to talk about open specs. In
> particular we talked about the Unified Limits spec and what the
> implications are for requiring a region ID in order to create a
> registered limit:
>
> http://eavesdrop.openstack.org/meetings/keystone/2017/keystone.2017-11-28-18.00.log.txt
>
> In the weekly policy meeting we talked about using the next round of
> community goals to get projects using the new system scope, but
> decided that we'd like to have a couple of early adopters before
> proposing it community-wide and so we'll likely hold off on proposing
> it until the following cycle. We did decide that we could start a
> community-wide discussion on defining a set of default-roles by
> proposing a cross-project spec.
>
> http://eavesdrop.openstack.org/meetings/policy/2017/policy.2017-11-29-16.00.log.txt
>
> ## Open Specs
>
> Search query: https://goo.gl/pc8cCf
>
> We only have one spec proposed for Queens still under review:
>
> Limits API: https://review.openstack.org/455709
>
> ## Recently Merged Changes
>
> Search query: https://goo.gl/hdD9Kw
>
> We merged 24 changes this week. Notably, we merged a few Queens specs
> and some policy roadmaps:
>
> Repropose application credentials to queens: 
> https://review.openstack.org/512505
> Specification for system roles: https://review.openstack.org/460344
> Outline policy goals: https://review.openstack.org/460344
> Add policy roadmap for security: https://review.openstack.org/#/c/462733/
>
> ## Changes that need Attention
>
> Search query: https://goo.gl/YiLt6o
>
> There are 51 changes that are passing CI and have no negative reviews,
> so these authors are waiting for feedback from reviewers. Please give
> them a look.
>
> That doesn't mean you should ignore changes that are failing CI or
> have negative reviews, it's just that the changes highlighted here are
> more likely to be in the reviewers' court rather than requiring a
> new revision from the author. Sometimes negative votes are misplaced
> or CI needs to be fixed project-wide so this doesn't necessarily mean
> that this list is the only one to mind.
>
> ## Milestone Outlook
>
> https://releases.openstack.org/queens/schedule.html
>
> Queens-2 is next week. That means the specification freeze is on
> December 8 and all Queens specifications must be merged by then or
> will be pushed to the next release. The only open spec affected by
> this is the Limits API spec.
>
> ## Shout-outs
>
> wangxiyuan has been doing a ton of awesome work squashing our bugs and
> taking on the Unified Limits feature. Thanks wangxiyuan!
>
> ## Help with this newsletter
>
> Help contribute to this newsletter by editing the etherpad:
> https://etherpad.openstack.org/p/keystone-team-newsletter
>
This is a wonderful summary, Colleen, thank you for taking the time to
write this up!

Re: [openstack-dev] [tripleo] Updates on the TripleO on Kubernetes work

2017-12-01 Thread Jiří Stránský

On 21.11.2017 12:01, Jiří Stránský wrote:

Kubernetes on the overcloud
===

The work on this front started with two patches [0][1] that some of you might have
seen and then evolved into using the config download mechanism to execute these
tasks as part of the undercloud tasks [2][3] (Thanks a bunch, Jiri, for your work
here). Note that [0] needs to be refactored to use the same mechanism used in
[2].


For those interested in trying the work we've done on deploying vanilla
Kubernetes, i put together a post showing how to deploy it with OOOQ,
and also briefly explaining the new external_deploy_tasks in service
templates:

https://www.jistr.com/blog/2017-11-21-kubernetes-in-tripleo/


And we've had a first Kubespray deployment success in CI; the job is 
ready to move from experimental to a non-voting check [1]. The job doesn't 
yet deploy any pods on that Kubernetes cluster, but it's a step ;)


[1] https://review.openstack.org/#/c/524547/





There are quite a few things to improve here:

- How to configure/manage the loadbalancer/vips on the overcloud
- Kubespray is currently being cloned and we need to build a package for it
- More CI is likely needed for this work

[0] https://review.openstack.org/494470
[1] https://review.openstack.org/471759
[2] https://review.openstack.org/#/c/511272/
[3] https://review.openstack.org/#/c/514730/




[openstack-dev] [keystone] Keystone weekly update - Week of 27 November 2017

2017-12-01 Thread Colleen Murphy
In the "Making OpenStack More Palatable to Part-Time Contributors"
Forum session in Sydney, one barrier to contribution that came up was
keeping up with everything happening in OpenStack. The dev mailing
list is a firehose and IRC can be just as daunting, especially for
contributors in non-Americas timezones. The current time of the weekly
team meeting basically excludes a third of the world from
participating. I don't propose we stop having them, but it would be
good to try to be a little more inclusive. Following the lead of some
of the other folks in our community, I propose we consolidate the
mailing list discussions, IRC meetings, and general discussions in a
weekly update, just to share what we've been up to and what's
important to know.

I don't guarantee I'll get to this every week but I'll make an effort.
Please feel free to provide feedback on what you think would be useful
to see in a newsletter like this. If you want to help out, I created
an etherpad - feel free to help fill in the sections or edit the
template itself.

https://etherpad.openstack.org/p/keystone-team-newsletter

Without further ado, here's what's been going on this week from my perspective:

# Keystone Team Update - Week of 27 November 2017

## News

Next week we'll use the meeting time to have a video conference to do
a milestone retrospective for Queens-2:

http://lists.openstack.org/pipermail/openstack-dev/2017-November/124997.html

We abandoned some very old patches in gerrit. If we abandoned one that
we shouldn't have, come talk to us:

http://lists.openstack.org/pipermail/openstack-dev/2017-November/124910.html

We used the last weekly keystone meeting to talk about open specs. In
particular we talked about the Unified Limits spec and what the
implications are for requiring a region ID in order to create a
registered limit:

http://eavesdrop.openstack.org/meetings/keystone/2017/keystone.2017-11-28-18.00.log.txt

In the weekly policy meeting we talked about using the next round of
community goals to get projects using the new system scope, but
decided that we'd like to have a couple of early adopters before
proposing it community-wide and so we'll likely hold off on proposing
it until the following cycle. We did decide that we could start a
community-wide discussion on defining a set of default-roles by
proposing a cross-project spec.

http://eavesdrop.openstack.org/meetings/policy/2017/policy.2017-11-29-16.00.log.txt

## Open Specs

Search query: https://goo.gl/pc8cCf

We only have one spec proposed for Queens still under review:

Limits API: https://review.openstack.org/455709

## Recently Merged Changes

Search query: https://goo.gl/hdD9Kw

We merged 24 changes this week. Notably, we merged a few Queens specs
and some policy roadmaps:

Repropose application credentials to queens: https://review.openstack.org/512505
Specification for system roles: https://review.openstack.org/460344
Outline policy goals: https://review.openstack.org/460344
Add policy roadmap for security: https://review.openstack.org/#/c/462733/

## Changes that need Attention

Search query: https://goo.gl/YiLt6o

There are 51 changes that are passing CI and have no negative reviews,
so these authors are waiting for feedback from reviewers. Please give
them a look.

That doesn't mean you should ignore changes that are failing CI or
have negative reviews, it's just that the changes highlighted here are
more likely to be in the reviewers' court rather than a requiring a
new revision from the author. Sometimes negative votes are misplaced
or CI needs to be fixed project-wide so this doesn't necessarily mean
that this list is the only one to mind.

## Milestone Outlook

https://releases.openstack.org/queens/schedule.html

Queens-2 is next week. That means the specification freeze is on
December 8 and all Queens specifications must be merged by then or
will be pushed to the next release. The only open spec affected by
this is the Limits API spec.

## Shout-outs

wangxiyuan has been doing a ton of awesome work squashing our bugs and
taking on the Unified Limits feature. Thanks wangxiyuan!

## Help with this newsletter

Help contribute to this newsletter by editing the etherpad:
https://etherpad.openstack.org/p/keystone-team-newsletter



Re: [openstack-dev] [tripleo] rename ovb jobs?

2017-12-01 Thread Alex Schultz
On Fri, Dec 1, 2017 at 7:54 AM, Emilien Macchi  wrote:
> Bogdan and Dmitry's suggestions are imho a bit too much and would lead
> to very very (very) long names... Do we actually want that?
>

No, I don't think so. I think -- is ideal for communicating at least the
basics. If we did it this way for all of them, and if we linked the featureset
docs [0] in the logs for reference, it would be an improvement. I
personally dislike the scenarioXXX references because you have to
figure out the featureset/scenario mappings (and remember where those
docs live [1]).

Thanks,
-Alex

[0] 
https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html
[1] 
https://github.com/openstack/tripleo-heat-templates/blob/master/README.rst#service-testing-matrix

> On Fri, Dec 1, 2017 at 2:02 AM, Sanjay Upadhyay  wrote:
>>
>>
>> On Fri, Dec 1, 2017 at 2:17 PM, Bogdan Dobrelya  wrote:
>>>
>>> On 11/30/17 8:11 PM, Emilien Macchi wrote:

 A few months ago, we renamed ovb-updates to be
 tripleo-ci-centos-7-ovb-1ctlr_1comp_1ceph-featureset024.
 The name is much longer but it describes better what it's doing.
 We know it's a job with one controller, one compute and one storage
 node, deploying the quickstart featureset n°24.

 For consistency, I propose that we rename all OVB jobs this way.
 For example, tripleo-ci-centos-7-ovb-ha-oooq would become
 tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001
 etc.

 Any thoughts / feedback before we proceed?
 Before someone asks, I'm not in favor of renaming the multinode
 scenarios now, because they became quite familiar now, and it would
 confuse people to rename the jobs.

 Thanks,

>>>
>>> I'd like to see featuresets clarified in names as well. Just to bring the
>>> main message, w/o going into the test matrix details, like
>>> tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-ovn/ceph/k8s/tempest
>>> whatever it is.
>>>
>>
>> How is this looking?
>>
>> tripleo-ci/os/centos/7/ovb/ha/nodes/3ctrlr_1comp.yaml
>> tripleo-ci/os/centos/7/ovb/ha/featureset/ovn_ceph_k8s_with-tempest.yaml
>>
>> I also think we should have clear demarcation of the oooq variables ie
>> machine specific goes to nodes/* and feature related goes to featureset/*
>>
>> regards
>> /sanjay
>>
>>
>>>
>>> --
>>> Best regards,
>>> Bogdan Dobrelya,
>>> Irc #bogdando
>>>
>>>
>>
>
>
>
> --
> Emilien Macchi
>


Re: [openstack-dev] [tripleo] containerized undercloud update

2017-12-01 Thread Alex Schultz
On Fri, Dec 1, 2017 at 8:05 AM, Alex Schultz  wrote:
> On Thu, Nov 30, 2017 at 2:36 PM, Wesley Hayutin  wrote:
>> Greetings,
>>
>> Just wanted to share some progress with the containerized undercloud work.
>> Ian pushed some of the patches along and we now have a successful undercloud
>> install with containers.
>>
>> The initial undercloud install works [1]
>> The idempotency check failed where we reinstall the undercloud [2]
>>
>> Question: Do we expect the reinstallation to work at this point? Should the
>> check be turned off?
>
> So I would say for the undercloud-containers job it's not required at
> this point, but for the main undercloud job, yes, it is required and
> should not be disabled. This is expected functionality that must be
> replicated in the containers version in order to make the switch.  The
> original ask that I had was that from an operator perspective the
> containerized install works exactly like the non-containerized
> undercloud.
>
>>
>> I will try it w/o the idempotency check, I suspect I will run into errors in
>> a full run with an overcloud deployment.  I ran into issues weeks ago.  I
>> suspect if we do hit something it will be CI related, as Dan Prince has been
>> deploying the overcloud for a while now.  Dan, I may need to review your
>> latest doit.sh scripts to check for diffs in the CI.
>>
>
> What I would propose is switching the undercloud-containers job to use
> the 'openstack undercloud install --use-heat' command and we switch
> that to non-voting and see how it performs. Originally when we

Oops s/non-voting/voting/.  I would like that job voting but I know
we've seen failure issues in comparison with the instack-undercloud
job. That however might be related to the number of times we run the
undercloud-containers job (on all THT patches) than the instack jobs
(just puppet-tripleo and instack-undercloud). So we really need to
understand the passing numbers.

> discussed this I wanted that job voting by milestone 1. Milestone 2 is
> next week so I'm very concerned at the state of this feature.  Do we
> have updates and upgrades with the containerized undercloud being
> tested anywhere in CI? That was one of items that I had mentioned[0]
> as a requirement to do the switch during the queens cycle. What I
> would really like to see is that we get those stable and then we can
> work on actually testing overcloud deploys and the various scenarios
> with the containerized undercloud.  If we update oooq to support
> adding the --use-heat flag it would make testing all the scenarios
> fairly trivial with a single patch and we would be able to see where
> there are issues.
>
> Thanks,
> -Alex
>
> [0] 
> http://lists.openstack.org/pipermail/openstack-dev/2017-October/123065.html
>
>
>> Thanks
>>
>>
>> [1]
>> http://logs.openstack.org/18/518118/6/check/tripleo-ci-centos-7-undercloud-oooq/73115d6/logs/undercloud/home/zuul/undercloud_install.log.txt.gz
>> [2]
>> http://logs.openstack.org/18/518118/6/check/tripleo-ci-centos-7-undercloud-oooq/73115d6/logs/undercloud/home/zuul/undercloud_reinstall.log.txt.gz#_2017-11-30_19_51_26
>>
>>


Re: [openstack-dev] [tripleo] containerized undercloud update

2017-12-01 Thread Alex Schultz
On Fri, Dec 1, 2017 at 3:54 AM, Bogdan Dobrelya  wrote:
> On 11/30/17 10:36 PM, Wesley Hayutin wrote:
>>
>> Greetings,
>>
>> Just wanted to share some progress with the containerized undercloud work.
>> Ian pushed some of the patches along and we now have a successful
>> undercloud install with containers.
>>
>> The initial undercloud install works [1]
>> The idempotency check failed where we reinstall the undercloud [2]
>>
>> Question: Do we expect the reinstallation to work at this point? Should
>> the check be turned off?
>
>
> Yeah, there is a bug for that [0]. Not critical to fix, though nice to have
> for developers. I'm used to deploying with undercloud containers, and it's a
> pain to do a full teardown and reinstall for each change being tested.
>

It may not be critical now, but it is a critical requirement in order
to switch to the containerized undercloud by default, as this is the way it
functions today with instack-undercloud.

Thanks,
-Alex

> By the way, somewhat related, I have a PoC for undercloud containers
> all-in-one [1], by quickstart off-road. And a few 'enabler' bug-fixes
> [2],[3],[4], JFYI and review please.
>
> I think an all-in-one undercloud may be useful for either CI or dev cases, like
> for those who want to deploy *things* on top of OpenStack, yet are suffering from
> healing devstack and searching for alternatives, like packstack et al. So they
> may want to switch to suffering from healing tripleo (undercloud containers)
> instead.
>
> [0] https://bugs.launchpad.net/tripleo/+bug/1698349
> [1] https://github.com/bogdando/oooq-warp/blob/master/rdocloud-guide.md
> [2] https://review.openstack.org/#/c/524114/
> [3] https://review.openstack.org/#/c/524133/
> [4] https://review.openstack.org/#/c/524187
>
>>
>> I will try it w/o the idempotency check, I suspect I will run into errors
>> in a full run with an overcloud deployment.  I ran into issues weeks ago.  I
>> suspect if we do hit something it will be CI related, as Dan Prince has been
>> deploying the overcloud for a while now.  Dan, I may need to review your
>> latest doit.sh scripts to check for diffs in the CI.
>>
>> Thanks
>>
>>
>> [1]
>> http://logs.openstack.org/18/518118/6/check/tripleo-ci-centos-7-undercloud-oooq/73115d6/logs/undercloud/home/zuul/undercloud_install.log.txt.gz
>> [2]
>> http://logs.openstack.org/18/518118/6/check/tripleo-ci-centos-7-undercloud-oooq/73115d6/logs/undercloud/home/zuul/undercloud_reinstall.log.txt.gz#_2017-11-30_19_51_26
>>
>>
>>
>
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>


Re: [openstack-dev] [tripleo] containerized undercloud update

2017-12-01 Thread Alex Schultz
On Thu, Nov 30, 2017 at 2:36 PM, Wesley Hayutin  wrote:
> Greetings,
>
> Just wanted to share some progress with the containerized undercloud work.
> Ian pushed some of the patches along and we now have a successful undercloud
> install with containers.
>
> The initial undercloud install works [1]
> The idempotency check failed where we reinstall the undercloud [2]
>
> Question: Do we expect the reinstallation to work at this point? Should the
> check be turned off?

So I would say for the undercloud-containers job it's not required at
this point but for the main undercloud job yes it is required and
should not be disabled. This is expected functionality that must be
replicated in the containers version in order to make the switch.  The
original ask that I had was that from an operator perspective the
containerized install works exactly like the non-containerized
undercloud.

>
> I will try it w/o the idempotency check, I suspect I will run into errors in
> a full run with an overcloud deployment.  I ran into issues weeks ago.  I
> suspect if we do hit something it will be CI related as Dan Prince has been
> deploying the overcloud for a while now.  Dan, I may need to review your
> latest doit.sh scripts to check for diffs in the CI.
>

What I would propose is switching the undercloud-containers job to use
the 'openstack undercloud install --use-heat' command and we switch
that to non-voting and see how it performs. Originally when we
discussed this I wanted that job voting by milestone 1. Milestone 2 is
next week so I'm very concerned about the state of this feature.  Do we
have updates and upgrades with the containerized undercloud being
tested anywhere in CI? That was one of the items that I had mentioned[0]
as a requirement to do the switch during the queens cycle. What I
would really like to see is that we get those stable and then we can
work on actually testing overcloud deploys and the various scenarios
with the containerized undercloud.  If we update oooq to support
adding the --use-heat flag it would make testing all the scenarios
fairly trivial with a single patch and we would be able to see where
there are issues.

Thanks,
-Alex

[0] http://lists.openstack.org/pipermail/openstack-dev/2017-October/123065.html


> Thanks
>
>
> [1]
> http://logs.openstack.org/18/518118/6/check/tripleo-ci-centos-7-undercloud-oooq/73115d6/logs/undercloud/home/zuul/undercloud_install.log.txt.gz
> [2]
> http://logs.openstack.org/18/518118/6/check/tripleo-ci-centos-7-undercloud-oooq/73115d6/logs/undercloud/home/zuul/undercloud_reinstall.log.txt.gz#_2017-11-30_19_51_26
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] rename ovb jobs?

2017-12-01 Thread Emilien Macchi
Bogdan and Dmitry's suggestions are imho a bit too much and would lead
to very very (very) long names... Do we actually want that?

On Fri, Dec 1, 2017 at 2:02 AM, Sanjay Upadhyay  wrote:
>
>
> On Fri, Dec 1, 2017 at 2:17 PM, Bogdan Dobrelya  wrote:
>>
>> On 11/30/17 8:11 PM, Emilien Macchi wrote:
>>>
>>> A few months ago, we renamed ovb-updates to be
>>> tripleo-ci-centos-7-ovb-1ctlr_1comp_1ceph-featureset024.
>>> The name is much longer but it describes better what it's doing.
>>> We know it's a job with one controller, one compute and one storage
>>> node, deploying the quickstart featureset n°24.
>>>
>>> For consistency, I propose that we rename all OVB jobs this way.
>>> For example, tripleo-ci-centos-7-ovb-ha-oooq would become
>>> tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001
>>> etc.
>>>
>>> Any thoughts / feedback before we proceed?
>>> Before someone asks, I'm not in favor of renaming the multinode
>>> scenarios now, because they became quite familiar now, and it would
>>> confuse people to rename the jobs.
>>>
>>> Thanks,
>>>
>>
>> I'd like to see featuresets clarified in names as well. Just to bring the
>> main message, w/o going into the test matrix details, like
>> tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-ovn/ceph/k8s/tempest
>> whatever it is.
>>
>
> How is this looking?
>
> tripleo-ci/os/centos/7/ovb/ha/nodes/3ctrlr_1comp.yaml
> tripleo-ci/os/centos/7/ovb/ha/featureset/ovn_ceph_k8s_with-tempest.yaml
>
> I also think we should have a clear demarcation of the oooq variables, i.e.
> machine-specific ones go to nodes/* and feature-related ones go to featureset/*.
>
> regards
> /sanjay
>
>
>>
>> --
>> Best regards,
>> Bogdan Dobrelya,
>> Irc #bogdando
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] [api] [sdks] [keystone] Sahara APIv2: service discovery

2017-12-01 Thread Monty Taylor

On 12/01/2017 05:04 AM, Luigi Toscano wrote:

On Friday, 1 December 2017 01:34:36 CET Monty Taylor wrote:


First and most importantly you need to update python-saharaclient to
make sure it can handle an unversioned endpoint in the catalog (by
doing discovery) - and that if it finds an unversioned endpoint in the
catalog it knows to prepend project-id to the urls it sends. The
easiest/best way to do this is to make sure it's delegating version
discovery to keystoneauth ... I will be more than happy to help you get
that updated.

Then, for now, recommend that *new* deployments put the unversioned
endpoint into their catalog, but that existing deployments keep the v1
endpoint in the catalog even if they upgrade sahara to a version that
has v2 as well. (The full description of version discovery describes how
to get to a newer version even if an older version is in the catalog, so
people can opt-in to v2 if it's there with no trouble)

That gets us to a state where:

- existing deployments with users using v1 are not broken
- existing deployments that upgrade can have user's opt-in to v2 easily
- new deployments will have both v1 and v2 - but users who want to use
v1 will have to do so with a client that understands actually doing
discovery


Does it work even if we would like to keep v1 as default for a while? v2, at
least in the first release, will be marked as experimental; hopefully it
should stabilize soon, but still.


Totally. In the version discovery document returned by sahara, keep v1 
listed as "CURRENT" and list v2 as "EXPERIMENTAL". Then, when you're 
ready to declare v2 as the recommended API, change v1 to "SUPPORTED" and 
v2 to "CURRENT".
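
For illustration only (this isn't python-saharaclient's actual plumbing, and
the auth values below are made up), delegating that discovery dance to
keystoneauth from client code looks roughly like:

    # rough sketch: keystoneauth resolves the version from an unversioned
    # catalog entry; version='2.0' opts into v2 when it is advertised
    from keystoneauth1 import adapter, session
    from keystoneauth1.identity import v3

    auth = v3.Password(auth_url='https://keystone:5000/v3',
                       username='demo', password='secret',
                       project_name='demo',
                       user_domain_id='default',
                       project_domain_id='default')
    sess = session.Session(auth=auth)

    sahara = adapter.Adapter(session=sess,
                             service_type='data-processing',
                             version='2.0')
    print(sahara.get_endpoint())

The point is that the Adapter fetches the version document and picks the
endpoint, so none of that logic has to live in the client library itself.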



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] TripleO CI end of sprint status

2017-12-01 Thread Arx Cruz
Hello,

On November 29 we came to the end of the sprint using our new team structure
[1], and here are the highlights:

Sprint Review:

The goal of this sprint was to reduce the tech debt generated by the other
sprints, as a way to reduce the work of the Ruck and Rover.

We chose the most relevant cards in our tech debt list, and I am glad to
report that we were able to complete most of them. Since these were tech
debt cards, we set a goal of the cards we would like to complete, plus
other cards as time permitted.

As a result, we have 10 cards completed, 4 cards that are being finished
(just pending review or comment updates) and 4 cards that remain in tech
debt.

One can see the results of the sprint via https://tinyurl.com/y8wwntvc

Tripleo CI community meeting


   - Saneax is working to introduce some update jobs in the stable branch
      - Initially he wanted to introduce them upstream; however, after
        discussing with the team, it's probably best to have them in RDO
        Cloud since we have a more flexible timeout for the jobs
   - Master check/gate job blockers
      - TestVolumeBootPattern
         - Still work in progress; we have Daniel Alvarez working on
           debugging the jobs
   - OVB Migration https://trello.com/c/wGUUEqty
      - RDO Cloud upgrade from newton to ocata was blocked by
        https://bugs.launchpad.net/tripleo/+bug/1724328 which is now
        resolved thanks to Pradeep
      - Checking with David Machado on the status of the RDO Cloud upgrade
        to Ocata
      - All upstream jobs are currently running on both RH1 and RDO Cloud
         - Sagi put together a nice chart displaying the pass/fail rates
           of OVB jobs in both environments:
           https://trello-attachments.s3.amazonaws.com/57a843f924c8f76569579c8b/5a0b479898ccb207352b5d9f/c299f41da138a92aac3984298530a6d1/rdo-rh1-cloud.pdf
      - Looking for input on when to transition off RH1 and onto RDO Cloud
        in full
   - Promotion Status
      - Master: 21 days since last promotion, 4 known issues
      - Pike: 7 days since last promotion, 3 known issues
      - Ocata: 17 days since last promotion, 1 known issue



Ruck and Rover

List of bugs that Ruck and Rover were working on:


   - https://bugs.launchpad.net/tripleo/+bug/1734928
   - ui_validate_simple is failing on master gate - logs are not collected
(related to tripleo-ci-centos-7-scenario001-multinode-oooq-container
  failure)
   - https://bugs.launchpad.net/tripleo/+bug/1731988
  - TestNetworkBasicOps.test_mtu_sized_frames timing out on pike
  promotion jobs
   - https://bugs.launchpad.net/tripleo/+bug/1733672
  - tripleo-ci-centos-7-scenario001-multinode-oooq-container is failing
  on deploying the overcloud - master release only ( related to
  tripleo-ci-centos-7-scenario001-multinode-oooq-container)
   - https://bugs.launchpad.net/tripleo/+bug/1732477
  -  Container deployment failing at overcloud-prep-containers (not
  logged by CI - fixed by CI)
   - https://bugs.launchpad.net/tripleo/+bug/1733983
  -  Tempest reports 'missing Worker 1!'
   - https://bugs.launchpad.net/tripleo/+bug/1734752
  -  Master containers build are failing with 'No package yum-axelget
  available'
   - https://bugs.launchpad.net/tripleo/+bug/1734709
  -  Master promotion featuresets 005-008 are failing overcloud deploy:
     "/usr/share/openstack-tripleo-heat-templates/ci/environments/scenario00x-multinode.yaml"
     file not found
   - https://bugs.launchpad.net/tripleo/+bug/1734134
  -  Pike periodic promotion job multinode-1ctlr-featureset016 fail
  with error running docker 'gnocchi_db_sync' - rados.Rados.connect
  PermissionDeniedError: error connecting to the cluster
   - https://bugs.launchpad.net/tripleo/+bug/1733858
  - Upstream containers promotion fails with "unauthorized:
  authentication required" while pulling the images from RDO registry
   - https://bugs.launchpad.net/tripleo/+bug/1733598
  - newton jobs on rdo cloud fail with 'dlrn_hash_tag' is undefined
   - https://bugs.launchpad.net/tripleo/+bug/1733345
  - Master promotion: error creating the default Deployment Plan
  overcloud
   - https://bugs.launchpad.net/tripleo/+bug/1732706
  - tripleo ci / quickstart jobs have duplicate entries in /etc/hosts
   - https://bugs.launchpad.net/tripleo/+bug/173198
   
  - dstat files are not in the upstream tripleo logs in /var/log/extra
   - https://bugs.launchpad.net/tripleo/+bug/1731346
  - dlrn_hash_tag is undefined failing ovb jobs in pike/master promotion
   - https://bugs.launchpad.net/tripleo/+bug/1731456
  - Timed out CI jobs not collecting logs, "FAILED with status: 137"
  (not logged by CI but fixed by CI)
   - https://bugs.launchpad.net/tripleo/+bug/1734348
  - legacy-instack-undercloud-puppet-lint is failing with ERROR Unable
  to find playbook
   - 

[openstack-dev] [vitrage] Feedback on ability to 'suppress' alarms by type and/or resource in Vitrage

2017-12-01 Thread Waines, Greg
Hey,

I am interested in getting some feedback on a proposed blueprint for Vitrage.

BLUEPRINT:

TITLE: Add the ability to ‘suppress’ alarms by Alarm Type and/or Resource

When managing a cloud, there are situations where a particular alarm, or a set 
of alarms from a particular resource, occurs frequently but identifies issues 
that are not of concern, at least for the time being.  For example, new 
hardware is in the process of being installed and is generating alarms, or 
remote servers (e.g. NTP servers) are unreliable and result in frequent 
connectivity alarms.  In these situations, the irrelevant alarms clutter the 
alarm displays and it would be helpful to be able to suppress them.

Suppressed alarms would not be shown in Active Alarm lists or Historical Alarm 
lists, and would not be included in alarm counts.
There would be a CLI Option / Horizon Button, to enable looking at Alarms that 
are currently suppressed.
( i.e. the idea would be that suppressed alarms would still be tracked, they 
just would not be displayed by default)

Thoughts on usefulness ?



Questions on how to implement this in Vitrage

- from an end user's point of view, alarms have the following fields:
  - vitrage_id (uuid) - unique identifier of an instance of an alarm
  - vitrage_type (enum) - e.g. collectd, nagios, zabbix, vitrage, ...
    (really an identifier of the general entity reporting the alarm)
  - name (string) - the alarm description
  - vitrage_resource_type (enum) - e.g. nova.instance, nova.host, port, ...
  - vitrage_resource_id (uuid) - resource instance
  - vitrage_aggregated_severity
  - vitrage_operational_severity
  - update_timestamp

- there definitely is a specific resource identifier, so suppressing alarms
  from a particular resource is possible

- BUT there doesn't seem to be a general alarm type field, i.e. one that
  would classify the type of problem that's occurring, e.g.
  - communication failure with compute host
  - loss-of-signal on port of compute host
  - loss of connectivity with NTP Server
  - CPU Threshold exceeded on compute host
  - Memory Threshold exceeded on compute host
  - File System Threshold exceeded on compute host
  - etc.

- ... which is the type/granularity of 'Alarm Type' that I would think the
  user would want to suppress alarms based on

- i.e. it seems like the 'name' field is a combination of this general
  Alarm Type and details on the particular alarm

- Any thoughts on adding a 'vitrage_alarm_type' (enum or short string) as a
  mechanism to identify the general type of problem or alarm being reported,
  in order to address this? A hypothetical example follows below.
  - could be an optional field
  - but we'd display it in the alarm list
  - and we'd use it as the mechanism to suppress alarms by 'type'
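
For concreteness, a purely hypothetical example of an alarm record carrying
the proposed field (the other keys are the existing fields listed above; all
values are made up):

    alarm = {
        'vitrage_id': '1a2b3c4d-5e6f-4a0b-8c1d-2e3f4a5b6c7d',
        'vitrage_type': 'zabbix',
        'name': 'NTP connectivity lost on compute-0',
        'vitrage_resource_type': 'nova.host',
        'vitrage_resource_id': 'compute-0',
        'vitrage_aggregated_severity': 'WARNING',
        'vitrage_operational_severity': 'WARNING',
        'update_timestamp': '2017-12-01T10:42:00Z',
        # proposed new, optional field:
        'vitrage_alarm_type': 'ntp-connectivity',
    }

    # a suppression rule could then match on type and/or resource, e.g.:
    suppress_on = [{'vitrage_alarm_type': 'ntp-connectivity'},
                   {'vitrage_resource_id': 'compute-0'}]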

 Let me know what you think ?


Greg.







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer] about workload partition

2017-12-01 Thread gordon chung


On 2017-12-01 05:03 AM, 李田清 wrote:
> Hello,
>       we test workload partition, and find it's much slower than not 
> using it.
>       After some review, we find that, after get samples from 
> notifications.sample
>       ceilometer unpacks them and sends them one by one to the pipe
>       ceilometer.pipe.*, this will make the consumer slow. Right now, 
> the rabbit_qos_prefetch_count to 1. If we sent it to 10, the connection 
> will be reset

currently, i believe rabbit_qos_prefetch_count will be set to whatever 
value you set batch_size to.

>       regularly. Under this pos, the consumer will be very slow in 
> workload partition. If you do not use workload partition, the messages 
> can all be consumer. If you use it, the messages in pipe will be piled 
> up more and more。

what is "pos"? i'm not sure it means the same thing to both of us... or 
well i guess it could :)

>      May be right now workload partition is not a good choice? Or any 
> suggestion?
> 

i'll give a two part answer but first i'll start with a question: what 
version of oslo.messaging do you have?

i see a performance drop as well but the reason for it is because of an 
oslo.messaging bug introduced into master/pike/ocata releases. more 
details can be found here: 
https://bugs.launchpad.net/oslo.messaging/+bug/1734788. we're working on 
backporting it. we've also done some work regarding performance/memory 
to shrink memory usage of partitioning in master[1].

with that said, there are only two scenarios where you should have 
partitioning enabled. if you have multiple notification agents AND:

1. you have transformations in your pipeline
2. you want to batch efficiently to gnocchi

if you don't have workload partitioning on, your transform metrics will 
probably be wrong or missing values. it also won't batch to gnocchi so 
you'll see a lot more http requests there.

so yes, you do have a choice to disable it, but the above is your tradeoff.
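
for reference, the knobs involved all live in the [notification] section of
ceilometer.conf. a minimal sketch, with illustrative values (defaults may
differ between releases):

    [notification]
    workers = 2
    workload_partitioning = True
    pipeline_processing_queues = 10
    batch_size = 100
    batch_timeout = 5

as noted above, batch_size is also what ends up driving
rabbit_qos_prefetch_count.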

[1] 
https://review.openstack.org/#/q/topic:plugin+(status:open+OR+status:merged)+project:openstack/ceilometer

cheers,

-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [sahara] [api] [sdks] [keystone] Sahara APIv2: service discovery

2017-12-01 Thread Luigi Toscano
On Friday, 1 December 2017 01:34:36 CET Monty Taylor wrote:

> First and most importantly you need to update python-saharaclient to
> make sure it can handle an unversioned endpoint in the catalog (by
> doing discovery) - and that if it finds an unversioned endpoint in the
> catalog it knows to prepend project-id to the urls it sends. The
> easiest/best way to do this is to make sure it's delegating version
> discovery to keystoneauth ... I will be more than happy to help you get
> that updated.
> 
> Then, for now, recommend that *new* deployments put the unversioned
> endpoint into their catalog, but that existing deployments keep the v1
> endpoint in the catalog even if they upgrade sahara to a version that
> has v2 as well. (The full description of version discovery describes how
> to get to a newer version even if an older version is in the catalog, so
> people can opt-in to v2 if it's there with no trouble)
> 
> That gets us to a state where:
> 
> - existing deployments with users using v1 are not broken
> - existing deployments that upgrade can have user's opt-in to v2 easily
> - new deployments will have both v1 and v2 - but users who want to use
> v1 will have to do so with a client that understands actually doing
> discovery

Does it work even if we would like to keep v1 as default for a while? v2, at 
least in the first release, will be marked as experimental; hopefully it 
should stabilize soon, but still.

Ciao
-- 
Luigi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] containerized undercloud update

2017-12-01 Thread Bogdan Dobrelya

On 11/30/17 10:36 PM, Wesley Hayutin wrote:

Greetings,

Just wanted to share some progress with the containerized undercloud work.
Ian pushed some of the patches along and we now have a successful 
undercloud install with containers.


The initial undercloud install works [1]
The idempotency check failed where we reinstall the undercloud [2]

Question: Do we expect the reinstallation to work at this point? Should 
the check be turned off?


I will try it w/o the idempotency check, I suspect I will run into 
errors in a full run with an overcloud deployment.  I ran into issues 


I've been deploying this way my dev envs, which is deployed-servers for 
overcloud nodes, like external deployments with configs download. Feel 
free to invite me for some brain storming as well :)


Yeah, and kudos! Well done! I'm happy to see undercloud containers 
working better and getting adopted for CI/devs.


weeks ago.  I suspect if we do hit something it will be CI related as 
Dan Prince has been deploying the overcloud for a while now.  Dan, I may 
need to review your latest doit.sh scripts to check for diffs in the CI.


Thanks


[1] 
http://logs.openstack.org/18/518118/6/check/tripleo-ci-centos-7-undercloud-oooq/73115d6/logs/undercloud/home/zuul/undercloud_install.log.txt.gz
[2] 
http://logs.openstack.org/18/518118/6/check/tripleo-ci-centos-7-undercloud-oooq/73115d6/logs/undercloud/home/zuul/undercloud_reinstall.log.txt.gz#_2017-11-30_19_51_26



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] containerized undercloud update

2017-12-01 Thread Bogdan Dobrelya

On 11/30/17 10:36 PM, Wesley Hayutin wrote:

Greetings,

Just wanted to share some progress with the containerized undercloud work.
Ian pushed some of the patches along and we now have a successful 
undercloud install with containers.


The initial undercloud install works [1]
The idempotency check failed where we reinstall the undercloud [2]

Question: Do we expect the reinstallation to work at this point? Should 
the check be turned off?


Yeah, there is a bug for that [0]. Not critical to fix, though nice to 
have for developers. I'm used to deploying with undercloud containers, and 
it's a pain to do a full teardown and reinstall for each change being 
tested.


By the way, somewhat related, I have a PoC for undercloud containers 
all-in-one [1], by quickstart off-road. And a few 'enabler' bug-fixes 
[2],[3],[4], JFYI and review please.


I think an all-in-one uc may be useful for either CI or dev cases, e.g. for
those who want to deploy *things* on top of openstack, yet are suffering
from healing devstack and searching for alternatives, like packstack et al.
They may want to switch to suffering from healing tripleo (undercloud
containers) instead.


[0] https://bugs.launchpad.net/tripleo/+bug/1698349
[1] https://github.com/bogdando/oooq-warp/blob/master/rdocloud-guide.md
[2] https://review.openstack.org/#/c/524114/
[3] https://review.openstack.org/#/c/524133/
[4] https://review.openstack.org/#/c/524187



I will try it w/o the idempotency check, I suspect I will run into 
errors in a full run with an overcloud deployment.  I ran into issues 
weeks ago.  I suspect if we do hit something it will be CI related as 
Dan Prince has been deploying the overcloud for a while now.  Dan, I may 
need to review your latest doit.sh scripts to check for diffs in the CI.


Thanks


[1] 
http://logs.openstack.org/18/518118/6/check/tripleo-ci-centos-7-undercloud-oooq/73115d6/logs/undercloud/home/zuul/undercloud_install.log.txt.gz
[2] 
http://logs.openstack.org/18/518118/6/check/tripleo-ci-centos-7-undercloud-oooq/73115d6/logs/undercloud/home/zuul/undercloud_reinstall.log.txt.gz#_2017-11-30_19_51_26



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Technical Committee Status update, December 1st

2017-12-01 Thread Thierry Carrez
Hi!

This is the weekly summary of Technical Committee initiatives. You can
find the full list of all open topics (updated twice a week) at:

https://wiki.openstack.org/wiki/Technical_Committee_Tracker

If you are working on something (or plan to work on something) that is
not on the tracker, feel free to add to it !


== Recently-approved changes ==

* Stable policy only applies to main OpenStack components [1][2]
* Update team corporate diversity tags [3]
* Add designate to the tc:approved-release tag [4]
* PTI change: Remove releasenotes/requirements.txt [5]
* New repos: networking-generic-switch, governance-sigs
* Retired repositories: ceilometerclient

[1] https://review.openstack.org/#/c/521049/
[2] https://review.openstack.org/#/c/519685/
[3] https://review.openstack.org/#/c/522536/
[4] https://review.openstack.org/#/c/521587/
[5] https://review.openstack.org/#/c/521398/

A number of small changes got approved this week. The most significant
is probably the realization that the stable:follows-policy tag is meant
to provide information about which changes are OK in stable branches for
components of an OpenStack cloud. It does not apply so well to
downstream packaging or lifecycle management tools, where the
expectations are different. Those should therefore not be expected to
adopt it as-is. You can see the new tag definition here:

https://governance.openstack.org/tc/reference/tags/stable_follows-policy.html


== Voting in progress ==

Monty Taylor's proposal to rename the 'Shade' team to 'OpenStackSDK' has
already reached majority support. It will be approved next Tuesday
unless new objections are posted:

https://review.openstack.org/523520

Doug Hellmann proposes to round up our top-5 "help needed" list with
goal champions. His proposal is still short of a couple of votes:

https://review.openstack.org/510656

My proposal to officialize the election officials is also still missing
a couple of votes to pass:

https://review.openstack.org/521062

A number of cleanups have also been proposed and voting is in progress
there:

* Tags are either applied to deliverables or teams [6]
* Stop linking to documentation from governance [7]
* Remove unused docs:follows-policy tag [8]

[6] https://review.openstack.org/523886
[7] https://review.openstack.org/523195
[8] https://review.openstack.org/524217


== Under discussion ==

Graham Hayes proposed various options to clarify how the testing of
interoperability programs should be organized, in the age of add-on
trademark programs. It is a difficult trade-off between the benefits of
centralizing reviews and decentralizing reviews for that specific area.
Please chime in on the review:

https://review.openstack.org/521602

Matt Treinish proposed an update to the Python PTI for tests to be
specific and explicit. Wider community input is needed on that topic.
Please review at:

https://review.openstack.org/519751

The "top-5 help wanted list" assumes there will always be 5 items, and
the name is a bit of a mouthful. Naming is hard. Current proposal is to
call it the "help most needed" list. If you prefer your bikesheds
painted in blue, please comment at:

https://review.openstack.org/520619

Emilien Macchi officially launched the goal proposing season for the
Rocky cycle in a thread at:

http://lists.openstack.org/pipermail/openstack-dev/2017-November/124976.html

We already have one proposal on the docket (Storyboard migration),
thanks to Kendall Nelson. Please chime in on whether it's a good idea at:

https://review.openstack.org/513875


== TC member actions for the coming week(s) ==

A few of us will be busy next week doing active outreach to the
Kubernetes community during KubeCon in Austin. Nothing else stands out
at this point.


== Office hours ==

To be more inclusive of all timezones and more mindful of people for
which English is not the primary language, the Technical Committee
dropped its dependency on weekly meetings. So that you can still get
hold of TC members on IRC, we instituted a series of office hours on
#openstack-tc:

* 09:00 UTC on Tuesdays
* 01:00 UTC on Wednesdays
* 15:00 UTC on Thursdays

For the coming week, I expect more discussion around interop testing and
tempest.

Cheers,

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer] about workload partition

2017-12-01 Thread 李田清
Hello,
     we test workload partition, and find it's much slower than not using it.
     After some review, we find that, after getting samples from
notifications.sample, ceilometer unpacks them and sends them one by one to
the pipe ceilometer.pipe.*; this will make the consumer slow. Right now, the
rabbit_qos_prefetch_count is set to 1. If we set it to 10, the connection
will be reset regularly. Under this pos, the consumer will be very slow in
workload partition. If you do not use workload partition, the messages can
all be consumed. If you use it, the messages in the pipe will be piled up
more and more.
     Maybe right now workload partition is not a good choice? Or any suggestion?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] rename ovb jobs?

2017-12-01 Thread Sanjay Upadhyay
On Fri, Dec 1, 2017 at 2:17 PM, Bogdan Dobrelya  wrote:

> On 11/30/17 8:11 PM, Emilien Macchi wrote:
>
>> A few months ago, we renamed ovb-updates to be
>> tripleo-ci-centos-7-ovb-1ctlr_1comp_1ceph-featureset024.
>> The name is much longer but it describes better what it's doing.
>> We know it's a job with one controller, one compute and one storage
>> node, deploying the quickstart featureset n°24.
>>
>> For consistency, I propose that we rename all OVB jobs this way.
>> For example, tripleo-ci-centos-7-ovb-ha-oooq would become
>> tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001
>> etc.
>>
>> Any thoughts / feedback before we proceed?
>> Before someone asks, I'm not in favor of renaming the multinode
>> scenarios now, because they became quite familiar now, and it would
>> confuse people to rename the jobs.
>>
>> Thanks,
>>
>>
> I'd like to see featuresets clarified in names as well. Just to bring the
> main message, w/o going into the test matrix details, like
> tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-ovn/ceph/k8s/tempest
> whatever it is.
>
>
How is this looking?

tripleo-ci/os/centos/7/ovb/ha/nodes/3ctrlr_1comp.yaml
tripleo-ci/os/centos/7/ovb/ha/featureset/ovn_ceph_k8s_with-tempest.yaml

I also think we should have a clear demarcation of the oooq variables, i.e.
machine-specific ones go to nodes/* and feature-related ones go to featureset/*.

regards
/sanjay



> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [cinder] nova cannot create instance snapshot after ocata upgrade

2017-12-01 Thread Kim-Norman Sahm
after removing these options from the [keystone_authtoken] section in
cinder.conf snapshots are working:

service_token_roles_required=True
service_token_roles=service
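
(note for anyone hitting the same 401: those two options come from
keystonemiddleware. with service_token_roles_required=True, cinder's
auth_token middleware treats an X-Service-Token as invalid unless it carries
one of the roles listed in service_token_roles, instead of only logging a
warning, so nova's request gets rejected. the stricter alternative to
removing the options would be to keep something like

    [keystone_authtoken]
    service_token_roles = service
    service_token_roles_required = True

and make sure the user that nova authenticates its service token with
actually has the 'service' role assigned.)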



Am Freitag, den 01.12.2017, 10:23 +0100 schrieb Kim-Norman Sahm:
> this is my cinder section of the nova.conf
> 
> [cinder]
> os_region_name=myregion
> cross_az_attach=False
> catalog_info=volumev3:cinderv3:internalURL
> 
> 
> i don't find anything about cinder authentication in the nova config
> options. https://docs.openstack.org/ocata/config-reference/compute/config-options.html
> 
> 
> 
> Am Donnerstag, den 30.11.2017, 11:30 -0600 schrieb Matt Riedemann:
> > 
> > On 11/30/2017 9:30 AM, Kim-Norman Sahm wrote:
> > > 
> > > 
> > > after upgrade openstack newton -> ocata i cannot create snapshots
> > > of my
> > > instances.
> > > 
> > > if i try to create a snapshot of a instance horizon get this
> > > error:
> > > "Error: Unable to create snapshot."
> > > create a snapshot of a cinder volume  via openstackcli is
> > > working.
> > > 
> > > nova.log
> > > 
> > > 2017-11-30 15:19:57.875 93 DEBUG cinderclient.v3.client [req-
> > > 5820c19b-
> > > fb11-43a2-8513-0782540b3d32 c756af2957c4447eafc4cef39cdb79e5
> > > 469dc3d300df4d41aaea00db572043ae - default default] REQ: curl -g
> > > -i
> > > -X
> > > GET https://cinder:8776/v3/469dc3d300df4d41aaea00db572043ae/volum
> > > es
> > > /c67
> > > b5cf3-0beb-4efa-9177-d2b6498185fb -H "X-Service-Token:
> > > {SHA1}29a46cd87988e2bb905dbd3e796401aa23dff1a5" -H "User-Agent:
> > > python-
> > > cinderclient" -H "Accept: application/json" -H "X-Auth-Token:
> > > {SHA1}524061f0ab91e64ed6241e437792346f90df856e" _http_log_request
> > > /usr/lib/python2.7/dist-packages/keystoneauth1/session.py:347
> > > 2017-11-30 15:19:57.890 92 INFO nova.osapi_compute.wsgi.server
> > > [req-
> > > d83d5b73-fd24-406c-ad6b-feed6a40bfae
> > > c756af2957c4447eafc4cef39cdb79e5
> > > 469dc3d300df4d41aaea00db572043ae - default default] 10.78.21.2
> > > "GET
> > > /v2.1/flavors/203/os-extra_specs HTTP/1.1" status: 200 len: 448
> > > time:
> > > 0.0326798
> > > 2017-11-30 15:19:58.148 93 DEBUG cinderclient.v3.client [req-
> > > 5820c19b-
> > > fb11-43a2-8513-0782540b3d32 c756af2957c4447eafc4cef39cdb79e5
> > > 469dc3d300df4d41aaea00db572043ae - default default] RESP: [401]
> > > Date:
> > > Thu, 30 Nov 2017 15:19:57 GMT Server: Apache/2.4.18 (Ubuntu) x-
> > > openstack-request-id: req-22378faa-880b-4a80-a83e-41936741839e
> > > WWW-
> > > Authenticate: Keystone uri='https://keystone:5000/' Content-
> > > Length: 
> > > 114
> > > Content-Type: application/json
> > > RESP BODY: {"error": {"message": "The request you have made
> > > requires
> > > authentication.", "code": 401, "title": "Unauthorized"}}
> > >   _http_log_response /usr/lib/python2.7/dist-
> > > packages/keystoneauth1/session.py:395
> > > 2017-11-30 15:19:58.149 93 DEBUG cinderclient.v3.client [req-
> > > 5820c19b-
> > > fb11-43a2-8513-0782540b3d32 c756af2957c4447eafc4cef39cdb79e5
> > > 469dc3d300df4d41aaea00db572043ae - default default] GET call to
> > > cinderv3 for https://cinder:8776/v3/469dc3d300df4d41aaea00db57204
> > > 3a
> > > e/vo
> > > lumes/c67b5cf3-0beb-4efa-9177-d2b6498185fb used request id req-
> > > 22378faa-880b-4a80-a83e-41936741839e request
> > > /usr/lib/python2.7/dist-
> > > packages/keystoneauth1/session.py:640
> > > 2017-11-30 15:19:58.157 93 DEBUG cinderclient.v3.client [req-
> > > 5820c19b-
> > > fb11-43a2-8513-0782540b3d32 c756af2957c4447eafc4cef39cdb79e5
> > > 469dc3d300df4d41aaea00db572043ae - default default] RESP: [401]
> > > Date:
> > > Thu, 30 Nov 2017 15:19:58 GMT Server: Apache/2.4.18 (Ubuntu) x-
> > > openstack-request-id: req-02ebac9f-794a-46f4-85b2-0e429a1785cf
> > > WWW-
> > > Authenticate: Keystone uri='https://keystone:5000/' Content-
> > > Length: 
> > > 114
> > > Content-Type: application/json
> > > RESP BODY: {"error": {"message": "The request you have made
> > > requires
> > > authentication.", "code": 401, "title": "Unauthorized"}}
> > >   _http_log_response /usr/lib/python2.7/dist-
> > > packages/keystoneauth1/session.py:395
> > > 2017-11-30 15:19:58.158 93 ERROR nova.api.openstack.extensions
> > > [req-
> > > 5820c19b-fb11-43a2-8513-0782540b3d32
> > > c756af2957c4447eafc4cef39cdb79e5
> > > 469dc3d300df4d41aaea00db572043ae - default default] Unexpected
> > > exception in API method
> > > 2017-11-30 15:19:58.158 93 ERROR nova.api.openstack.extensions
> > > Traceback (most recent call last):
> > > 2017-11-30 15:19:58.158 93 ERROR
> > > nova.api.openstack.extensions   File
> > > "/usr/lib/python2.7/dist-
> > > packages/nova/api/openstack/extensions.py",
> > > line 338, in wrapped
> > > 2017-11-30 15:19:58.158 93 ERROR
> > > nova.api.openstack.extensions return f(*args, **kwargs)
> > > 2017-11-30 15:19:58.158 93 ERROR
> > > nova.api.openstack.extensions   File
> > > "/usr/lib/python2.7/dist-packages/nova/api/openstack/common.py",
> > > line
> > > 359, in inner
> > > 2017-11-30 15:19:58.158 93 

Re: [openstack-dev] [nova] [cinder] nova cannot create instance snapshot after ocata upgrade

2017-12-01 Thread Kim-Norman Sahm
this is my cinder section of the nova.conf

[cinder]
os_region_name=myregion
cross_az_attach=False
catalog_info=volumev3:cinderv3:internalURL


i don't find anything about cinder authentication in the nova config
options. https://docs.openstack.org/ocata/config-reference/compute/config-options.html



Am Donnerstag, den 30.11.2017, 11:30 -0600 schrieb Matt Riedemann:
> On 11/30/2017 9:30 AM, Kim-Norman Sahm wrote:
> > 
> > after upgrade openstack newton -> ocata i cannot create snapshots
> > of my
> > instances.
> > 
> > if i try to create a snapshot of a instance horizon get this error:
> > "Error: Unable to create snapshot."
> > create a snapshot of a cinder volume  via openstackcli is working.
> > 
> > nova.log
> > 
> > 2017-11-30 15:19:57.875 93 DEBUG cinderclient.v3.client [req-
> > 5820c19b-
> > fb11-43a2-8513-0782540b3d32 c756af2957c4447eafc4cef39cdb79e5
> > 469dc3d300df4d41aaea00db572043ae - default default] REQ: curl -g -i
> > -X
> > GET https://cinder:8776/v3/469dc3d300df4d41aaea00db572043ae/volumes
> > /c67
> > b5cf3-0beb-4efa-9177-d2b6498185fb -H "X-Service-Token:
> > {SHA1}29a46cd87988e2bb905dbd3e796401aa23dff1a5" -H "User-Agent:
> > python-
> > cinderclient" -H "Accept: application/json" -H "X-Auth-Token:
> > {SHA1}524061f0ab91e64ed6241e437792346f90df856e" _http_log_request
> > /usr/lib/python2.7/dist-packages/keystoneauth1/session.py:347
> > 2017-11-30 15:19:57.890 92 INFO nova.osapi_compute.wsgi.server
> > [req-
> > d83d5b73-fd24-406c-ad6b-feed6a40bfae
> > c756af2957c4447eafc4cef39cdb79e5
> > 469dc3d300df4d41aaea00db572043ae - default default] 10.78.21.2 "GET
> > /v2.1/flavors/203/os-extra_specs HTTP/1.1" status: 200 len: 448
> > time:
> > 0.0326798
> > 2017-11-30 15:19:58.148 93 DEBUG cinderclient.v3.client [req-
> > 5820c19b-
> > fb11-43a2-8513-0782540b3d32 c756af2957c4447eafc4cef39cdb79e5
> > 469dc3d300df4d41aaea00db572043ae - default default] RESP: [401]
> > Date:
> > Thu, 30 Nov 2017 15:19:57 GMT Server: Apache/2.4.18 (Ubuntu) x-
> > openstack-request-id: req-22378faa-880b-4a80-a83e-41936741839e WWW-
> > Authenticate: Keystone uri='https://keystone:5000/' Content-Length: 
> > 114
> > Content-Type: application/json
> > RESP BODY: {"error": {"message": "The request you have made
> > requires
> > authentication.", "code": 401, "title": "Unauthorized"}}
> >   _http_log_response /usr/lib/python2.7/dist-
> > packages/keystoneauth1/session.py:395
> > 2017-11-30 15:19:58.149 93 DEBUG cinderclient.v3.client [req-
> > 5820c19b-
> > fb11-43a2-8513-0782540b3d32 c756af2957c4447eafc4cef39cdb79e5
> > 469dc3d300df4d41aaea00db572043ae - default default] GET call to
> > cinderv3 for https://cinder:8776/v3/469dc3d300df4d41aaea00db572043a
> > e/vo
> > lumes/c67b5cf3-0beb-4efa-9177-d2b6498185fb used request id req-
> > 22378faa-880b-4a80-a83e-41936741839e request
> > /usr/lib/python2.7/dist-
> > packages/keystoneauth1/session.py:640
> > 2017-11-30 15:19:58.157 93 DEBUG cinderclient.v3.client [req-
> > 5820c19b-
> > fb11-43a2-8513-0782540b3d32 c756af2957c4447eafc4cef39cdb79e5
> > 469dc3d300df4d41aaea00db572043ae - default default] RESP: [401]
> > Date:
> > Thu, 30 Nov 2017 15:19:58 GMT Server: Apache/2.4.18 (Ubuntu) x-
> > openstack-request-id: req-02ebac9f-794a-46f4-85b2-0e429a1785cf WWW-
> > Authenticate: Keystone uri='https://keystone:5000/' Content-Length: 
> > 114
> > Content-Type: application/json
> > RESP BODY: {"error": {"message": "The request you have made
> > requires
> > authentication.", "code": 401, "title": "Unauthorized"}}
> >   _http_log_response /usr/lib/python2.7/dist-
> > packages/keystoneauth1/session.py:395
> > 2017-11-30 15:19:58.158 93 ERROR nova.api.openstack.extensions
> > [req-
> > 5820c19b-fb11-43a2-8513-0782540b3d32
> > c756af2957c4447eafc4cef39cdb79e5
> > 469dc3d300df4d41aaea00db572043ae - default default] Unexpected
> > exception in API method
> > 2017-11-30 15:19:58.158 93 ERROR nova.api.openstack.extensions
> > Traceback (most recent call last):
> > 2017-11-30 15:19:58.158 93 ERROR
> > nova.api.openstack.extensions   File
> > "/usr/lib/python2.7/dist-
> > packages/nova/api/openstack/extensions.py",
> > line 338, in wrapped
> > 2017-11-30 15:19:58.158 93 ERROR
> > nova.api.openstack.extensions return f(*args, **kwargs)
> > 2017-11-30 15:19:58.158 93 ERROR
> > nova.api.openstack.extensions   File
> > "/usr/lib/python2.7/dist-packages/nova/api/openstack/common.py",
> > line
> > 359, in inner
> > 2017-11-30 15:19:58.158 93 ERROR
> > nova.api.openstack.extensions return f(*args, **kwargs)
> > 2017-11-30 15:19:58.158 93 ERROR
> > nova.api.openstack.extensions   File
> > "/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py",
> > line 108, in wrapper
> > 2017-11-30 15:19:58.158 93 ERROR
> > nova.api.openstack.extensions return func(*args, **kwargs)
> > 2017-11-30 15:19:58.158 93 ERROR
> > nova.api.openstack.extensions   File
> > "/usr/lib/python2.7/dist-packages/nova/api/validation/__init__.py",
> > line 108, in wrapper
> > 

Re: [openstack-dev] [release] Release countdown for week R-12, December 2-8

2017-12-01 Thread Dmitry Tantsur

On 11/30/2017 02:40 PM, Sean McGinnis wrote:

Development Focus
-

The Queens-2 milestone deadline is December 7th. All projects with specific
deadlines should be wrapping them up in time to submit the release request
before the end of day on the 7th.

General Information
---

Membership freeze coincides with milestone 2 [0]. This means projects that have
not done a release yet must do so for the next two milestones to be included in
the Queens release.

[0] https://releases.openstack.org/queens/schedule.html#q-mf

The following libraries have not done a release yet in the queens cycle:
 
ceilometermiddleware

django_openstack_auth
glance-store
heat-translator
ldappool
mistral-lib
os-client-config
os-vif
osc-lib
pycadf
python-aodhclient
python-barbicanclient
python-ceilometerclient
python-cloudkittyclient
python-congressclient
python-designateclient
python-freezerclient
python-glanceclient
python-ironic-inspector-client


Sorry, we had to get some big changes in. We're ready to request a release on 
Monday.



python-karborclient
python-keystoneclient
python-magnumclient
python-mistralclient
python-muranoclient
python-neutronclient
python-novaclient
python-octaviaclient
python-pankoclient
python-searchlightclient
python-senlinclient
python-solumclient
python-swiftclient
python-tackerclient
python-tricircleclient
python-vitrageclient
python-zaqarclient
requestsexceptions
tosca-parser

For library-only projects, please be aware of the membership freeze mentioned
above.

Remember that there are client and non-client library freezes for the release
starting mid-January.

If there are any questions about preparing a release by the 7th, please come
talk to us in #openstack-releases.

Upcoming Deadlines & Dates
--

Queens-2 Milestone: December 7
Final non-client library release deadline: January 18
Final client library release deadline: January 25
Queens-3 Milestone: January 25
Rocky PTG in Dublin: Week of February 26, 2018




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] rename ovb jobs?

2017-12-01 Thread Dmitry Tantsur

On 11/30/2017 08:11 PM, Emilien Macchi wrote:

A few months ago, we renamed ovb-updates to be
tripleo-ci-centos-7-ovb-1ctlr_1comp_1ceph-featureset024.
The name is much longer but it describes better what it's doing.
We know it's a job with one controller, one compute and one storage
node, deploying the quickstart featureset n°24.

For consistency, I propose that we rename all OVB jobs this way.
For example, tripleo-ci-centos-7-ovb-ha-oooq would become
tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001
etc.


Can we please include "introspection" (or any shortening of it) in the name of 
the jobs that cover it? That would simplify life, at least for me :)




Any thoughts / feedback before we proceed?
Before someone asks, I'm not in favor of renaming the multinode
scenarios now, because they became quite familiar now, and it would
confuse people to rename the jobs.

Thanks,




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Policy Goal Queens-2 Update

2017-12-01 Thread Adam Heczko
AFAIR there was an attempt to push oslo.policy into Swift but it looks like
the patch was abandoned.
https://review.openstack.org/#/c/149930/
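
For context, the 'policy in code' part of the goal boils down to registering
policy defaults with oslo.policy instead of shipping them in a policy.json
file. A minimal, illustrative sketch (names and rules are made up, not taken
from any particular project):

    from oslo_policy import policy

    rules = [
        policy.DocumentedRuleDefault(
            name='example:get_widget',
            check_str='rule:admin_or_owner',
            description='Show a widget.',
            operations=[{'path': '/v1/widgets/{widget_id}',
                         'method': 'GET'}],
        ),
    ]

    def list_rules():
        # consumed by the enforcer and by oslopolicy-sample-generator
        return rules

Since Swift enforces authorization in its own middleware rather than through
oslo.policy, it doesn't map cleanly onto this pattern, which supports
dropping it from the 'Not Started' list.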

On Fri, Dec 1, 2017 at 8:04 AM, hie...@vn.fujitsu.com  wrote:

> FYI, I have updated the topic for Heat's works [1]. And finally no more
> projects in 'Not Started' list. :-)
>
> [1]. https://review.openstack.org/#/q/status:open+project:
> openstack/heat+branch:master+topic:policy-and-docs-in-code
>
> Regards,
> Hieu
>
> > -Original Message-
> > From: Lance Bragstad [mailto:lbrags...@gmail.com]
> > Sent: Friday, December 01, 2017 12:01 PM
> > To: OpenStack Development Mailing List (not for usage questions)
>  > d...@lists.openstack.org>
> > Subject: Re: [openstack-dev] [all] [tc] Policy Goal Queens-2 Update
> >
> >
> >
> > On 11/30/2017 07:00 PM, hie...@vn.fujitsu.com wrote:
> > > Lance,
> > >
> > >>> For the Swift project, I don't see oslo.policy in requirements.txt
> > >>> for now, then not sure they need to implement policy in code and the
> > >>> we got the same
> > >> thing with Solum.
> > >> So does that mean these can be removed as well? I'm wondering if
> > >> there is an official process here, or just a simple sign-off from a
> project maintainer?
> > > Swift did not use oslo.policy and use their own mechanism instead, so
> I guess we
> > can remove Swift along with remaining networking-* plugins as well.
> > >
> > > BTW, ceilometer had already deprecated and removed ceilometer API from
> > > Q, thus we can also remove ceilometer from the list too. [1]
> > >
> > > I have created PR regarding all above changes in [2].
> > Merged. Thanks for looking into this. New results should be available in
> the burndown
> > chart.
> > > Thanks,
> > > Hieu.
> > >
> > > [1].
> > > https://github.com/openstack/ceilometer/commit/d881dd52289d453b9f9d94c
> > > 7c32c0672a70a8064 [2].
> > > https://github.com/lbragstad/openstack-doc-migration-burndown/pull/1
> > >
> > >
> > >> -Original Message-
> > >> From: Lance Bragstad [mailto:lbrags...@gmail.com]
> > >> Sent: Thursday, November 30, 2017 10:41 PM
> > >> To: OpenStack Development Mailing List (not for usage questions)
> > >> 
> > >> Subject: Re: [openstack-dev] [all] [tc] Policy Goal Queens-2 Update
> > >>
> > >>
> > >>
> > >> On 11/29/2017 09:13 PM, da...@vn.fujitsu.com wrote:
> > >>> Hi all,
> > >>>
> > >>> I just want to share some related things to anyone are interested in.
> > >>>
> > >>> For the Neutron projects, I have discussed with them[1] but it is
> > >>> not really started, they want to consider more about all of
> > >>> networking projects before and I'm still waiting for the feedback to
> > >>> define the right way to
> > >> implement policy-in-code for networking projects.
> > >>> For the other extensions of Neutron, we got some
> > >>> recommendations[2][3] that we no need to implement policy-in-code
> > >>> into those projects because we already register policy in Neutron,
> > >>> so I think we can remove neutron-
> > >> fwaas, neutron-dynamic-routing, neutron-lib or even other networking
> > >> plugins out of "Not Started" list.
> > >> Awesome, thanks for the update! I've gone ahead and removed these
> > >> from the burndown chart [0]. Let me know if there are any others that
> > >> fall into this category and I'll get things updated in the tracking
> tool.
> > >>
> > >> [0]
> > >> https://github.com/lbragstad/openstack-doc-migration-
> > >> burndown/commit/f34c2f56692230f104354240bf0e4378dc0fea82
> > >>> For the Swift project, I don't see oslo.policy in requirements.txt
> > >>> for now, then not sure they need to implement policy in code and the
> > >>> we got the same
> > >> thing with Solum.
> > >> So does that mean these can be removed as well? I'm wondering if
> > >> there is an official process here, or just a simple sign-off from a
> project maintainer?
> > >>> [1]
> > >>> http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23opens
> > >>> ta
> > >>> ck-neutron.2017-10-31.log.html [2]
> > >>> http://eavesdrop.openstack.org/irclogs/%23openstack-lbaas/%23opensta
> > >>> ck
> > >>> -lbaas.2017-10-06.log.html#t2017-10-06T02:50:10
> > >>> [3] https://review.openstack.org/#/c/509389/
> > >>>
> > >>> Dai
> > >>>
> > >>>
> > >>
> > ___
> > >> ___
> > >>>  OpenStack Development Mailing List (not for usage questions)
> > >>> Unsubscribe:
> > >>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> > ___
> > ___
> > >  OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe:
> > > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> > ___
> > ___
> > OpenStack 

Re: [openstack-dev] [tripleo] rename ovb jobs?

2017-12-01 Thread Bogdan Dobrelya

On 11/30/17 8:11 PM, Emilien Macchi wrote:

A few months ago, we renamed ovb-updates to be
tripleo-ci-centos-7-ovb-1ctlr_1comp_1ceph-featureset024.
The name is much longer but it describes better what it's doing.
We know it's a job with one controller, one compute and one storage
node, deploying the quickstart featureset n°24.

For consistency, I propose that we rename all OVB jobs this way.
For example, tripleo-ci-centos-7-ovb-ha-oooq would become
tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001
etc.

Any thoughts / feedback before we proceed?
Before someone asks, I'm not in favor of renaming the multinode
scenarios now, because they became quite familiar now, and it would
confuse people to rename the jobs.

Thanks,



I'd like to see featuresets clarified in names as well. Just to bring 
the main message, w/o going into the test matrix details, like 
tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-ovn/ceph/k8s/tempest 
whatever it is.


--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev