[openstack-dev] [tc] [all] TC Report 44

2017-10-31 Thread Chris Dent


There's been a fair bit of TC discussion this past week, but as I'm packing
for my rather tortured journey to
[Sydney](https://www.openstack.org/summit/sydney-2017/) and preparing some
last-minute presentation materials, this is just a short TC Report, made up
of links to topics in the IRC logs:

* [Can openstack do self-healing?](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-10-24.log.html#t2017-10-24T21:37:33)
  This continues into the [following day](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-10-25.log.html#t2017-10-25T00:57:06).
  This discussion sort of asks "what are we actually trying to do
  here?" and seems to indicate there's a lot of missing functionality,
  depending on your use case and perspective.

* [Planning Sydney and Dublin meetings with the board](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-10-25.log.html#t2017-10-25T13:36:27).

* [More board meeting planning, and other summit planning](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-10-26.log.html#t2017-10-26T13:08:15).

I will attempt to take notes at the board meeting and report them
here. There will be no TC Report next week due to summit, nor the week
after, as I'll still be in Sydney, so expect the next one on the 21st of
November.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] [nova] [placement] resource providers update 40

2017-10-27 Thread Chris Dent
* https://review.openstack.org/#/c/513834/
  a refactor to a bit of db/test_resource_provider.py

* https://review.openstack.org/#/q/topic:bug/1724613
  demo test of https://bugs.launchpad.net/nova/+bug/1724613
  and https://bugs.launchpad.net/nova/+bug/1724633

# End

Thanks for reading this far. If you want a prize, you get one if you
help remove something from this list without also making it longer.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] [all][api] POST /api-sig/news

2017-10-26 Thread Chris Dent


Greetings OpenStack community,

Another short meeting this week. Participants have very busy radars; they
can't see the forest for the trees, nor for the mixed metaphors. The main
topic of discussion was the work required to prepare for summit, where
there will be an API-SIG forum session [5]. A bug has also been created to
remind us of the need to create a "changes-since" guideline [6].
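
For reference, the guideline would standardize the sort of polling query
some services already support; Nova, for example, accepts:

    GET /servers/detail?changes-since=2017-10-26T00:00:00Z

returning only those resources changed since the given time.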

# Newly Published Guidelines

None this week.

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None this week

# Guidelines Currently Under Review [3]

* A (shrinking) suite of several documents about doing version and service 
discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs that you are 
developing or changing, please address your concerns in an email to the OpenStack 
developer mailing list[1] with the tag "[api]" in the subject. In your email, 
you should include any relevant reviews, links, and comments to help guide the discussion 
of the specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our wiki page 
[4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] https://www.openstack.org/summit/sydney-2017/summit-schedule/events/20442/api-sig-feedback-and-discussion-session
[6] https://bugs.launchpad.net/openstack-api-wg/+bug/1727725

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [all] [elections] Technical Committee Election Results

2017-10-25 Thread Chris Dent

On Tue, 24 Oct 2017, Tony Breeds wrote:


On Mon, Oct 23, 2017 at 09:35:34AM +0100, Jean-Philippe Evrard wrote:


I agree, we should take care not to repeat this Pike trend. It looks
like Queens is better in terms of turnout (see the amazing positive
delta!). However, I can't help noticing that turnout has been slowly
declining (excluding some outliers) since the beginning of these stats.


Yup, the table makes that pretty visible.


I think we can't really make much in the way of conclusions about
the turnout data without comparing it with contributor engagement in
general. If many of the eligible voters have only barely crossed the
eligibility threshold (e.g., one commit) it's probably not
reasonable to expect them to care much about TC elections. We've
talked quite a bit lately about "casual contribution" being a growth
area.

A possibly meaningful correlation would be eligible voters to PTG
attendance to turnout; or, before the PTG era, the number of people
who got a free pass to summit, chose to use it, and then voted.

Dunno. Obviously it would be great if more people voted.


Me? No ;P  I do think we need to work out *why* turnout is declining
before determining how to correct it.  I don't really think that we can
get that information though.  Community members that aren't engaged
enough to participate in the election(s) are also unlikely to
participate in a survey asking why they didn't participate ;P


This is a really critical failing in the way we typically gather data.
We have huge survivorship bias.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] [tc] [all] TC Report 43

2017-10-24 Thread Chris Dent


# Welcome New TC Members

The main news to report about the OpenStack Technical Committee (TC) is
that the elections have finished and there are some new members. The
three incumbents who ran were returned for another year, meaning three new
people join. There's more information in a [superuser
article](http://superuser.openstack.org/articles/openstack-tc-pike-elections/).
Welcome and congratulations to everyone.

After each election a new chair is selected. Any member of the TC may
be the chair; self-nomination is done by posting a review. The traditional
chair, Thierry, has posted [his
nomination](https://review.openstack.org/#/c/514553/1).

A [welcome
message](http://lists.openstack.org/pipermail/openstack-tc/2017-October/001477.html)
was posted to the TC mailing list with information and references for
how things work.

# TC Participation

At last Thursday's [office
hours](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-10-19.log.html#t2017-10-19T15:01:02)
Emilien asked, as a thought experiment, what people thought of the
idea of TC term limits. In typical office hours fashion, this quickly
went off into a variety of topics, some only tangentially related to
term limits.

To summarize, incompletely, the pro-reason is: Make room and
opportunities for new leadership. The con-reason is: Maintain a degree
of continuity.

This led to some discussion of the value of "history and baggage" and
whether such things are a keel or anchor in managing the nautical
metaphor of OpenStack. We did not agree, which is probably good,
because the truth likely lies somewhere in the middle.

Things then circled back to the nature of the TC: court of last resort
or something with a more active role in executive leadership. If the former,
who does the latter? Many questions related to significant change are
never resolved because it is not clear who does these things.

There's a camp that says "the people who step up to do it". In my experience
this is a statement made by people in a position of privilege and may
(intentionally or otherwise) exclude others or lead to results which have
unintended consequences.

This then led to meandering about the nature of facilitation.

(Like I said, a variety of topics.)

We did not resolve these questions except to confirm that the only way
to address these things is to engage with not just the discussion, but
also the work.

# OpenStack Technical Blog

Josh Harlow showed up with [an
idea](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-10-19.log.html#t2017-10-19T18:19:30).
An OpenStack equivalent of the [kubernetes
blog](http://blog.kubernetes.io/), focused on interesting technology
in OpenStack. This came up again on
[Friday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-10-20.log.html#t2017-10-20T18:13:01).

It's clear that anyone and everyone _could_ write their own blogs and
syndicate to the [OpenStack planet](http://planet.openstack.org/) but
this doesn't have the same panache and potential cadence as an
official thing _might_. It comes down to people having the time. Eking
out the time for this blog, for example, can be challenging.

Since this is the second [week in a
row](https://anticdent.org/tc-report-42.html) that Josh showed up with
an idea, I wonder what next week will bring?

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] [nova] [placement] resource providers update 39

2017-10-20 Thread Chris Dent
do then your prize is a massive sense of accomplishment and
contribution.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] [tc] [all] TC Report 42

2017-10-17 Thread Chris Dent
The big news is a proposed broadening of the scope of the
Foundation. Examples include tools that enable NFV, edge computing,
CI/CD, containers. Stuff that _uses_ infrastructure. If this goes
through, one of the potential wins is that existing OpenStack may get
renewed focus and clarity of purpose.

# Reminder

If you can vote in the TC elections, please do.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] [nova] [placement] resource providers update 38

2017-10-13 Thread Chris Dent


This is update 38. It is confused because last week's update said in
the subject that it was 36 but in the body that it was 37. It was 37.

# Most Important

Most important is simply making progress on the stuff that's
outstanding. There's enough in progress, with its related
dependencies, that we pretty much just need to keep pushing things
forward.

# What's Changed

During the scheduler team meeting it was decided that the root provider
should be represented when showing a nested resource provider.
Previously it was just going to be the parent (which could be used to
traverse to the root).

There are still a few specs pending that are interdependent with the
main themes, so it would be good to get them reviewed:

* https://review.openstack.org/#/c/502306/
  Network bandwidth resource provider

* https://review.openstack.org/#/c/507052/
  Support traits in the Ironic driver

* https://review.openstack.org/#/c/508164/
  Add spec for symmetric GET and PUT of allocations
  (The POST /allocations spec depends on this one)

* https://review.openstack.org/#/c/509136/
  Fix issues for post-allocations spec

* https://review.openstack.org/#/c/504540/
  Spec for limiting GET /allocation_candidates

* https://review.openstack.org/#/c/510244/
  Granular Resource Request Syntax

* https://review.openstack.org/#/c/497733/
  Report CPU features to placement service by traits API

# Main Themes

## Nested Resource Providers

While working on nested resource providers it became clear there was
a lot of mixed up cruft in the resource provider objects, so before
the nested work there is now

https://review.openstack.org/#/q/status:open+topic:bp/de-orm-resource-providers

which is a stack of cleanups to how the SQL is managed in there. The
nested resource providers work is at:

https://review.openstack.org/#/q/branch:master+topic:bp/nested-resource-providers

This spec is important for making effective use of nested providers:

* https://review.openstack.org/#/c/510244/
  Granular Resource Request Syntax

And the work to make traits work is relevant here because, without
traits, nested providers aren't nearly as useful:

https://review.openstack.org/#/q/bp/add-trait-support-in-allocation-candidates

## Migration allocations

The migration allocations work is happening at:

  https://review.openstack.org/#/q/topic:bp/migration-allocations

Management of those allocations currently involves some raciness,
which birthed the idea of allowing POST /allocations. Some of the
code for that is in progress at

https://review.openstack.org/#/q/topic:bp/post-allocations
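
The shape being proposed (the spec is still under review, so treat this
as an illustration of the idea rather than the final schema) is a single
request that writes allocations for several consumers atomically,
roughly:

    POST /allocations

    {
        "<consumer uuid A>": {
            "allocations": {
                "<resource provider uuid>": {
                    "resources": {"VCPU": 1, "MEMORY_MB": 1024}
                }
            }
        },
        "<consumer uuid B>": {
            "allocations": { ... }
        }
    }

Writing the old and new consumers' allocations in one transaction is
what removes the raciness.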

There are two outstanding bits of spec required for that:

* https://review.openstack.org/#/c/508164/
  Add spec for symmetric GET and PUT of allocations

* https://review.openstack.org/#/c/509136/
  Fix issues for post-allocations spec

## Alternate Hosts

We want to be able to do retries within cells, so we need some
alternate hosts when returning a destination that are structured
nicely for RPC:

https://review.openstack.org/#/q/topic:bp/return-alternate-hosts

Matt has recently pointed out that this stuff may cause some hiccups
with the CachingScheduler that we'll need to resolve in some fashion,
either by making it work, documenting that it doesn't work, or...?

# Other Stuff

* https://review.openstack.org/#/c/508149/
   A spec in neutron for QoS minimum bandwidth allocation in Placement
   API

* https://review.openstack.org/#/q/topic:bug/1702420
  Fixes for shared providers map being incorrect

* https://review.openstack.org/#/c/499826/
  Include /resource_providers/uuid/allocations link
  (Matt has declared this needs a microversion)

* https://review.openstack.org/#/c/492247/
  Use ksa adapter for placement conf & requests

* https://review.openstack.org/#/q/topic:bp/placement-osc-plugin
  Placement plugin for osc

* https://review.openstack.org/#/c/492571/
  Make compute log less verbose with allocs autocorrection

* https://review.openstack.org/#/c/499539/
  Stack of functional test fixups

* https://review.openstack.org/#/c/495380/
  [placement] manage cache headers for /resource_providers

* https://review.openstack.org/#/c/511488/
  [placement] Confirm that empty resources query causes 400

* https://review.openstack.org/#/c/511485/
  [placement] add coverage for update of standard resource class

There's almost certainly more. Please add to the list if I've left off
something important.

# End

Your prize this week is a personal invitation to the TC elections.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] [all][api] POST /api-sig/news

2017-10-12 Thread Chris Dent


Greetings OpenStack community,

All the usual attendees of the weekly API-SIG meeting are rather busy, so
there's not a great deal of activity to report from a rather short
meeting. Three things of note:

* The forum session for the SIG [5] has been approved, see you in Sydney.
* There's been some discussion of recording an outreach video at summit. Ed's 
going to follow up on that on the openstack-sigs list [6].
* The video discussion raised the issue that there might be a need, now that 
the group has a wider audience, for an APAC-friendly meeting time. Michael is 
going to follow up on that, also on the openstack-sigs list. If you have 
thoughts on that, feel free to beat him to it.

# Newly Published Guidelines

None

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None this week

# Guidelines Currently Under Review [3]

* Updates for rename to SIG
  https://review.openstack.org/#/c/508242/

* A (shrinking) suite of several documents about doing version and service 
discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs that you are 
developing or changing, please address your concerns in an email to the OpenStack 
developer mailing list[1] with the tag "[api]" in the subject. In your email, 
you should include any relevant reviews, links, and comments to help guide the discussion 
of the specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our wiki page 
[4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] http://forumtopics.openstack.org/cfp/details/52
[6] http://lists.openstack.org/pipermail/openstack-sigs/

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] [tc] [all] TC Report 41

2017-10-10 Thread Chris Dent


( With brackets: https://anticdent.org/tc-report-41.html )

If there's a unifying theme in the mix of discussions that have
happened in `#openstack-tc` this past week, it is power: who has it,
what does it really mean, and how to exercise it.

This is probably because it is election season. After a slow start
there is now a [rather large number of
candidates](https://governance.openstack.org/election/#queens-tc-candidates),
a very diverse group. It's great to see.

On
[Wednesday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-10-04.log.html),
there were questions about the
[constellations](https://governance.openstack.org/tc/resolutions/20170404-vision-2019.html#navigating-with-constellations)
idea mooted in the TC Vision Statement and the extent to which the TC has power
to enforce integration between projects, especially between those considered
_core_ and those that are not (in this particular case Zaqar). The precedent
here is that the TC has minimal direct power in such cases (each project is
fairly autonomous), whereas individuals, some of whom happen to be on the TC,
do have power, by virtue of making specific changes in code. The role of the TC
in these kinds of situations is in making ideas and approaches visible (like
constellations) and drawing attention to needs (like the [top 5
list](https://governance.openstack.org/tc/reference/top-5-help-wanted.html)).

[Thursday's](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-10-05.log.html#t2017-10-05T15:00:01)
discussion provides an interesting counterpoint. There, some potential
candidates expressed concern about running because they were interested in
maintaining the good things that OpenStack has and had no specific agenda for
drastic or dramatic change, while candidates customarily express what they'd
like to change. This desire for stability is probably a good fit, because in
some ways the main power of the TC is choosing which projects to let into the
club and, in extreme cases, kicking bad projects out. The latter is
effectively the nuclear option: since nobody wants to use it, the autonomy
of projects is enhanced.

Continuing the rolling segues: On the same day, ttx provided access
to [the answers to two
questions](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-10-05.log.html#t2017-10-05T15:46:43)
related to "developer satisfaction" that were added to the PTG survey.
These aren't the original questions; they were adjusted to be
considerably more open-ended than the originals, which were
effectively yes-or-no questions. The questions:

* What is the most important thing we should do to improve the
  OpenStack software over the next year?
* What is one change that would make you happier as an OpenStack
  Upstream developer?

I intend to analyse and group these for themes when I have the time,
but just reading them en masse is interesting too. One
emerging theme is that some projects are perceived to have too much
power.

Which brings us to today's [office
hours](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-10-10.log.html#t2017-10-10T09:01:40)
where the power to say yes or no to a project was discussed again.

First up, [Glare](https://review.openstack.org/#/c/479285/).
There are a few different (sometimes overlapping) camps:

* If we can't come up with reasons to _not_ include them, we should
  include them.
* If we can't come up with reasons to include them, we should _not_
  include them.
* If they are going to cause difficulties for Glance or the stability of
  the images API, that's a risk.
* If the Glare use case is abstract storage of stuff, and that's
  useful for everyone, why should Glare be an OpenStack project and
  not a more global or general open source project?

This needs to be resolved soon. It would be easier to figure out if
there was already a small and clear use case being addressed by Glare
with a clear audience.

Then [Mogan](https://review.openstack.org/#/c/508400/), a bare metal
compute service. There the camps are:

* The overlaps with Nova and Ironic, especially at the API-level
  are a significant _problem_.
* The overlaps with Nova and Ironic, especially at the API-level
  are a significant _opportunity_.

Straight [into the
log](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-10-10.log.html#t2017-10-10T09:55:08)
for more.

Finally, we landed on the topic of whether there's anything the TC can
do to help with the [extraction of
placement](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-10-10.log.html#t2017-10-10T10:06:18)
from nova.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent

[openstack-dev] [nova] [placement] resource providers update 36

2017-10-06 Thread Chris Dent


Update 37 is struggling to contain itself; there's too much code
associated with placement!

# Most Important^wRecent

Discussion last night uncovered some disagreement and confusion over
the content of the Selection object that will be used to send
alternate destination hosts when doing builds. The extent to which an
allocation request is not always opaque, and the need to be explicit
about microversions, were clarified, so edleafe is going to make some
adjustments, after first resolving the prerequisite code (alternate
hosts: https://review.openstack.org/#/c/486215/ ).

# What's Changed

Nested providers spec merged, selection objects spec merged (but see
above and below), alternate hosts spec merged, request traits in nova
spec merged, minimal cache header spec merged, POST /allocations spec
merged.

* http://specs.openstack.org/openstack/nova-specs/specs/queens/approved/migration-allocations.html
* http://specs.openstack.org/openstack/nova-specs/specs/queens/approved/nested-resource-providers.html
* http://specs.openstack.org/openstack/nova-specs/specs/queens/approved/placement-cache-headers.html
* http://specs.openstack.org/openstack/nova-specs/specs/queens/approved/post-allocations.html
* http://specs.openstack.org/openstack/nova-specs/specs/queens/approved/request-traits-in-nova.html
* http://specs.openstack.org/openstack/nova-specs/specs/queens/approved/return-alternate-hosts.html
* http://specs.openstack.org/openstack/nova-specs/specs/queens/approved/return-selection-objects.html

There are additional specs not yet merged, placement related, some of
which the above depend on, that need some attention.

* https://review.openstack.org/#/c/502306/
  Network bandwidth resource provider

* https://review.openstack.org/#/c/507052/
  Support traits in the Ironic driver

* https://review.openstack.org/#/c/508164/
  Add spec for symmetric GET and PUT of allocations
  (The POST /allocations spec depends on this one)

* https://review.openstack.org/#/c/509136/
  Fix issues for post-allocations spec

* https://review.openstack.org/#/c/504540/
  Spec for limiting GET /allocation_candidates

* https://review.openstack.org/#/c/497713/
  Add trait support in the allocation candidates API

# Main Themes

## Nested Resource Providers

While working on nested resource providers it became clear there was
a lot of mixed-up cruft in the resource provider objects, so before
the nested work there is now

https://review.openstack.org/#/q/status:open+topic:no-orm-resource-providers

which is a stack of cleanups to how the SQL is managed in there. When
that is done, the conflicts at

https://review.openstack.org/#/q/status:open+topic:bp/nested-resource-providers

will be resolved and nested work will continue.

## Migration allocations

The migration allocations work is happening at:

 https://review.openstack.org/#/q/topic:bp/migration-allocations

Management of those allocations currently involves some raciness,
birthing the specs (above) to allow POST /allocations, some of the
code for that is in progress at

https://review.openstack.org/#/q/topic:bp/post-allocations

## Alternate Hosts

We want to be able to do retries within cells, so we need some
alternate hosts when returning a destination that are structured
nicely for RPC:

https://review.openstack.org/#/q/topic:bp/return-selection-objects

# Other Stuff

* https://review.openstack.org/#/c/508149/
  A spec in neutron for QoS minimum bandwidth allocation in Placement
  API

There's plenty of other stuff too, but much of it is covered in
the links above; to avoid a tyranny of choice I'll just leave it off
for now. There's plenty of existing stuff to think about.

# End

Your prize this week is vegetable tempura.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] [tc] [all] TC Report 40

2017-10-03 Thread Chris Dent


( rendered: https://anticdent.org/tc-report-40.html )

This week opens OpenStack Technical Committee (TC) election season.
There's an [announcement email
thread](http://lists.openstack.org/pipermail/openstack-dev/2017-October/122933.html)
(note the followup with some corrections). Individuals in the
OpenStack community may self-nominate up until 2017-10-08, 23:45 UTC.
There are instructions for [how to submit your
candidacy](https://governance.openstack.org/election/#how-to-submit-your-candidacy).

If you are interested you should put yourself forward to run. The TC
is better when it has a mixture of voices and experiences. The
absolute time commitment is less than you probably think (you can
make it much more if you like) and no one is expected to be a world
leading expert in coding and deploying OpenStack. The required
experience is being engaged in, with, and by the OpenStack community.

Election season inevitably leads to questions of:

* what the TC _is designed_ to do
* what the TC _should_ do
* what the TC _actually_ did lately

A year ago Thierry published [What is the Role of the OpenStack
Technical Committee](https://ttx.re/role-of-the-tc.html):


Part of the reason why there are so many misconceptions about the
role of the TC is that its name is pretty misleading. The Technical
Committee is not primarily technical: most of the issues that the TC
tackles are open source project governance issues.


Then this year he wrote [Report on TC activity for the May-Oct 2017
membership](http://lists.openstack.org/pipermail/openstack-dev/2017-October/122962.html).

Combined, these go some distance to answering the design and actuality
questions.

The "should" question can be answered by the people who are able and
choose to run for the TC. Throughout the years people have taken
different approaches, some considering the TC a sort of reactive
judiciary that mediates and adjudicates disagreements while others
take the view that the TC should have a more active and executive
leadership role.

Some of this came up in [today's office
hours](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-10-03.log.html#t2017-10-03T09:01:27)
where I reported participating in a few conversations with people who
felt the TC was not relevant, so why run? The ensuing conversation may
be of interest if you're curious about the intersection of economics,
group dynamics, individualism versus consensualism in collaborative
environments, perception versus reality, and the need for leadership
and hard work.

# Other Topics

Conversations on Wednesday and Thursday of last week hit a couple of other
topics.

## LTS

On Wednesday the topic of [Long Term
Support](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-09-27.log.html#t2017-09-27T17:15:24)
came up again. There are effectively two camps:

* Those who wonder why this should be an upstream problem at all, as
  long as we are testing upgrades from N-1 we're doing what needs to
  be done.

* Those who think that if multiple companies are going to be working
  on LTS solutions anyway, wouldn't it be great to not duplicate
  effort?

And we hear reports of organizations that want LTS to exist, but are
not willing to dedicate resources to see it happen, evidently still
confusing large-scale open source with "yay! I get free stuff!".

## Overlapping Projects

On Thursday we discussed some of the mechanics and challenges when
dealing with [overlapping
projects](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-09-28.log.html#t2017-09-28T15:01:35)
in the form of Trove and a potential new database-related project with
the working title of "Hoard". Amongst other things there's discussion of
properly using the [service types authority](https://service-types.openstack.org/)
and effectively naming resources when there may be another thing that
wants to use a similar name for not quite the same purpose.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] [nova] placement/resource providers update 36

2017-09-29 Thread Chris Dent


Update 36, accelerating into the cycle, is thinking about specs.

# Most Important

There are several specs outstanding for the main placement-related
work that is prioritized for this cycle. And some of those specs have
spin-off specs inspired by them. Since a spec sprint is planned for
early next week, I'll break with tradition and format things
differently this time, putting the emphasis on the specs we need to
get out of the way.

The three main priorities are migration uuid for allocations,
alternate hosts, and nested providers.

## Nested Resource Providers

The nested resource providers spec is at

https://review.openstack.org/#/c/505209/

It was previously accepted, but with all the recent talk about dealing
with traits on nested providers there's some discussion happening
there. There's a passel of related specs, about implementing traits in
various ways:

* https://review.openstack.org/#/c/497713/
  Add trait support in the allocation candidates API

* https://review.openstack.org/#/c/468797/
  Request traits in Nova

John has started a spec about using traits with Ironic:

https://review.openstack.org/#/c/507052/

The NRP implementation is at:

https://review.openstack.org/#/q/topic:bp/nested-resource-providers

## Migration allocations

The migration allocations spec has already merged

https://review.openstack.org/#/c/498510/

and the work for it is ongoing at:

https://review.openstack.org/#/q/topic:bp/migration-allocations

Management of those allocations currently involves some raciness,
plans to address that are captured in:

* https://review.openstack.org/#/c/499259/
  Add a spec for POST /allocations in placement

but that proposes a change in the allocation representation which
ought to first be reflected in PUT /allocations/{consumer_uuid};
that's at:

* https://review.openstack.org/#/c/508164/
  Add spec for symmetric GET and PUT of allocations

## Alternate Hosts

We want to be able to do retries within cells, so we need some
alternate hosts when returning a destination, the spec for that
is:

https://review.openstack.org/#/c/504275/

We want that data to be formatted in a way that causes neither fear
nor despair, so a spec for "Selection" objects exists:

https://review.openstack.org/#/c/498830/

Implementation ongoing at:


https://review.openstack.org/#/q/topic:bp/placement-allocation-requests+status:open

## Other Specs

* https://review.openstack.org/#/c/496853/
  Add a spec for minimal cache headers in placement

* https://review.openstack.org/#/c/504540/
  Spec for limiting GET /allocation_candidates
  (This one needs some discussion about what the priorities are, lots
  of good but different ideas on the spec)

* https://review.openstack.org/#/c/502306/
  Network bandwidth resource provider

# End

Next time we'll go back to the usual format.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [nova] Running large instances with CPU pinning and OOM

2017-09-28 Thread Chris Dent

On Thu, 28 Sep 2017, Premysl Kouril wrote:


Only the memory mapped for the guest is strictly allocated from the
NUMA node selected. The QEMU overhead should float on the host NUMA
nodes. So it seems that the "reserved_host_memory_mb" is enough.



Even if that were true and overhead memory could float across NUMA
nodes, it generally doesn't prevent us from running into OOM troubles.
No matter where (in which NUMA node) the overhead memory gets
allocated, it is not included in the available-memory calculation for
that NUMA node when provisioning a new instance, and thus can cause OOM
(once the guest operating system of the newly provisioned instance
actually starts allocating memory, which can only be allocated from its
assigned NUMA node).
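
To make the failure mode concrete, a sketch with made-up numbers:

    # Illustrative numbers only: per-node accounting that counts the
    # pinned guest RAM but not the floating QEMU overhead can let the
    # node be oversubscribed.
    node_capacity_mb = 65536   # NUMA node 0 as the scheduler sees it
    guest_pinned_mb = 61440    # guest RAM, strictly allocated from node 0
    qemu_overhead_mb = 6144    # floats on the host, counted against no node
    print(guest_pinned_mb + qemu_overhead_mb > node_capacity_mb)
    # True -> OOM risk on node 0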


Some of the discussion on this bug may be relevant:

https://bugs.launchpad.net/nova/+bug/1683858

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] [tc] [all] TC Report 39

2017-09-26 Thread Chris Dent


(If you prefer, there's html: <https://anticdent.org/tc-report-39.html>)

It has been a while since the last one of these that [had any
substance](https://anticdent.org/tc-report-33.html). The run up to the
[PTG](https://www.openstack.org/ptg) and travel to and fro meant
either that not much was happening or I didn't have time to write.
This week I'll attempt to catch up with TC activities (that I'm aware
of) from the PTG and this past week.

# Board Meeting

The Sunday before the PTG there was an all day meeting of the
Foundation Board, the Technical Committee, the User Committee and
members of the Interop and Product working groups. The
[agenda](https://wiki.openstack.org/wiki/Governance/Foundation/10Sep2017BoardMeeting)
was oriented towards updates on the current strategic focus
areas:

* Better communicate about OpenStack
* Community Health
* Requirements: Close the feedback loop
* Increase complementarity with adjacent technologies
* Simplify OpenStack

Each group gave an overview of the progress they've made since
[Boston](/openstack-pike-board-meeting-notes.html). [Mark
McLoughlin](https://crustyblaa.com/september-10-2017-openstack-foundation-board-meeting.html)
has a good overview of most of the topics covered.

I was on the hook to discuss what might be missing from the strategic
areas. In the "Community Health" section we often discuss making the
community inviting to new people, especially to under-represented
groups and making sure the community is capable of creating new
leaders. Both of these are very important (especially the first) but
what I felt was missing was attention to the experience of the regular
contributor to OpenStack who has been around for a while. A topic we
might call "developer happiness". There are a lot of dimensions to
that happiness, not all of which OpenStack is great at balancing.

It turns out that this was already a topic within the domain of
Community Health but had been set aside while progress was being made
on other topics. So now I've been drafted to be a member of that
group. I will start writing about it soon.

# PTG

The PTG was five days long. I intend to write a separate update about
the days in the API and Nova rooms; what follows are notes from the
TC-related sessions that I was able to attend.

As is the norm, there was an
[etherpad](https://etherpad.openstack.org/p/queens-PTG-TC-SWG) for the
whole week, which for at least some things has relatively good notes.
There's too much to report all that happened, so here are some
interesting highlights:

* To encourage community diversity and accept the reality of
  less-than-full time contributors it will become necessary to have
  more cores, even if they don't know everything there is to know
  about a project.
* Before the next TC election (coming soon: nominations start 29
  September) a report will be made on the progress made by the TC in
  the last 12 months, especially with regard to the goals expressed in
  the [vision statement](https://governance.openstack.org/tc/resolutions/20170404-vision-2019.html).
  We should have been doing this all along, but it is perhaps an
  especially good idea now that [regular meetings have stopped](https://governance.openstack.org/tc/resolutions/20170425-drop-tc-weekly-meetings.html).
* The TC will take greater action to make sure that strategic
  priorities (in the sense of "these are some of the things the TC
  observes that OpenStack should care about") are effectively
  publicised. These are themes that fit neither in the urgency of the
  [Top 5
  list](https://governance.openstack.org/tc/reference/top-5-help-wanted.html)
  nor in the concreteness of [OpenStack-wide
  Goals](https://governance.openstack.org/tc/goals/index.html). One
  idea is to prepare a short list before each PTG to set the tone.
  Work remains to flesh this one out.

# The Past Week

The week after the PTG it's hard to get rolling, so there's not a
great deal to report from office hours or otherwise. The busiest day
in `#openstack-tc` was
[Thursday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-09-21.log.html)
where the discussion was mostly about Glare's application to [be
official](https://review.openstack.org/#/c/479285/). This has raised a
lot of questions, many of which are in the IRC log or on the review.
As is often the case with contentious project applications, the
questions frequently reflect (as they should) the biases and goals the
reviewers have for OpenStack as a whole. For example I asked "Why
should Glare be an _OpenStack_ project rather than a more global
project (that happens to have support for keystone)?" while others
expressed concern for any overlap (or perception thereof) between
Glance and Glare and still others said the equivalent of "come on,
enough with this, let's just get on with it, there's enough work to go
around."

And with that I must end this for this week.

Re: [openstack-dev] Garbage patches for simple typo fixes

2017-09-25 Thread Chris Dent

On Fri, 22 Sep 2017, Paul Belanger wrote:


This is not a good example of encouraging anybody to contribute to the project.


Yes. This entire thread was a bit disturbing to read. Yes, I totally
agree that mass patches that do very little are a big cost to
reviewer and CI time but a lot of the responses sound like: "go away
you people who don't understand our special culture and our
important work".

That's not a good look.

Matt's original comment is good in and of itself: I saw a thing,
let's remember to curtail this stuff and do it in a nice way.

But then we generate a long thread about it. It's odd to me that
these threads sometimes draw more people out than discussions about
actually improving the projects.

It's also odd that if OpenStack were small and differently
structured, any self-respecting maintainer would be happy to see
a few typo fixes and generic cleanups. Anything to push the quality
forward is nice. But because of the way we do review and because of
the way we do CI these things are seen as expensive distractions[1].
We're old and entrenched enough now that our tooling enforces our
culture and our culture enforces our tooling.

[1] Note that I'm not denying they are expensive distractions nor
that they need to be managed as such. They are, but a lot of that
is on us.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] [nova] placement/resource providers update 35

2017-09-22 Thread Chris Dent
* https://review.openstack.org/#/c/493865/
functional tests for live migrate

* https://review.openstack.org/#/c/494136/
Allow shuffling of best weighted hosts

* https://review.openstack.org/#/c/495159/
tests for resource allocation during soft delete

* https://review.openstack.org/#/c/485209/
gabbi tests for shared custom resource class

* https://review.openstack.org/#/c/496853/
Spec for minimal cache-headers in placement
poc: https://review.openstack.org/#/c/495380/

* https://review.openstack.org/#/c/489633/
Update RT aggregate map less frequently

* https://review.openstack.org/#/c/494206/
Remove the Pike migration code for flavor migration

* https://review.openstack.org/#/c/468797/
Spec for requesting traits in flavors

* https://review.openstack.org/#/c/428481/
Request zero root disk for boot-from-volume instances
(Relevant for making sure that disk allocations are correct.)

* https://review.openstack.org/#/c/452006/
Add functional test for two-cell scheduler behaviors

* https://review.openstack.org/#/c/497399/
Extend ServerMovingTests with custom resources

* https://review.openstack.org/#/c/497733/
WIP spec Report CPU features to placement service by traits API

* https://review.openstack.org/#/c/496847/
Add missing tests for _remove_deleted_instances_allocations

* https://review.openstack.org/#/c/492247/
Use ksa adapter for placement conf & requests

* https://review.openstack.org/#/c/492571/
Make compute log less verbose with allocs autocorrection

* https://review.openstack.org/#/c/499826/
   Include /resource_providers/uuid/allocations link
   (controversy around this one!)

* https://review.openstack.org/#/q/topic:bug/1718455
  Fix moving a single instance when it was created as part of a
  multiple

(There's probably more, but I reckon that's plenty. Some of the things
on this list have been around a long time. Let's flush them through or
reject them please.)

# End

There's been some discussion about avoiding placement-related topics
at the forum, to give other issues, notably feedback from ops and
users, a greater opportunity. I think this is a great idea; we've
already got plenty queued up.

This week's prize for reading this far is a less encumbered Forum
schedule.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [nova] [placement] Modeling SR-IOV with nested resource providers

2017-09-13 Thread Chris Dent

On Wed, 13 Sep 2017, Jay Pipes wrote:

We still need a way to represent a request to placement to find allocation 
candidates for like resources, though. As you pointed out, I've thought about 
using multiple requests to placement from the conductor or scheduler. We 
could also do something like this:


GET /allocation_candidates?resources=VCPU:1,MEMORY_MB:1024
    &resources=SRIOV_NET_VF:1&required=CUSTOM_PHYSNET_A,CUSTOM_SWITCH_1
    &resources=SRIOV_NET_VF:1&required=CUSTOM_PHYSNET_A,CUSTOM_SWITCH_2


To clarify, this translates to:

* give me one compute node with 1 VCPU and 1024 MEMORY_MB that has
* 2 VFs
  * both on physnet A
  * one on switch 1
  * one on switch 2
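
Reading the query string as groups, each resources/required pair
describes one VF (the Granular Resource Request Syntax spec linked
from the resource providers updates proposes numbered suffixes,
resources1/required1, to make such grouping unambiguous):

    resources=VCPU:1,MEMORY_MB:1024            -> the compute node itself
    resources=SRIOV_NET_VF:1
    required=CUSTOM_PHYSNET_A,CUSTOM_SWITCH_1  -> first VF
    resources=SRIOV_NET_VF:1
    required=CUSTOM_PHYSNET_A,CUSTOM_SWITCH_2  -> second VF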


--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [ptg][nova][neutron] modelling network capabilities and capacity in placement and nova neutron port binding negociation.

2017-09-12 Thread Chris Dent

On Mon, 11 Sep 2017, Eric Fried wrote:


Folks-

I tentatively snagged Tuesday 13:30-14:30 in the compute/vm/bm room for
this, and started an etherpad [1].

o Please add your nick to the interest list if you want to be pinged for
updates (e.g. in case we move rooms/times).  (Miguel, what's your IRC nick?)
o Feel free to flesh out the schedule/scope/topics.
o Let me know if this time/location doesn't work for you.
o It would be nice to have a rep from Neutron handy :)

[1] https://etherpad.openstack.org/p/placement-nova-neutron-queens-ptg


I've added links to pics from yesterday's spontaneous whiteboarding
to this etherpad.

What (if anything) is the difference between this session and
etherpad and the one that Sean has created a bit later in the thread?

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] [nova] placement/resource providers update 34

2017-09-01 Thread Chris Dent
https://review.openstack.org/#/c/495380/

* https://review.openstack.org/#/c/469048/
   Update the placement deployment instructions
   This has been around for nearly 4 months.

* https://review.openstack.org/#/c/489633/
   Update RT aggregate map less frequently

* https://review.openstack.org/#/c/494206/
   Remove the Pike migration code for flavor migration

* https://review.openstack.org/#/c/468797/
   Spec for requesting traits in flavors

* https://review.openstack.org/#/c/428481/
   Request zero root disk for boot-from-volume instances
   (Relevant for making sure that disk allocations are correct.)

* https://review.openstack.org/#/c/452006/
   Add functional test for two-cell scheduler behaviors

* https://review.openstack.org/#/c/496202/
   Add functional migrate force_complete test

* https://review.openstack.org/#/c/497399/
   Extend ServerMovingTests with custom resources

* https://review.openstack.org/#/c/497733/
   WIP spec Report CPU features to placement service by traits API

* https://review.openstack.org/#/c/496976/
   Centralize allocation deletion in ComputeManager

* https://review.openstack.org/#/c/496803/
   Add missing unit tests for FilterScheduler._get_all_host_states

* https://review.openstack.org/#/c/496847/
   Add missing tests for _remove_deleted_instances_allocations

* https://review.openstack.org/#/c/492247/
   Use ksa adapter for placement conf & requests

* https://review.openstack.org/#/c/492571/
   Make compute log less verbose with allocs autocorrection

* https://review.openstack.org/#/c/499678/
  The start of a stack of fixes for allocations not being created when
  forced evacuation host

* https://review.openstack.org/#/c/499615/
  [placement] Add test for empty resources in allocation

* https://review.openstack.org/#/c/498627/
  Add functional recreate test for live migration pre-check fails

* https://review.openstack.org/#/c/499826/
  Include /resource_providers/uuid/allocations link
  (controversy around this one!)

* https://review.openstack.org/#/c/499682/
  [placement] api-ref GET /traits name:startswith

* https://review.openstack.org/#/c/497978/
  WIP: SPEC: Treat devices as generic resources

* https://review.openstack.org/#/c/499583/
  Add functional for live migrate delete

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [nova][cinder][glance] What should be the response for invalid 'Accpet' header?

2017-09-01 Thread Chris Dent

On Fri, 1 Sep 2017, Sean McGinnis wrote:


Thanks Chris, I wasn't aware of that. What's your opinion - change
it to be more strict, or leave it as is?


Ideally we'd have true content negotiation and support multiple
representations with different content types and different versions.

;)/2

But since we don't I think existing services can probably forgo
making any changes unless they are eager to tighten things up.

For some additional context:

Placement vaguely supports content negotiation but not in any
significant way. The docstring of a check_accept decorator says:
"If accept is set explicitly, try to follow it. If there is no match
for the incoming accept header send a 406 response code.  If accept
is not set send our usual content-type in response."

Because of the way placement uses webob, and the way webob uses the
accept header to control the formatting of error responses, content
negotiation does come into play on error responses: they are
text/html unless there is an accept header of application/json.
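
Purely as a sketch (not placement's actual code), a decorator with the
behaviour that docstring describes, assuming handlers that take a
single webob Request, could look like:

    import functools

    import webob.exc

    def check_accept(*types):
        """Send 406 when Accept is set and matches none of types."""
        def decorator(handler):
            @functools.wraps(handler)
            def wrapper(req):
                # Only negotiate when the client set Accept explicitly.
                if req.accept and not req.accept.best_match(types):
                    raise webob.exc.HTTPNotAcceptable(
                        'Only %s is provided' % ', '.join(types))
                return handler(req)
            return wrapper
        return decorator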


--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [nova][cinder][glance] What should be the response for invalid 'Accpet' header?

2017-09-01 Thread Chris Dent

On Thu, 31 Aug 2017, Singh, Niraj wrote:


As of now, when a user passes an 'Accept' header other than JSON or XML in a
request made with the curl command, it returns a 200 OK response with JSON
format data. The api-ref guide [1] also does not clearly mention what
response should be returned if an invalid value for the 'Accept' header is
specified. IMO instead of 'HTTP 200 OK' it should return 'HTTP 406 Not
Acceptable'.


I posted this on the bug you created (thanks for that) but will
paste it here too for completeness:

I generally agree that this is bad behavior and it would be nice if
406 were the response.

However, this isn't violating the HTTP 1.1 RFCs.
https://tools.ietf.org/html/rfc7231#section-5.3.2 says:

"If the header field is present in a request and none of the
available representations for the response have a media type that is
listed as acceptable, the origin server can either honor the header
field by sending a 406 (Not Acceptable) response or disregard the
header field by treating the response as if it is not subject to
content negotiation."

As far as I'm aware very very few (if any) openstack services do
content negotiation. They only return JSON. Given that, it is
acceptable (ha!) for the header to be disregarded if that's what
people choose.
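
As a concrete (and purely illustrative) check against any JSON-only
service:

    $ curl -si -H 'Accept: text/vnd.fancy' https://compute.example.com/v2.1/ | head -1
    HTTP/1.1 200 OK

The header is disregarded and the body comes back as JSON anyway.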


--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [kolla][puppet][openstack-ansible] Better way to run wsgi service.

2017-08-25 Thread Chris Dent

On Thu, 24 Aug 2017, Jeffrey Zhang wrote:


I propose we use uwsgi's native http mode [1]. Then one uwsgi
container is enough to run the nova-api service. Based on the official
doc, if there are no static resources, uWSGI is recommended for use
as a real http server.


I would consider using uwsgi in uwsgi mode, to run the api server in
a container independently. Using uwsgi mode preserves the capability
of your container (or containers) being anywhere, but is more
performant than http mode.

The relatively new uwsgi stuff on devstack has relatively clean ways
to run uwsgi services under systemd, but does so with unix domain
sockets. This might be a useful guide for writing uwsgi.ini files
for single processes in containers. Some adaptations will be
required.

And then reverse proxy to the container from either nginx or apache2
such that every service is on the same
https://api.mycloud.example.com/ host with the service on a path
prefix: https://api.mycloud.example.com/compute/,
https://api.mycloud.example.com/block-storage, etc.
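
To sketch what such a uwsgi.ini might look like (every path, port and
count here is an illustrative guess, not a tested config):

    [uwsgi]
    ; speak the binary uwsgi protocol on a TCP socket the proxy can reach
    socket = 0.0.0.0:8774
    ; the WSGI entry point for the service (illustrative path)
    wsgi-file = /usr/local/bin/nova-api-wsgi
    master = true
    processes = 2
    threads = 4
    ; play nicely with container supervisors sending SIGTERM
    die-on-term = true

The proxy side is then a one-liner: uwsgi_pass in nginx, or
mod_proxy_uwsgi in apache2.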


--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] [nova] placement/resource providers update 33

2017-08-25 Thread Chris Dent
 RP maps to those RPs that share with it
  This is a requirement for getting shared providers working
  correctly.

* https://review.openstack.org/#/c/496853/
  Spec for minimal cache-headers in placement
  poc: https://review.openstack.org/#/c/495380/

* https://review.openstack.org/#/c/469048/
  Update the placement deployment instructions
  This has been around for nearly 4 months.

* https://review.openstack.org/#/c/489633/
  Update RT aggregate map less frequently

* https://review.openstack.org/#/c/494206/
  Remove the Pike migration code for flavor migration

* https://review.openstack.org/#/c/468797/
  Spec for requesting traits in flavors

* https://review.openstack.org/#/c/496933/
  Add uuid to migration table
  (This is relevant to placement and scheduling because it ought to
  make the "doubling" currently used for doing moves cleaner, by
  having two different allocations, one identified by the migration
  uuid. Aren't uuids awesome?)

* https://review.openstack.org/#/c/428481/
  Request zero root disk for boot-from-volume instances
  (Relevant for making sure that disk allocations are correct.)

* https://review.openstack.org/#/c/452006/
  Add functional test for two-cell scheduler behaviors

* https://review.openstack.org/#/c/496202/
  Add functional migrate force_complete test

* https://review.openstack.org/#/c/497399/
  WIP: Test server movings with custom resources

* https://review.openstack.org/#/c/497733/
  WIP spec Report CPU features to placement service by traits API

* https://review.openstack.org/#/c/496976/
  Centralize allocation deletion in ComputeManager

* https://review.openstack.org/#/c/496803/
  Add missing unit tests for FilterScheduler._get_all_host_states

* https://review.openstack.org/#/c/496847/
  Add missing tests for _remove_deleted_instances_allocations

* https://review.openstack.org/#/c/492247/
  Use ksa adapter for placement conf & requests

* https://review.openstack.org/#/c/492571/
  Make compute log less verbose with allocs autocorrection

* https://review.openstack.org/#/c/496936/
  De-duplicate two delete_allocation_for_* methods

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-sig/news

2017-08-24 Thread Chris Dent


Greetings OpenStack community,

Two main topics this week: Getting ready for the PTG and speculating about 
standards for representing singular resources in an HTTP response.

The API-SIG will have a room on Monday and Tuesday at the PTG. In addition to 
the guided review process [6] we've mentioned before, there's an etherpad [5] 
where an agenda of potential topics is being built. We'll likely be sharing 
some of the time with people interested in SDKs and other issues related to 
consuming APIs. If there are gaps in the schedule we will strenuously but politely argue about 
strong types in HTTP APIs and the subtle nature of microversions.

The discussion about singular resources relates to a placeholder bug [7] for a 
guideline that we need to write. There are at least two different styles in use 
within OpenStack already, neither of which aligns with some emerging standards. 
The bug links to some of the discussion. Please feel free to provide your 
opinion if you have one.

# Newly Published Guidelines

None this week.

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None this week

# Guidelines Currently Under Review [3]

* Explain, simply, why extensions are bad
  https://review.openstack.org/#/c/491611/

* A (shrinking) suite of several documents about doing version and service 
discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API WG, please address your concerns in 
an email to the OpenStack developer mailing list[1] with the tag "[api]" in the 
subject. In your email, you should include any relevant reviews, links, and comments to 
help guide the discussion of the specific challenge you are facing.

To learn more about the API WG mission and the work we do, see OpenStack API 
Working Group [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[5] https://etherpad.openstack.org/p/api-ptg-queens
[6] http://specs.openstack.org/openstack/api-wg/guidedreview.html
[7] https://bugs.launchpad.net/openstack-api-wg/+bug/1593310

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] reducing code complexity as a top-5 goal

2017-08-24 Thread Chris Dent


In the rather long history of OpenStack there have been plenty of
discussions about how to make sure that architectural and code
complexity and various forms of technical debt get the attention they
deserve. Lots of different approaches from different angles; some
more successful than others. One recent not-so-successful effort was
the architecture working group: it never got enough traction to make
any real change.

In TC office hours earlier this week

   
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-08-22.log.html#t2017-08-22T09:02:42

I introduced an idea for a different approach: attempt to reduce
some measure of complexity by prioritizing simple rules of thumb
about code complexity: "things like extracting methods, keeping
methods short, avoiding side effects, keeping modules short".
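
Rules of thumb like these can even be crudely mechanized. As a
sketch (standard library only; the threshold is arbitrary and the
line counting is approximate, stopping at the start of a function's
last statement):

    import ast
    import sys

    MAX_LINES = 40  # arbitrary "keep methods short" threshold

    def long_functions(path):
        with open(path) as source:
            tree = ast.parse(source.read(), filename=path)
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef):
                length = node.body[-1].lineno - node.lineno + 1
                if length > MAX_LINES:
                    yield node.name, node.lineno, length

    if __name__ == '__main__':
        for name, lineno, length in long_functions(sys.argv[1]):
            print('%s:%d %s is ~%d lines' % (sys.argv[1], lineno,
                                             name, length))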

To make this discussion more concrete I've proposed that we "Add
Reduce Development Complexity" to the top-5 help wanted list:

https://review.openstack.org/#/c/496404/

(The top-5 list

https://governance.openstack.org/tc/reference/top-5-help-wanted.html

is a way to socialize and popularize areas where additional
contribution and attention is needed.)

This idea could just as easily be a community goal, or something we
take on simply because we are motivated folk. I'm simply using the
review as a way to get the conversation started and bound it a bit
so we don't fall in a hole. It may be that this goes nowhere, but
it's worth a try.

Different people have very different ideas on what complexity in
code means, and the relative importance of maintainability and
readability. In the document under review I state some of the
reasons why these things might be important for more than just
personal preference reasons. It also includes some complexity
stats[1] for some of the larger and older projects.

If you have thoughts on this idea, please comment here or on the
review, especially if you have some ideas on how to make issues like
this have traction.

Thanks.

[1] Note that such metrics are no substitute for human evaluation
of code. They simply provide a straightforward way of finding some
sore thumbs. We should neither trust nor rely on these metrics to
assert much of anything.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] [api] cache headers in placement service

2017-08-23 Thread Chris Dent

On Mon, 21 Aug 2017, Chris Dent wrote:


Essentially so we can put last-modified headers on things, which in
RFC speak we SHOULD do. And if we do that then we SHOULD make sure
no caching happens.


For sake of completeness, I've gone ahead and proposed a spec for
this:

https://review.openstack.org/#/c/496853/

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] microversion-parse growing, needs guidance

2017-08-22 Thread Chris Dent


In https://review.openstack.org/#/c/496212 I've started the process
of moving some of the generally useful microversion-related
functions out of the placement service and into the microversion-parse
library. This will help make the library more useful (see the
README.rst in the review for examples).

I have, however, thus far left out some of the serious microversion meat
that is still in use in placement:

* decorator for handling multiple callables of the same name based
  on microversion
* utility method to raise a status code response based on a
  microversion match (things like: respond with a 404 if the
  microversion is less than (1, 1))
* middleware that extracts the microversion from headers (using
  microversion-parse), sticks it in the environment, and sends
  microversion headers on every response
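
The shape of that third piece, in raw WSGI terms, is roughly the
following. This is invented for illustration: it is neither the
placement code nor the microversion-parse API, and the header
parsing and the '1.0' default are simplifications.

    def microversion_middleware(app, service_type):
        def wrapped(environ, start_response):
            # 'openstack-api-version: placement 1.4' arrives in the
            # WSGI environ under this key.
            header = environ.get('HTTP_OPENSTACK_API_VERSION', '')
            version = header.partition(' ')[2] or '1.0'
            environ['%s.microversion' % service_type] = version

            def wrapped_start_response(status, headers, exc_info=None):
                # Send microversion headers on every response.
                headers.append(('openstack-api-version',
                                '%s %s' % (service_type, version)))
                headers.append(('vary', 'openstack-api-version'))
                return start_response(status, headers, exc_info)

            return app(environ, wrapped_start_response)
        return wrapped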

The reason these are not included is because all of them use webob,
primarily in two different ways:

* raising webob exceptions within a webob.dec.wsgify context,
  causing well formed error responses
* using a json formatter to structure error responses according to
  api guidelines

I'm hesitant to add these because they would introduce a dependency
on webob and enforce some behaviors in the projects that use them.
The options seem to be:

* Don't worry, nobody but nova and placement is using
  microversion-parse and nobody else has plans to do so.
* Don't worry about it, everyone loves webob, include it.
* I _think_ I can keep the use of webob contained in a way that makes
  sure it doesn't impact the rest of the WSGI stack (less sure
  about the error formatter).
* With some more effort I could do raw WSGI handling, leaving out
  the use of webob stuff.

I'm kind of inclined towards the last option, but I'm not sure it is
worth it if there aren't any interested parties.

To be clear, if I move the missing parts mentioned above into
microversion-parse, what it could then become is a generic
microversion handling library, including WSGI middleware, satisfying
most common microversion needs. If I don't move them, it's still
useful, but projects would still need to write their own middleware.
Given the diversity of stacks we have in use, that might be how it
would have to work anyway.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 34

2017-08-22 Thread Chris Dent


Once again, a slow week in the world of TC-related activity. Plenty
of people are on holiday or still engaged with release-related
activity.

So really the only thing to talk about is that everyone who will be
attending the [PTG](https://www.openstack.org/ptg) or who has concerns
they wish to see addressed there should be aware of the [canonical
list of PTG
etherpads](https://wiki.openstack.org/wiki/PTG/Queens/Etherpads).
Those are used by the various projects, working groups, SIGs, and
committees to build agendas and prepare. If you have concerns, add
them. The TC will be sharing a room on Monday and Tuesday with the
Stewardship Working Group. There may be topics of interest on [that
etherpad](https://etherpad.openstack.org/p/queens-PTG-TC-SWG).

There's also a handy set of [PTG quick
links](http://ptg.openstack.org/).

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] [api] cache headers in placement service

2017-08-21 Thread Chris Dent

On Mon, 21 Aug 2017, Jay Pipes wrote:

On 08/21/2017 04:59 AM, Chris Dent wrote:
We do have cache validation on the server side for resource classes. Any time 
a resource class is added or deleted, we call _RC_CACHE.clear(). Couldn't we 
add a single attribute to the ResourceClassCache that returns the last time 
the cache was reset?


That's a server side cache, which the client side (or proxy side)
has no visibility into. If we had etags, and were caching etag to
resource pairs when we sent out responses, we could then have a
conditional GET handler which checked etags, returning 304 on a
cache hit. When _RC_CACHE changes we could flush the etag cache.
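
For concreteness, such a handler might be shaped like this
(everything here is invented for illustration; response bodies are
assumed to be bytes and proper etag generation is elided):

    import hashlib

    _ETAG_CACHE = {}  # url -> etag, flushed when _RC_CACHE is cleared

    def conditional_get(environ, url, build_response):
        etag = _ETAG_CACHE.get(url)
        if etag and environ.get('HTTP_IF_NONE_MATCH') == etag:
            # Cache hit: the client's copy is still current.
            return 304, [('etag', etag)], b''
        status, headers, body = build_response()
        etag = '"%s"' % hashlib.md5(body).hexdigest()
        _ETAG_CACHE[url] = etag
        headers.append(('etag', etag))
        return status, headers, body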

But meh, you're right that the simpler solution is just to not do HTTP 
caching.


'xactly

But then again, if the default behaviour of HTTP is to never cache anything 
unless some cache-related headers are present [1] and you *don't* want 
proxies to cache any placement API information, why are we changing anything 
at all anyway? If we left it alone (and continue not sending Cache-Control 
headers for anything), the same exact result would be achieved, no?


Essentially so we can put last-modified headers on things, which in
RFC speak we SHOULD do. And if we do that then we SHOULD make sure
no caching happens.

Also it seems like last-modified headers are a nice-to-have for that
"unknown client" I spoke of in the first message.

But as you correctly identify the immediate practical value to nova
is pretty small, which is one of the reasons I was looking for the
lightest-weight implementation.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [placement] [api] cache headers in placement service

2017-08-21 Thread Chris Dent

On Sun, 20 Aug 2017, Jay Pipes wrote:

On 08/18/2017 01:23 PM, Chris Dent wrote:

So my change above adds 'last-modified' and 'cache-control:
no-cache' to GET of /resource_providers and
/resource_providers/{uuid} and proposes we do it for everything
else.

Should we?


No. :) Not everything. In particular, I think both the GET /resource_classes 
and GET /traits URIs are very cacheable and we shouldn't disallow proxies 
from caching that content if they want to.


Except that unless we have cache validation handling on the server
side, which we don't, "very cacheable" depends on us setting a
max-age and coming to agreement over what the right max-age is,
which seems unlikely. The simpler solution is to not cache.


If we do, some things to think about:

* The related OVO will need the updated_at and created_at
   fields exposed. This is pretty easy to do with the
    NovaTimestampObject mixin. This doesn't need to require an object
   version bump because we don't do RPC with them.


Technically, only the updated_at field would need to be exposed via the OVO 
objects. But OK, sure. I'd even advocate a patch to OVO that would bring in 
the NovaTimestampObject mixin. Just call it Timestamped or something...


The way the database tables are currently set up, when an entity is
first created, created_at is set, and updated_at is null. Therefore,
on new entities, failing over to created_at when updated_at is null
is necessary.

The work I've done thus far has tried to have the smallest impact on
the database tables and the queries used to get at them. They're
already complex enough.

The entity tables already have created_at and updated_at columns.
Exposing those columns on the objects is a matter of adding the
mixin.

I agree that making a change on OVO to have a Timestamped would be
useful.


* The current implementation of getting the last modified time for a
   collection of resources is intentionally naive and decoupled from
   other stuff. For very large result sets[3] this might be annoying,
   but since we are already doing plenty of traversing of long lists,
   it may not be a big deal. If it is we can incorporate getting the
   last modified time in the loop that serializes objects to JSON
   output.


I'm not sure what you're referring to above as "intentionally naive and 
decoupled from other stuff"? Adding the updated_at field of the underlying DB 
tables would be trivial -- maybe 10-15 lines total for DB/object layer and 
REST API as well. Am I missing something?


By "other stuff" I mean two things:

* the code in nova/objects/resource_provider.py
* the serialization (to JSON) code in placement/handlers/*.py

For those requests that return collections, we _could_ adapt the
queries used to retrieve those resources to find us the max
updated_at time during the query.

Or we could also do the same while traversing the list of objects to
create the JSON output.

I've chosen not to do the DB/object side changes because that is a
maze of many twisting passages, composed in fun ways. For those
situations where a list of native objects (e.g. /resource_providers)
is returned it is simply easier to extract the info later in the
process. For those situations where the returned data is composed on
the fly (e.g. /allocation_candidates, /usages) we want the
last-modified to be now() anyway, so it doesn't matter.

So the concern/question is around whether people deem it a problem
to traverse the list of objects a second time after already
traversing them a first time to create the JSON output. If so, we
can make the serialization loop have two purposes.
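
If so, the fold is straightforward, something like this (names
invented, serialize_one standing in for whatever the handler
already does per object):

    def serialize_collection(objects):
        # One pass: build the output list while tracking the most
        # recent timestamp, falling back to created_at when
        # updated_at is null, as described above.
        last_modified = None
        serialized = []
        for obj in objects:
            stamp = obj.updated_at or obj.created_at
            if last_modified is None or stamp > last_modified:
                last_modified = stamp
            serialized.append(serialize_one(obj))
        return serialized, last_modified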

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [placement] [api] cache headers in placement service

2017-08-18 Thread Chris Dent


(I put [api] in the subject tags because this might be of interest
to a wider audience that cares about APIs.)

Long ago and far away I made this bug:

https://bugs.launchpad.net/nova/+bug/1632852

"placement api responses should not be cacehable"

Today I've pushed up a WIP that demonstrates a way to address this:

https://review.openstack.org/#/c/495380/

Before I get too far along with it, I'd like us to decide whether we
think it is worth doing and consider some of the questions it will
raise.

I think it is worth doing not just because it would be correct but
because without it, we cannot be assured that proxies or user agents
will not cache resource providers and other entities, and that would
lead to bizarre results.

At the same time, any entity you put on the web, according to the
RFCs[1], should have a last-modified header if it "can be reasonably
and consistently determined".

So my change above adds 'last-modified' and 'cache-control:
no-cache' to GET of /resource_providers and
/resource_providers/{uuid} and proposes we do it for everything
else.
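
The effect is roughly this (an invented sketch, not the actual
patch; last_modified is assumed to be a UTC datetime drawn from
updated_at or created_at):

    def add_cache_headers(headers, last_modified):
        # RFC 7231 IMF-fixdate format for last-modified.
        headers.append(('last-modified',
                        last_modified.strftime(
                            '%a, %d %b %Y %H:%M:%S GMT')))
        headers.append(('cache-control', 'no-cache'))
        return headers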

Should we?

If we do, some things to think about:

* The related OVO will need the updated_at and created_at
  fields exposed. This is pretty easy to do with the
  NovaTimestampObject mixin. This doesn't need to require an object
  version bump because we don't do RPC with them.

* Adding a response header violates interop guidelines, so this
  would mean a microversion bump that impacts all GET requests. In
  systems that currently use placement (the report client in nova,
  mostly) no attention is being paid to either of the headers being
  added, so there would be no need for motion on that side.

* The current implementation of getting the last modified time for a
  collection of resources is intentionally naive and decoupled from
  other stuff. For very large result sets[3] this might be annoying,
  but since we are already doing plenty of traversing of long lists,
  it may not be a big deal. If it is we can incorporate getting the
  last modified time in the loop that serializes objects to JSON
  output.

I think we should. Generally speaking I think it is good form to
fulfil the expectations of HTTP. It helps make sure the HTTP APIs
work with the unknown client. Working with the unknown client is one
of the best reasons to have an HTTP API.

[1] https://tools.ietf.org/html/rfc7232#section-2.2

[2] An argument could be made that this change is fixing a protocol
level bug, but I reckon that argument wouldn't fly with most people.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] placement/resource providers update 32

2017-08-18 Thread Chris Dent


Placement Update 32

# Meta

To avoid noise that nobody seems to care about, I've taken the liberty
of not making reference here to reviews that have received no
attention for more than a month. As we all have understandably limited
attention, that seems for the best. I do keep track of new bugs and
new fixes, so this shouldn't lead to too much stuff getting lost in the
wilderness.

# What Matters Most

rc2 is in progress. There continue to be bugs found here and there,
mostly with handling of allocations in corner cases. A new one is with
forced live migration, found just this morning. A bug is coming, but
in the meantime there's code that demonstrates it:

https://review.openstack.org/#/c/495170/

So at this point important things to be doing are the same as last
week:

* watching for new bugs tagged placement, scheduler or resource-tracker
* inspecting the logs of tempest runs (even those that have passed)
   for unexpected log messages related to managing inventory or
   allocations
* using the code, especially to do things like resize, migrate,
   evacuate, etc
* reviewing code

Plus preparing for the PTG. See the link to the etherpad in help
wanted below.

# What's Changed

This week has been finding and fixing bugs. Bugs have been found and
fixed.

The api-ref is published!
https://developer.openstack.org/api-ref/placement/

# Help Wanted

See the beginning of this message.

Also, there's a swathe of placement related stuff on the PTG planning
etherpad. Please add to that or make some adjustments if you think
something is missing or incomplete:

https://etherpad.openstack.org/p/nova-ptg-queens

An important aspect of this is determining what kind of dependency
tree is involved with the work.

# Main Themes

## Custom Resource Classes for Ironic

It took a lot of fiddling but ironic's handling of custom resource
classes is now merged, with some caveats that had to be mentioned in
the reno.

   
https://docs.openstack.org/releasenotes/nova/unreleased.html#deprecation-notes

## Traits

Work continues apace on getting filtering by traits working:

 https://review.openstack.org/#/c/489206/

This has some overlap with shared provider handling (below).

## Shared Resource Providers

There's some support for shared resource providers on the placement
side of the scheduling equation, but the resource tracker is not yet
ready to support it. There is some work in progress, starting with
functional tests:

https://review.openstack.org/#/c/490733/

## Nested Resource Providers

This will start back up after we clean off the windscreen. The stack
begins at https://review.openstack.org/#/c/470575/5

## Docs

Henceforth if you add a new URL+method combination to placement (by
adding to ROUTE_DECLARATIONS) the docs job will fail unless you add
documentation. You _must_ document the new functionality to continue.

# Other Code

Some of the stuff below is bug fixes, created in pike, that never got
much attention, so potentially will not make it into pike now that
we're effectively in queens. It would be great if we could clear these
off the board before we get started on other major stuff, so they
aren't lingering, unloved and lonely in the corner.

* Reset client when placement endpoint not found
  https://review.openstack.org/#/c/493536/
  (This may be a backport candidate)

* Update allocation spec for alternates
  https://review.openstack.org/#/c/471927/

* functional tests for live migrate
  https://review.openstack.org/#/c/493865/

* Allow shuffling of best weighted hosts
  https://review.openstack.org/#/c/494136/

* tests for resource allocation during soft delete
  https://review.openstack.org/#/c/495159/

* WIP: resize with custom resource
  https://review.openstack.org/#/c/490461/

* replace chance with filter scheduler in func tests
  https://review.openstack.org/#/c/491529/

* https://review.openstack.org/#/c/427200/
  Add a status check for legacy filters in nova-status.

* https://review.openstack.org/#/c/469048/
  Provide more information about installing placement

* https://review.openstack.org/#/c/468928/
  Disambiguate resource provider conflict message

* https://review.openstack.org/#/c/485209/
  gabbi tests for shared custom resource class

* https://review.openstack.org/#/c/489633/
  Update RT aggregates less often

* https://review.openstack.org/#/c/492247/
  Use ksa adapter for placement in the new way

* https://review.openstack.org/#/c/483506/
  Call _update fewer times in the resource tracker
  I'm relatively certain we can't do this one because of the way the
  code is structured.

* https://review.openstack.org/#/c/468797/
  Spec for requesting traits in flavors

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [all] purplerbot briefly broken

2017-08-17 Thread Chris Dent


If you don't know what purplerbot is, you can stop reading if you
don't care. If you do see: https://anticdent.org/purple-irc-bot.html

Because I was lazy, I never registered the purplerbot nick with
freenode. Sometime in the recent past freenode made it so that you
had to be a registered nick in order to be able to send private
messages to at least some and maybe all users.

This meant that commands like `p!spy openstack-nova` (that and
additional commands explained in the link above) would result in
no output, no error message, nothing. The bot's server side error
log clued me in and it is now fixed.

If you've now read this and the blog posting and think "hey, I
want purplerbot in my channel", please let me know.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 33

2017-08-15 Thread Chris Dent


(If you prefer HTML: https://anticdent.org/tc-report-33.html )

Some are still in peak release cycle time and others have left for
their summer holidays, so once again: a bit of a slow week. No major
spectacles or controversies, so the best I can do is give some
summaries of the conversations in the `#openstack-tc` IRC channel.

# Continuous Deployment Continued

The Wednesday morning (at least in UTC) [office
hour](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-08-09.log.html#t2017-08-09T01:00:01)
continued the discussion about whether or not Continuous Deployment is
an integral part of the OpenStack development experience. There was
some disagreement on who had made what kind of commitment and since
when, but there's agreement in principle that:


We need a "Declare plainly the current state of CD in OpenStack"
[3RAY](http://p.anticdent.org/3RAY)


# Maintenance Status

The Thursday afternoon office hour started [a few hours
early](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-08-10.log.html#t2017-08-10T12:43:59)
with discussion about those projects that do not have PTLs. This
evolved to some discussion about the implications of single PTLs
volunteering leading to projects not needing elections. I suggested
that perhaps the job was too hard, or too difficult to manage without
sustained commitment from an employer and that such commitment was
harder to come by these days. Anyone care to dissuade me of this
opinion?

# Board Meeting Agendas

As we rolled around to [actual office hour
time](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-08-10.log.html#t2017-08-10T15:00:31)
the discussion returned to building agendas for the upcoming meetings
with the board. This was mentioned [last
week](https://anticdent.org/tc-report-32.html). My idea for a generic
retrospective was considered too open-ended and too much of an
invitation for unproductive complaining, so we discussed some ideas on
how to narrow things down.


less "it's hard" and more "it's hard because X" 
[2yZa](http://p.anticdent.org/2yZa)


This migrated to the common topic of "how do we get corporate
engagement to be strong enough and focused on the right things" which
naturally led to (because it was rc1 week) "ow, we could do with some
more CI nodes".

And finally, on today's [office
hour](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-08-15.log.html#t2017-08-15T09:04:39)
we came back to the agenda question and after discussion came to two
relatively concrete topics:

* What else should go on the top-5 list? Is what's there already
  correct?
* What is the status of the strategic goals? Are there areas that we
  are missing or need revision?

These were added to the [meeting 
agenda](https://wiki.openstack.org/wiki/Governance/Foundation/10Sep2017BoardMeeting#Leadership_Meeting_Agenda).

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api] [tc] Nominatimg Eric Fried for service-types-authority core

2017-08-14 Thread Chris Dent

On Mon, 14 Aug 2017, Brant Knudson wrote:


I don't even have time to keep up with emails to -dev lately, so go ahead
and remove me as a core reviewer here.


Done. Thanks.


I likely wouldn't work on this going
forward since I'm not even sure why we have a service catalog. Shouldn't we
be using something used throughout the industry (consul, for example)?


What we name the keys in a thing which performs the function of a
service catalog (which is the primary role of service-types-authority)
is orthogonal to the technology we use to host those keys and values.

I agree that if we were starting fresh, it would make sense to use a
common or "standard" tool. But we've got a giant precedent now.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] self links include /placement?

2017-08-12 Thread Chris Dent

On Fri, 11 Aug 2017, Eric Fried wrote:


I finally got around to fiddling with the placement API today, and
noticed something... disturbing.  To me, anyway.

When I GET a URI, such as '/resource_classes', the response includes e.g.


I assume you're using ksa/requests somewhere in your stack and the
session is "mounted" on the service endpoint provided by the service
catalog?

If so, that means the session is mounted on /placement and is
prepending '/placement' to the '/resource_classes' URL you are
providing.

If not, I'd need more info and pretty much the rest of this message
is not related to your problem :)


 {u'links': [{u'href': u'/placement/resource_classes/MEMORY_MB',
u'rel': u'self'}],
  u'name': u'MEMORY_MB'},


Imagine this was HTML instead of JSON and you were using a browser,
not ksa. That's an absolute URL, the browser knows that when it sees
an absolute URL it makes the request back to the same host the
current page came from. That's standard href behavior.

It would be incorrect to have a URL of /resource_classes/MEMORY_MB
there as that would mean (using standard semantics)
host://foo.bar/resource_classes/MEMORY_MB. It could be correct to
make the href be host://foo.bar/placement/resource_classes/MEMORY_MB
but that wasn't done in the placement service so we could avoid
making any assumptions anywhere in the stack about the host or
protocol in the thing that is hosting the service (and not require
any of the middlewares that attempt to adjust the WSGI environment
based on headers passed along from a proxy). Also it makes for
very noisy response bodies.
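
Standard resolution is easy to demonstrate with the stdlib (python
3 import path shown):

    >>> from urllib.parse import urljoin
    >>> urljoin('https://foo.bar/placement/',
    ...         '/resource_classes/MEMORY_MB')
    'https://foo.bar/resource_classes/MEMORY_MB'

That is, an absolute-path href resolves against the host, dropping
the /placement "mount".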


If I try to GET that 'self' link, it fails (404).  I have to strip the
'/placement' prefix to make it work.


Assuming the ksa thing above is what's happening, that's because the
URL that you are sending is
/placement/placement/resource_classes/MEMORY_MB


That doesn't seem right.  Can anyone comment?


I've always found requests' mounting behavior very weird. So to me,
that you are getting 404s when trying to traverse links is expected:
you're sending requests to bad URLs. The concept of a "mount" with
an http request is pretty antithetical to link traversing,
hypertext, etc. On the other hand, none of the so-called REST APIs
in OpenStack (including placement) really expect, demand or even
work with HATEOAS, so ... ?

I'm not sure if it is something we need to account for when ksa
constructs URLs or not. It's a problem that I've also
encountered with some of the tricks that gabbi does [1]. The
proposed solution there is to sort of merge urls where a prefix is
known to be present (but see the bug for a corner case on why that's
not great).

[1] https://github.com/cdent/gabbi/issues/165


--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] placement/resource providers update 31

2017-08-11 Thread Chris Dent

* https://review.openstack.org/#/c/468797/
  Spec for requesting traits in flavors

* https://review.openstack.org/#/c/483460/
  Retry resource provider registration when session's service
  catalog does not have placement (yet)

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-wg/news

2017-08-10 Thread Chris Dent


308 Permanent Redirect
Location: /api-sig/news

Greetings OpenStack community,

As with last week, today's meeting started off with continued discussion about 
the Guided Review process that we are trying to create ahead of the Denver PTG. 
elmiko continues to shape the proposal in a review [0]. For the most part we're 
all pretty happy with it.

We then moved on to discussing the API-WG becoming API-SIG. Continued email 
discussion [4] has left us feeling like it's a good idea, so we voted to go for 
it. There were some concerns about how this might impact repository and bug 
tracking names, but we felt that we could leave those questions to be addressed 
when they become problems. Over the next few weeks you may see some changes in 
how things are named, but existing behaviors (making guidelines, sending this 
newsletter, having a weekly IRC meeting) will remain.

On the topic of guidance, there's one new work in progress [5] discouraging the 
use of extensions which add URLs to APIs or change the shape of 
representations. As the comments there indicate, this is a tricky topic when 
functionality is impacted either by policy or by the presence of different 
drivers. We haven't fully decided how we're going to explain these issues, but 
we do want to make sure that it is well known that if you are trying to create 
an interoperable API, optional URIs are a bad idea.

# Newly Published Guidelines

None this week.

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None this week

# Guidelines Currently Under Review [3]

* Explain, simply, why extensions are bad
  https://review.openstack.org/#/c/491611/

* A (shrinking) suite of several documents about doing version and service 
discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API WG, please address your concerns in 
an email to the OpenStack developer mailing list[1] with the tag "[api]" in the 
subject. In your email, you should include any relevant reviews, links, and comments to 
help guide the discussion of the specific challenge you are facing.

To learn more about the API WG mission and the work we do, see OpenStack API 
Working Group [2].

Thanks for reading and see you next week!

# References

[0] https://review.openstack.org/#/c/487847/
[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] http://lists.openstack.org/pipermail/openstack-sigs/2017-August/35.html
[5] https://review.openstack.org/#/c/491611/


Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_wg/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ara] Best practices on what to return from REST API ?

2017-08-09 Thread Chris Dent

On Wed, 9 Aug 2017, David Moreau Simard wrote:


So I'm making what I think is good progress towards the python and
REST API implementation for ARA but now I have a question.
I've made the following API "GET" endpoints:


If people remember to actually read this and respond, you'll likely get
as many different responses to this as there are options.

A lot of it is preference, but you've mentioned that you are trying
to optimize for scale and concurrency so I think there are some things
you can do to maintain a reasonable grammar, hew to standards (both
real and the dreaded "best practice"), keep things simple, and be
fast (in aggregate). You've overcome the most important hurdle
already by enumerating your resources (as nouns):


- files (saved Ansible files)
- hosts (hosts involved in playbooks and their facts, if available)
- playbooks (data about the actual playbook and ansible parameters)
- plays (data about plays)
- results (actual results like 'ok', 'changed' as well as the data
from the Ansible module output)
- tasks (data about the task, like name, action and file path)



From there, rather than /{collection} for list and
/{collection}?id={id} for single entity, I'd do:

/{collection} # list
/{collection}?filter=to_smaller_list # e.g. playbook_id
/{collection}/id  # single entity

In the collection response only include the bare minimum of data, in
the single entity include:

- whatever is thought of as the entity itself
- anything that you think of as a child, represent as links
  elsewhere
- if it makes sense to have /{collection}/id/{child} and {child} is
  actually some other /{collection}/id, consider making the child
  URI a redirect (that is, /{collection}/id/{child} redirects to
  /{childtype}/{child id})
- if a child is actually children then you'll want to adjust the
  grammar somewhat: /{collection}/id/{child collection}/id

This provides you with good urls for things, straightforward ways of
getting at data, and the capacity to spawn off concurrent requests
in response to an initial query.

This is essentially the same as your option 2, but with more
structure to the URLs.
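
To make that grammar concrete, a toy sketch (Flask chosen
arbitrarily; all names and routes invented, not ARA's actual code):

    from flask import Flask, jsonify, redirect

    app = Flask(__name__)

    @app.route('/playbooks')
    def list_playbooks():
        # Bare minimum per item; ?filter style params would be read
        # from request.args to shrink the list.
        return jsonify({'playbooks': [{'id': 1}]})

    @app.route('/playbooks/<int:ident>')
    def get_playbook(ident):
        # Children represented as links elsewhere.
        return jsonify({'id': ident,
                        'links': {'tasks': '/playbooks/%d/tasks' % ident}})

    @app.route('/playbooks/<int:ident>/hosts/<int:host_id>')
    def playbook_host(ident, host_id):
        # The child is really another top-level resource: redirect.
        return redirect('/hosts/%d' % host_id, code=301)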


--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 32

2017-08-08 Thread Chris Dent


(Rendered Version: https://anticdent.org/tc-report-32.html )

As we are still in the heady peak of the release cycle, there's not a
great deal of Technical Committee activity to report, but I've dredged
through some IRC logs to find a couple of bits from the slice of
governance activity that might actually be interesting to a broader
audience.

# Continuous Deployment

In a thread on [Backwards incompatible changes based on
config](http://lists.openstack.org/pipermail/openstack-dev/2017-August/120678.html),
Joshua Harlow chose to ask one of those questions it's worth asking
every now and again: [Who are these continuous
deployers?](http://lists.openstack.org/pipermail/openstack-dev/2017-August/120705.html).
If they still exist, where are they, how do we get them to step up and
participate? If nobody is doing it any more, can we stop and make
between-release adjustments more easily?

The mailing list thread petered out, as they are wont to do, but the
topic carried over to IRC. In the [#openstack-tc
channel](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-08-07.log.html#t2017-08-07T12:47:11)
the conversation went long and wide:

* Even if there aren't people who do real deploys from trunk, the
  constraints imposed by behaving as if they do may be beneficial.
* If we stop the behavior, it is very hard to go back.
* Some people say we historically supported CD. Other people say that
  sure, some people say that, but only _some_ people.
* Or maybe the divisions are between projects?
* APIs and database migrations are in a different class from
  everything else and they themselves are different from one another.
* Whatever the behaviors have been, they've never been official (for
  all projects?).
* Whatever the behavior should be, it needs to be official (for all
  projects?).

For each of these strongly asserted opinions, there was at least one
person who disagreed, at least in part. Fun times.

As far as I could tell, there was no resolution or forward plan. The status quo
of benign (?) ambiguity will maintain for the time being. When it hurts,
something will happen.

# Forthcoming Meetings With the Board

A [Foundation list
posting](http://lists.openstack.org/pipermail/foundation/2017-August/002512.html)
asked for agenda items for the combined Board and Leadership (TC and
UC) [meeting happening before the
PTG](https://wiki.openstack.org/wiki/Governance/Foundation/10Sep2017BoardMeeting).
Details about a similar meeting [in
Sydney](https://wiki.openstack.org/wiki/Governance/Foundation/5Nov2017BoardMeeting)
are also starting to cohere.

Yet as far as I can tell, very little has yet been gathered in terms
of substantive agenda items. Given how often different members of the
community state that we need to be more activist with the board, and
the board with member companies -- especially with regard to making
sure corporate engagement is not just strong but focused in the right
place -- I'm surprised there's not more. [I fished around a bit in
IRC](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-08-08.log.html#t2017-08-08T11:14:06)
but I think there's more to be done here.

As a community we've become over-adapted to the idea that people who
wish to discuss problems should come prepared with solutions, or if
not that, at least a detailed description of the failure scenario.
This is incredibly limiting. Community interactions aren't software.
We can't always create a minimal test case and iterate towards a
solution. Sometimes what we need is to sit around as peers and
reflect. In person is the ideal time for this; we don't get the
opportunity all that much.

What I'd like to see with the board, the TC, the UC, and anyone else
who wants to participate is a calm retrospective of the last three,
six or twelve months. So we can see where we need to go from here. We
can share some accolades and, if necessary, air some grievances.
Someone can say "there's a rough edge here" so someone else with a lot
of spare sandpaper they thought was useless can say "I can help with
that". We might even sing Kum ba yah.

If you're not going to be at the PTG, or you don't feel comfortable
raising an issue, feel free to contact me with anything you think is
important and relevant to the ongoing health of OpenStack and I will
try to bring it up at one of the meetings.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] [tc] Nominatimg Eric Fried for service-types-authority core

2017-08-08 Thread Chris Dent


I don't believe we have an established procedure for core
nominations in the service-types-authority group, so I'll just go
ahead and take the initiative. I think we should make Eric Fried
(efried on IRC) a core in that group. He's been doing a great deal
of work related to service types and service catalog, is around all the
time, and would be a worthy addition.

If you don't like this idea but would like to say so privately,
please contact me. Otherwise I'll give a few days and make it so.

The [tc] tag is on here because the repo is considered "owned by the
technical committee".

We may also wish to consider removing Anne Gentle and Brant Knudson
who are less available these days.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][api] Backwards incompatible changes based on config

2017-08-07 Thread Chris Dent

On Fri, 4 Aug 2017, Joshua Harlow wrote:

Any idea of who are these deployers? I think I knew once who they might have 
been but I'm not really sure anymore. Are they still doing this (and can 
afford doing it)? Why don't we hear more about them? I'd expect that 
deployers (and there associated developer army) that are trying to do this 
would be the *most* active in IRC and in the mailing list yet I don't really 
see any such activity (which either means we never break them, which seems 
highly unlikely, or that they don't communicate through the normal channels, 
ie they go through some vendor, or that they just flat out don't exist 
anymore).


I'd personally really like to know how they do it (especially if they do not 
have an associated developer army)... Because they have always been a pink 
elephant that I've heard exists 'somewhere' and they manage to make this all 
work 'somehow'.


I'd like to know this too. As Sean M says, we frequently make
compromises to support CD. In a sort of platonic sense, it is a
great goal, but it would be a lot easier to achieve if it had
participation from the people doing it.

Many deployments are several months behind. If _all_ (or even many)
deployments are that far behind, maybe we could consider saving ourselves
some pain?

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][api] Backwards incompatible changes based on config

2017-08-07 Thread Chris Dent

On Fri, 4 Aug 2017, Lance Bragstad wrote:

On 08/04/2017 03:45 PM, Kristi Nikolla wrote:

Therefore the call which now returns a 403 in master, returned a 2xx in
Ocata. So we would be fixing something which is broken on master rather
than changing a ‘contract’.


Good call - with that in mind I would be inclined to say we should fix
the issue in Pike that way we keep the 204 -> 204 behavior the same
across releases. But I'll defer to someone from the API WG just to make
sure.


I think that's fair. Given that you're not doing microversions and
you aren't inclined to commit to CD, it's a pragmatic solution to
mis-functionality that was introduced between code releases.

It also sounds like an edge case where it's very unlikely that
there's extant client code that is relying on that 403 to make
decisions on next steps.

The interop guideline is intentionally very strict and detailed, to
make it clear how much you need to think about to really do it well,
but in many cases should be considered as a tool for evaluating the
extent of the damage a change might cause, not the law.

Especially if you haven't got microversions available.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][docs] Concerns with docs migration

2017-08-02 Thread Chris Dent

On Wed, 2 Aug 2017, Stephen Finucane wrote:

Unfortunately I can't think of any cleverer way to identify these broken links
than manual inspection. I'll do this again for all the patches I've authored
and try to do them for any others I encounter.


if there's access available to the web server logs:

Get access to the server logs, grep for 404 response codes, sort by
url, order by count. Anything that has a high number should be
compared with the .htaccess file that Sean created.
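
Something along these lines, assuming a standard combined log
format (status in field 9, request path in field 7):

    awk '$9 == 404 {print $7}' access.log | sort | uniq -c | sort -rn | head -20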


This is a tricky one. Based on previous discussions with dhellmann, the plan
seems to be to replace any references to 'nova xxx' or 'openstack xxx' commands
(i.e. commands using python-novaclient or python-openstackclient) in favour of
'curl'-based requests. The idea here is that the Python clients are not the
only clients available, and we shouldn't be "mandating" their use by
referencing them in the docs. I get this, though I don't fully agree with it
(who really uses curl?). In any case though, this would surely be a big rewrite
and, per your concerns in #2 above,  doesn't sound like something we should be
doing in Pike. I guess a basic import is the way to go for now. I'll take care
of this.


As much as I think using the raw HTTP is an important learning
tool, using curl in the docs will make the docs very hard to
comprehend.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 31

2017-08-01 Thread Chris Dent


(Rendered version: https://anticdent.org/tc-report-31.html )

Writing these reports is somewhat difficult when there hasn't been a
meeting and we're in a crunch time in the release cycle. The things
that have happened that are related to TC activity are disjointed, a
bit hard to track, and hard to summarize. There have, however, been a
few blips in IRC traffic in `#openstack-tc` and in the [governance
repo](https://review.openstack.org/#/q/project:openstack/governance+status:open),
so I'll attempt to apply my entirely subjective eye to those things.

# Tags, Projects, Deliverables, Oh My

The review to add the [api interop assert
tag](https://review.openstack.org/#/c/482759/) to ironic and nova has
foundered a bit on the diversity of opinions on not just what that
specific tag means, but what tags in general mean, and who uses them.
There's plenty of conversation on that review, as well as most
recently [some
chatter](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-08-01.log.html#t2017-08-01T09:30:55)
in this morning's office hour.

One outcome of the discussion (in its many places) is that the primary
use of the tags is to engineer a technical solution to a marketing
problem, in the form of the [Project
Navigator](https://www.openstack.org/software/project-navigator/). As
such it tends to get messy at the interface between the
boundaries and identifiers that matter to the people developing the
projects and those that matter to the people
promoting or using the project navigator. It can also mean that anyone
who is aware of the marketing value of the tags may experience
conflicts when considering any of the tags which may be voluntarily
asserted.

The current case in point is that if asserting the api
interoperability tag is presented as a sign of maturity in the
navigator, any project that considers itself mature would like to not
be penalized in the event that they are mature in a way that is not
in alignment with the tag.

# The Top 5 List, Barbican, Designate, Castellan

Another topic from [this morning's office
hour](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-08-01.log.html#t2017-08-01T09:01:39)
was discussing whether it might make sense to put contribution to the
barbican and designate projects on the [Top 5
List](https://governance.openstack.org/tc/reference/top-5-help-wanted.html)
and/or make them [base
services](https://governance.openstack.org/tc/reference/base-services.html)
(there's something of a chicken and egg problem there).

[Later in the
day](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-08-01.log.html#t2017-08-01T13:23:01)
the topic was picked back up, with dims suggesting that castellan
should be the base requirement, not barbican. Castellan, however, does
not yet satisfy all the needs designate would have, if doing DNSSEC.

Further discussion required, as far as I could tell.

# Increasing Engagement From And With People In China

[Last Thursday's office
hour](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2017-07-27.log.html#t2017-07-27T15:00:39)
started with a brief report about China where ttx and many others had
been at [OpenStack Days China](http://openstackdaychina.org/). This
led to a discussion about the many difficulties present in synchronous
communication technologies including all forms of chat, audio and
video. Each presents difficulties in terms of comprehension,
limited access, or latency.


yeh, we need to get people less al[l]ergic to doing real conversations
in email :) [sdague](http://p.anticdent.org/2Hac)


(Words to live by.)

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][qa][placement] more gabbi with tempest

2017-07-31 Thread Chris Dent


Over the weekend I spent some time working on my experiments using
gabbi[1] with tempest. I had previously got gabbi-tempest[2]
working, based on some work by sileht, but it was only addressing a
single service at a time. For the sake of placement-related tests,
it's useful to be able to talk to the compute api and the placement
api at the same time. Now it does.

I've just now used it to confirm a placement bug [3] with this gabbi
file:

https://github.com/cdent/gabbi-tempest/blob/master/gabbi_tempest/tests/scenario/gabbits/multi/base.yaml

It's still rough around the edges, but it has proven useful and
should be doubly so in the face of multiple nodes. Especially useful
to me is how visible it makes the various APIs and the interactions
thereof. Writing the tests without a client is _very_ informative.
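
For anyone who hasn't seen one, a gabbit is just YAML along these
lines (a trivial invented example, not the file linked above):

    tests:
    - name: list resource providers
      GET: /resource_providers
      status: 200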

If you'd like to help make it better, find me or just go ahead and
make a pull request. At some point it may be interesting to explore
the option of "put a gabbit in dir X" and tempest will run it for
you.

[1] https://gabbi.readthedocs.io/en/latest/
[2] https://github.com/cdent/gabbi-tempest
[3] https://bugs.launchpad.net/nova/+bug/1707252

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] placement/resource providers update 30

2017-07-28 Thread Chris Dent

On Fri, 28 Jul 2017, Chris Dent wrote:


I thought there were going to be some changes to the resource tracker
made overnight, related to ensuring allocations on the source and
destination of a move are managed correctly, but I don't see them yet.
If they get pushed up today can someone followup here with links,
please? Or if it was decided that nothing was required can someone
explain why?


It's in progress and will be attached to this bug:

https://bugs.launchpad.net/nova/+bug/1707071

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] placement/resource providers update 30

2017-07-28 Thread Chris Dent
* https://review.openstack.org/#/c/472378/
  A proposed fix to using multiple config locations with the
  placement wsgi app. There's some active discussion on whether the
  solution in mind is the right solution, or even whether the bug is
  a bug (it is!).

* https://review.openstack.org/#/c/470578/
  Add functional test for local delete allocations

* https://review.openstack.org/#/c/468797/
   Spec for requesting traits in flavors

* https://review.openstack.org/#/c/480379/
 ensure shared RP maps with correct root RP
 (Some discussion on this one about what the goal is and whether the
 approach is the right one.)

* https://review.openstack.org/#/c/483460/
Retry resource provider registration when session's service
catalog does not have placement (yet)

* https://review.openstack.org/#/c/488363/
  quash unicode warning in shared providers

* https://review.openstack.org/#/c/484828/
  accept any scheduler driver endpoint

# End

Thanks for reading this far. I hope you reviewed some of the links
above. If not, please do. Otherwise no prize for you.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 30

2017-07-25 Thread Chris Dent


(Also at: https://anticdent.org/tc-report-30.html )

Welcome to this week's TC Report. I think this report will be somewhat
short. There's no meeting scheduled for today, a fair few TC members
are in China for an event while others are embroiled in getting things
done before milestone 3.

The main burst of TC-related activity was a "non-boring" office hour
[last Thursday](http://p.anticdent.org/wAq). A conversation that began
as a sharing of opinions on the new SIGs framework quickly expanded to
cover many questions including how do we deal with the fact there is
more work than there are people to do it, how do we ensure people don't
burn out and that participating in OpenStack is actually a positive
experience, and how do we increase the depth and breadth of
contribution (and is such contribution even a good thing) from
corporations who make money because of OpenStack.

This all eventually led to the suggestion that it might be a good idea
to do some interview-driven research into the difficulties faced by
contributors (of all sorts, not only developers) who want to be and
feel successful in the community but sometimes do not. This could
provide some useful data to be used by the TC, the UC, the board,
projects, and employers to make some adjustments.

For many people the difficulties or challenges are either so obvious
or have been around for so long that the idea of doing interviews probably
feels wasteful (in the sense of "OMG, we already know, just _do_
something!"). The problem is that without a corpus of analysed data
there is no third party authoritative thing that can be pointed at in
discussion and we'll simply become embroiled in the same old
arguments.
--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] placement/resource providers update 29

2017-07-24 Thread Chris Dent

On Sat, 22 Jul 2017, Matt Riedemann wrote:


On 7/21/2017 6:54 AM, Chris Dent wrote:

## Custom Resource Classes for Ironic

A spec for custom resource classes is being updated to reflect the
need to update the flavor and allocations of a previously allocated
ironic node that now has a custom resource class (such as
CUSTOM_SILVER_IRON):

https://review.openstack.org/#/c/481748/

The implementation of those changes has started at:

https://review.openstack.org/#/c/484949/

That gets the flavor adjustment. Do we also need to do allocation
cleanups or was that already done at some point in the past?


That's done:

https://review.openstack.org/#/c/484935/


It's good that that's done, but that's not quite what I meant. That
will override stuff from elsewhere in the flavor with what's in extra
specs to create a reasonable allocation record.

I meant the case where an existing ironic instance was updated on
the ironic side to be CUSTOM_IRON_GOLD (or whatever) and needs to
have its previous allocations of VCPU: 2, DISK_GB: 1024, MEMORY_MB:
1024 replaced with CUSTOM_IRON_GOLD: 1?

a) Is that even a thing?
b) Do we need to do it with some new code or is it already
   happening by way of the periodic job?

I guess the code that Ed's working on at

https://review.openstack.org/#/c/484949

needs to zero out VCPU etc. in the extra specs so that the eventual
allocation record created in 484935 is correct?
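
For illustration, that implies flavor extra specs along these lines
(a sketch using the resources override convention from the specs
linked above; the class name and values are hypothetical):

```yaml
# Hypothetical extra specs for a flavor matched to an ironic node
# reporting CUSTOM_IRON_GOLD. The standard classes are zeroed out so
# the eventual allocation is only the custom class.
extra_specs:
    'resources:CUSTOM_IRON_GOLD': '1'
    'resources:VCPU': '0'
    'resources:MEMORY_MB': '0'
    'resources:DISK_GB': '0'
```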

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] placement/resource providers update 29

2017-07-21 Thread Chris Dent
* https://review.openstack.org/#/c/468797/
  Spec for requesting traits in flavors

* https://review.openstack.org/#/c/480379/
ensure shared RP maps with correct root RP
(Some discussion on this one about what the goal is and whether the
approach is the right one.)

* https://review.openstack.org/#/c/483506/
   Call _update fewer times in the resource tracker

* https://review.openstack.org/#/c/483460/
   Retry resource provider registration when session's service
   catalog does not have placement

* https://review.openstack.org/#/c/452006/
   A functional test to confirm that migration between two different
   cells is not allowed. Included here because it uses the
   PlacementFixture and may be experiencing the bug that
   https://review.openstack.org/#/c/483564/ is trying to fix.


--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-wg/news

2017-07-20 Thread Chris Dent


Greetings OpenStack community,

Most of the time in today's meeting was dedicated to discussing ideas for the 
Guided Review Process [0] (that link will expire in favor of a gerrit review 
soon) we are considering for the PTG. The idea is that projects which are 
enmeshed in debate over how to correctly follow the guidelines in their APIs 
can come to a process of in-person review at the PTG. All involved can engage 
in the discussion and learn. The exact mechanics are still being worked out. 
The wiki page at [0] is a starting point which will be reviewed and revised on 
gerrit. Our discussion today centered around trying to make sure we can 
actually productively engage with one another.

There's been little activity with regard to guidelines or bugs recently. This 
is mostly because everyone is very busy with other responsibilities. We hope 
things will smooth out soon.

# Newly Published Guidelines

None this week.

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None this week

# Guidelines Currently Under Review [3]

* A (shrinking) suite of several documents about doing version and service 
discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API WG, please address your concerns in 
an email to the OpenStack developer mailing list[1] with the tag "[api]" in the 
subject. In your email, you should include any relevant reviews, links, and comments to 
help guide the discussion of the specific challenge you are facing.

To learn more about the API WG mission and the work we do, see OpenStack API 
Working Group [2].

Thanks for reading and see you next week!

# References

[0] https://wiki.openstack.org/wiki/API_Working_Group/Guided_Review_Process
[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z


Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_wg/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] scheduling with custom resouce classes

2017-07-19 Thread Chris Dent

On Wed, 19 Jul 2017, Balazs Gibizer wrote:

I added more info to the bug report and the review as it seems the test is 
fluctuating.


(Reflecting some conversation gibi and I have had in IRC)

I've made a gabbi-based replication of the desired functionality. It
also flaps, with a >50% failure rate:
https://review.openstack.org/#/c/485209/

Sorry, copy pasted the wrong link; the correct link is
https://bugs.launchpad.net/nova/+bug/1705231


This has been updated (by gibi) to show that the generated SQL is
different between the failure and success cases.


--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 29

2017-07-19 Thread Chris Dent


(Blog version at https://anticdent.org/tc-report-29.html )

This TC Report is a bit late. Yesterday I was attacked by an oyster.

This week had no meeting, so what follows is a summary of various
other TC related (sometimes only vaguely related) activity.

# Vision

The [TC
Vision](https://governance.openstack.org/tc/resolutions/20170404-vision-2019.html)
has been merged, presented in a way that makes sure that it's easy and
desirable to create a new vision at a later date to respond to
changing circumstances. There were concerns during the review process
that the document as is does not take into account recent changes in
the corporate and contributor community surrounding OpenStack. The
consensus conclusion, however, was that the goals stated in the vision
remain relevant and productive work has already begun.

# Hosted Projects

The conversation about [hosted
projects](http://lists.openstack.org/pipermail/openstack-dev/2017-July/119205.html)
continues, mostly in regard to that question with great stamina: Is
OpenStack Infrastructure as a Service or something more encompassing
of all of cloud? In either case what does it take for something to be
a complete IAAS or what is "cloud"? There was a useful posting from
Zane pointing out that the [varied
assumptions](http://lists.openstack.org/pipermail/openstack-dev/2017-July/119736.html)
people bring to the discussion are a) varied, b) assumptions.

It feels likely that these discussions will become more fraught during
times of pressure but have no easy answers. As long as the discussions
don't devolve into name calling, I think each repeat round is useful
as it brings new insights to the old hands and keeps the new hands
informed of stuff that matters. Curtailing the discussion simply
because we have been over it before is disrespectful to the people who
continue to care and to the people for whom it is new.

I still think we haven't fully expressed the answers to the questions
about the value and cost that any project being officially in
OpenStack has for that project or for OpenStack. I'm not asserting
anything about the values or the costs; knowing the answers is simply
necessary to have a valid conversation.

# Glare

The conversation about [Glare becoming
official](http://lists.openstack.org/pipermail/openstack-dev/2017-July/119442.html)
continued, but more slowly than before. The plan at this stage is to
discuss the issues in person at the PTG where the Glare project will
have some space. [ttx made a brief
summary](http://lists.openstack.org/pipermail/openstack-dev/2017-July/119818.html);
there's no objection to Glare becoming official unless there is some
reason to believe it will result in issues for Glance (which is by no
means pre-determined).

# SIGs

The new openstack-sigs mailing list was opened with a deliberately
provocative thread on [How SIG Work Gets
Done](http://lists.openstack.org/pipermail/openstack-sigs/2017-July/03.html).
This resulted in comments on how OpenStack work gets done, how open
source work gets done, and even whether open source behaviors fully apply
in the OpenStack context.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] scheduling with custom resouce classes

2017-07-19 Thread Chris Dent

On Wed, 19 Jul 2017, Balazs Gibizer wrote:


We are trying to get some help from the related functional test [5] but
honestly we still need some time to digest that LOCs. So any direct
help is appreciated.


I managed to create a functional test case that reproduces the above problem 
https://review.openstack.org/#/c/485088/


Excellent, thank you. I was planning to look into repeating this
today, will first look at this test and see what I can see. Your
experimentation is exactly the sort of stuff we need right now, so
thank you very much.


BTW, should I open a bug for it?


I also filed a bug so that we can track this work 
https://bugs.launchpad.net/nova/+bug/1705071


I guess Jay and Matt have already fixed a part of this, but not the
whole thing.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] placement/resource providers update 28

2017-07-14 Thread Chris Dent


Placement update 28.

# What Matters Most

Still claims in the scheduler. It's getting closer, the current
hiccup is dealing with things like a resize on the same host. Diligent
work and discussion in progress. The related changes are in the stack
near:

https://review.openstack.org/#/c/483564/

# What's Changed

Lots of refactoring in the scheduler related unit tests.

# Help Wanted

Areas where volunteers are needed.

* General attention to bugs tagged placement:
  https://bugs.launchpad.net/nova/+bugs?field.tag=placement

# Main Themes

## Claims in the Scheduler

As linked above, the claims in the scheduler work is in this stack:

https://review.openstack.org/#/c/483564/

## Custom Resource Classes for Ironic

A spec for custom resource classes is being updated to reflect the
need to update the flavor and allocations of a previously allocated
ironic node that now has a custom resource class (such as
CUSTOM_SILVER_IRON):

https://review.openstack.org/#/c/481748/

Work has started on the implementation of that today, but as far as I
can see nothing is up for review yet; it will be soon. This functionality
needs to be in place or we will continue to manage ironic
inventory poorly for another entire cycle.

## Traits

The concept of traits now exists in the placement service, but
filtering resource providers on traits is in flux. With the advent
of /allocation_candidates as the primary scheduling interface, that
needs to support traits. Work for that is in a stack starting at

 https://review.openstack.org/#/c/478464/

It's not yet clear if we'll want to support traits at both
/allocation_candidates and /resource_providers. I think we should,
but the immediate need is on /allocation_candidates.

There's some proposed code to get the latter started:

 https://review.openstack.org/#/c/474602/
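
For orientation, the kind of request involved looks roughly like the
gabbi-style sketch below. The resources syntax is the stable part;
how required traits will be expressed in the query string is exactly
the part that is in flux, so none is shown. The microversion and
numbers are assumptions.

```yaml
tests:
    - name: request allocation candidates for a simple workload
      GET: /allocation_candidates?resources=VCPU:1,MEMORY_MB:512,DISK_GB:10
      request_headers:
          x-auth-token: $ENVIRON['TOKEN']
          openstack-api-version: placement 1.10
      status: 200
```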

## Shared Resource Providers

Support for shared resource providers is "built in" to the
/allocation_candidates concept and one of the drivers for having it.

There was a thread on the dev list recently about using them with
custom resource classes which may be instructive:

http://lists.openstack.org/pipermail/openstack-dev/2017-July/119648.html

## Nested Resource Providers

Work continues on nested resource providers.

   
https://review.openstack.org/#/q/status:open+topic:bp/nested-resource-providers

The need with these is simply more review, but they are behind
claims in priority.

## Docs

Lots of placement-related api docs have merged or are in progress:

https://review.openstack.org/#/q/status:open+topic:cd/placement-api-ref

Setting up the official publishing job for the api ref is on hold
until the content has been migrated to the locations specified by the
docs migration that is currently in progress:


http://specs.openstack.org/openstack/docs-specs/specs/pike/os-manuals-migration.html

Some changes have been proposed to document the scheduler's
workflow, including visual aids, starting at:

 https://review.openstack.org/#/c/475810/

# Other Code/Specs

* https://review.openstack.org/#/c/472378/
A proposed fix to using multiple config locations with the
placement wsgi app. There's some active discussion on whether the
solution in mind is the right solution, or even whether the bug is
a bug (it is!).

* https://review.openstack.org/#/c/470578/
Add functional test for local delete allocations

* https://review.openstack.org/#/c/427200/
   Add a status check for legacy filters in nova-status.

* https://review.openstack.org/#/c/469048/
 Provide more information about installing placement

* https://review.openstack.org/#/c/468928/
 Disambiguate resource provider conflict message

* https://review.openstack.org/#/c/468797/
 Spec for requesting traits in flavors

* https://review.openstack.org/#/c/480379/
   ensure shared RP maps with correct root RP
   (Some discussion on this one about what the goal is and whether the
   approach is the right one.)

* https://review.openstack.org/#/c/483506/
  Call _update fewer times in the resource tracker

* https://review.openstack.org/#/c/483460/
  Retry resource provider registration when session's service
  catalog does not have placement

* https://review.openstack.org/#/c/452006/
  A functional test to confirm that migration between two different
  cells is not allowed. Included here because it uses the
  PlacementFixture and may be experiencing the bug that
  https://review.openstack.org/#/c/483564/ is trying to fix.

# End

Thanks for reading this far. Now please go review some of the things
linked above. Your prize is a reservation on a delicate thrown
porcelain cup or bowl, hand made in Cornwall after I retire.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [nova][placement] scheduling with custom resouce classes

2017-07-13 Thread Chris Dent

On Thu, 13 Jul 2017, Balazs Gibizer wrote:

/placement/allocation_candidates?resources=CUSTOM_MAGIC%3A512%2CMEMORY_MB%3A64%2CVCPU%3A1" 
but placement returns an empty response. Then nova scheduler falls back to 
legacy behavior [4] and places the instance without considering the custom 
resource request.


As far as I can tell at least one missing piece of the puzzle here
is that your MAGIC provider does not have the
'MISC_SHARES_VIA_AGGREGATE' trait. It's not enough for the compute
and MAGIC to be in the same aggregate, the MAGIC needs to announce
that its inventory is for sharing. The comments here have a bit more
on that:


https://github.com/openstack/nova/blob/master/nova/objects/resource_provider.py#L663-L678

It's quite likely this is not well documented yet as this style of
declaring that something is shared was a later development. The
initial code added that support for GET /resource_providers; it was
later reused for GET /allocation_candidates:

https://review.openstack.org/#/c/460798/

The other thing to be aware of is that GET /allocation_candidates is
in flight. It should be stable on the placement service side, but the
way the data is being used on the scheduler side is undergoing
change as we speak:

https://review.openstack.org/#/c/482381/

Then I tried to connect the compute provider and the MAGIC provider to the 
same aggregate via the placement API but the above placement request still 
resulted in empty response. See my exact steps in [5].


This still needs to happen, but you also need to put the trait
mentioned above on the magic provider, the docs for that are in progress
on this review

https://review.openstack.org/#/c/474550/

and a rendered version:


http://docs-draft.openstack.org/50/474550/8/check/gate-placement-api-ref-nv/2d2a7ea//placement-api-ref/build/html/#update-resource-provider-traits
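
Putting those two steps together, a sketch of the missing setup as
gabbi-style requests (the UUIDs, the generation, and the microversion
pin are assumptions; at 1.10 the aggregates body is a plain list):

```yaml
defaults:
    request_headers:
        x-auth-token: $ENVIRON['TOKEN']
        content-type: application/json
        openstack-api-version: placement 1.10

tests:
    - name: announce that the MAGIC inventory is for sharing
      PUT: /resource_providers/$ENVIRON['MAGIC_RP_UUID']/traits
      data:
          # generation must match the provider's current value
          resource_provider_generation: 1
          traits:
              - MISC_SHARES_VIA_AGGREGATE
      status: 200

    - name: put the MAGIC provider in the compute node's aggregate
      PUT: /resource_providers/$ENVIRON['MAGIC_RP_UUID']/aggregates
      data:
          - $ENVIRON['AGG_UUID']
      status: 200
```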


Am I still missing some environment setup on my side to make it work?
Is the work in [1] incomplete?
Are the missing pieces in [2] needed to make this use case work?

If more implementation is needed then I can offer some help during Queens 
cycle.


There's definitely more to do and your help would be greatly
appreciated. It's _fantastic_ that you are experimenting with this
and sharing what's happening.

To make the above use case fully functional I realized that I need a service 
that periodically updates the placement service with the state of the MAGIC 
resource like the resource tracker in Nova. Is there any existing plans 
creating a generic service or framework that can be used for the tracking and 
reporting purposes?


As you've probably discovered from your experiments with curl,
updating inventory is pretty straightforward (if you have a TOKEN)
so we decided to forego making a framework at this point. I had some
code long ago that demonstrated one way to do it, but it didn't get
any traction:

https://review.openstack.org/#/c/382613/

That tried to be a simple python script using requests that did the
bare minimum and would be amenable to cron jobs and other simple
scripts.
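
In the same bare minimum spirit, the periodic report could even be a
gabbit run from cron with gabbi-run; a sketch, with the UUID,
generation and totals all hypothetical:

```yaml
# run as something like: gabbi-run $PLACEMENT_HOST -- magic-inventory.yaml
tests:
    - name: report current MAGIC inventory to placement
      PUT: /resource_providers/$ENVIRON['MAGIC_RP_UUID']/inventories
      request_headers:
          x-auth-token: $ENVIRON['TOKEN']
          content-type: application/json
      data:
          resource_provider_generation: 2
          inventories:
              CUSTOM_MAGIC:
                  total: 512
                  reserved: 0
                  min_unit: 1
                  max_unit: 512
                  step_size: 1
                  allocation_ratio: 1.0
      status: 200
```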

I hope some of the above is helpful. Jay, Ed, Sylvain or Dan may come
along with additional info.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 28

2017-07-11 Thread Chris Dent
ail/openstack-dev/2017-June/119075.html)
instead of limiting open access for people who want to create more openness.

Does anyone recall where this topic landed, or if it hasn't yet landed, does
anyone have good ideas on how to get it to land?

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glare][TC] Application for inclusion of Glare in the list of official projects

2017-07-11 Thread Chris Dent

On Tue, 11 Jul 2017, Mikhail Fedosin wrote:


For example, deactivating an image in Glance looks like *POST*
/v2/images/{image_id}/actions/deactivate with empty body.
At one time, Chris Dent advised us to avoid such decisions, and simply
change the status of the artifact to 'deactivated' using *PATCH*, which we
did.


Indeed I did. The point of that was to avoid "actions" style URLs on
resources that already have that information in their
representations so that the interface is more RESTful and doesn't
have a profusion of verby URLs. The other option is to PUT a full
representation with the status changed.

But that's not the point here. The issue is that in order for Glare
to provide a seamless compatibility layer with Glance it needs to be
able to present a facade which is _identical_ to Glance. Not mostly
the same with some improvements, but identical, with all the same
warts.

This provides a critical part in a smooth migration plan. As people
become aware of glare being there, they can start taking advantage
of the new features in their new code or code that they are ready to
update, without having to update old stuff.

If Glare has fairly good separation between the code that handles
URLs and processes bodies (in and out) and the code that does stuff
with those bodies[1], it ought to be somewhat straightforward to
create such a facade.

[1] Not gonna use model, view, controller here; those terms have
never been accurate for web-based APIs.
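
To make the contrast concrete, a sketch of the two styles (the
Glance action URL is real; the Glare path and the JSON-PATCH media
type are assumptions for illustration only):

```yaml
tests:
    # the verby style being avoided
    - name: deactivate an image via an action URL
      POST: /v2/images/$ENVIRON['IMAGE_ID']/actions/deactivate
      status: 204

    # the representation-oriented style
    - name: deactivate an artifact by changing its status
      PATCH: /artifacts/images/$ENVIRON['ARTIFACT_ID']
      request_headers:
          content-type: application/json-patch+json
      data:
          - op: replace
            path: /status
            value: deactivated
      status: 200
```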


--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glare][TC] Application for inclusion of Glare in the list of official projects

2017-07-10 Thread Chris Dent

On Mon, 10 Jul 2017, Monty Taylor wrote:

However, as you said, conceptually the calls are very similar so making an 
API controller that can be registered in the catalog as "image" should be 
fairly easy to do, no?


In general I think we should always keep this strategy in mind when
we are considering leapfrogging technologies for some reason. And we
should consider leapfrogging more often.

(Note that I'm using "consider" and not "choose" very much on
purpose.)

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Add API Sample tests for Placement APIs

2017-07-07 Thread Chris Dent

On Fri, 7 Jul 2017, Chris Dent wrote:


Thanks for sticking with this conversation and making your goals
clear.


Both of the pull requests were merged and there's now a new version
of gabbi, 1.35.0, at: https://pypi.python.org/pypi/gabbi

https://gabbi.readthedocs.io/en/latest/release.html#id1

I'll work on the requirements change asap so we can start using some
of this stuff with placement.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] placement/resource providers update 27

2017-07-07 Thread Chris Dent
* https://review.openstack.org/#/c/427200/
  Add a status check for legacy filters in nova-status.

* https://review.openstack.org/#/c/469048/
Provide more information about installing placement

* https://review.openstack.org/#/c/468928/
Disambiguate resource provider conflict message

* https://review.openstack.org/#/c/468797/
Spec for requesting traits in flavors

* https://review.openstack.org/#/c/480379/
  ensure shared RP maps with correct root RP
  (Some discussion on this one what the goal is and whether the
  approach is the right one.)

# End

That's all I've got this week, next week I should be a bit more
caught up and aware of any bits I've missed. No prize this week, but
maybe next week.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Add API Sample tests for Placement APIs

2017-07-07 Thread Chris Dent

On Fri, 7 Jul 2017, Shewale, Bhagyashri wrote:


Imagine one api with version x returning 5 attributes in a response and
micro-version x.1 returning 6 attributes in a response, and for some reason
there is a regression and micro-version x starts returning 6 attributes
instead of 5. Then I expect the tests should fail, but as per the current
tests it won't be detected, as we are verifying only the required attributes
from that specific micro-version.


For this specific case you could count the number of returned
attributes using the `len` functionality. If you wanted.
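
A sketch of that check (the path, microversion and count are
hypothetical, and it assumes `len` applied to a JSON object counts
its keys):

```yaml
tests:
    - name: version x returns exactly five attributes
      GET: /resource_providers/$ENVIRON['RP_UUID']
      request_headers:
          openstack-api-version: placement 1.4
      status: 200
      response_json_paths:
          $.`len`: 5
```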

I've gone ahead, however, and implemented the read from disk
functionality that we've discussed in earlier messages. If you and
others could have a look at

https://github.com/cdent/gabbi/pull/216

that would be great. If we decide it is suitable I'll merge it and
make a new release. There's another change pending which might also
be useful:

https://github.com/cdent/gabbi/pull/215

Thanks for sticking with this conversation and making your goals
clear.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-wg/news

2017-07-06 Thread Chris Dent


Greetings OpenStack community,

For the first time in a while, the entire API-WG core was present, along with a 
fly by from mordred. The main topic of conversation was related to a proposed 
cloud profile document [4] including a static version of a thing like the 
service catalog. More discussion and thought will happen on the review but the 
gist is that an authenticated potential user of the cloud should be able to 
inspect what's available. There are no guidelines ready for freeze or merge 
this week.

# Newly Published Guidelines

None this week.

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None this week

# Guidelines Currently Under Review [3]

* A (shrinking) suite of several documents about doing version and service 
discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API WG, please address your concerns in 
an email to the OpenStack developer mailing list[1] with the tag "[api]" in the 
subject. In your email, you should include any relevant reviews, links, and comments to 
help guide the discussion of the specific challenge you are facing.

To learn more about the API WG mission and the work we do, see OpenStack API 
Working Group [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://review.openstack.org/#/c/459869/


Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_wg/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Add API Sample tests for Placement APIs

2017-07-05 Thread Chris Dent


Thanks for providing a bit more information on your goals, that
helps shape my answers a bit more usefully. See within.

On Wed, 5 Jul 2017, Shewale, Bhagyashri wrote:


So I'd say the api is verified. What is missing, and could be useful, is using those
tests to get accurate and up to date representations of the JSON request and response
bodies. If that's something we'd like to pursue as I said in my other message the
'verbose' functionality that can be provided in gabbi-based tests should be
able to help.


I have checked the 'verbose' functionality, but that doesn't verify the
entire response; it simply prints the headers and response on the console.


That's correct. My suggestion to use the 'verbose' functionality was
if your main goal was to create samples to be used in documentation:
a suite of gabbi tests can run which output those samples as needed,
wherever they need to go.


On the current master code, few of the gabbi tests verify the entire response 
and the remaining verify’s only specific attributes from the response.


This is mostly intentional. It doesn't make sense to me that we
would verify serialization code at the level of API tests.

The difference you see in the extent to which each tests validates a
response is the result of different people writing the tests.


So instead of verifying each and every key-value pair from the response object,
it would be nice if gabbi could add support to
accept a response.json file and, during execution of the test, verify
whether all key-value pairs in the response.json file match the actual
response.


While I disagree with doing this (it makes the tests more fragile,
requires more pieces to be changed when things are changed, and tests
the serialization code in the api tests, rather than the API[1]) the
functionality that you're describing can easily exist in gabbi so if
you and others decide that it is useful it can be done, either in
gabbi itself or in a custom content handler[2]. Is it your
intent to compare the abstract nestable object represented by the
JSON, or the string? If it's the latter there's existing code out
there that does that[3] but that's probably not a good choice as it
will break as soon as the format of the JSON changes for some
reason.

(I'm happy to write the code that does full object comparison if we
decide it's desirable or help land a pull request.)

[1] From https://gabbi.readthedocs.io/en/latest/jsonpath.html
This is not a technique that should be used frequently as it can
lead to difficult to read tests and it also indicates that your
gabbi tests are being used to test your serializers and data
models, not just your API interactions.

[2] https://gabbi.readthedocs.io/en/latest/handlers.html
And an example:

[3] 
https://github.com/hogarthww/gabbi-tools/blob/master/src/gabbi_tools/response_handlers.py

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Move away from meeting channels

2017-06-26 Thread Chris Dent

On Mon, 26 Jun 2017, Flavio Percoco wrote:


So, should we let teams host IRC meetings in their own channels?


Yes.


Thoughts?


I think the silo-ing concern is, at least recently, not relevant on
two fronts: IRC was never a good fix for that and silos gonna be
silos.

There are so many meetings and so many projects there already are
silos and by encouraging people to use the mailing lists more we are
more effectively enabling diverse access than IRC ever could,
especially if the IRC-based solution is the impossible "always be on
IRC, always use a bouncer, always read all the backlogs, always read
all the meeting logs".

The effective way for a team not to be a silo is for it to be
better about publishing accessible summaries of itself (as in: make
more email) and participating in cross project related reviews. If
it doesn't do that, that's the team's loss.

Synchronous communication is fine for small groups of speakers but
that's pretty much where it ends.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Add API Sample tests for Placement APIs

2017-06-23 Thread Chris Dent

On Thu, 22 Jun 2017, Shewale, Bhagyashri wrote:


* who or what needs to consume these JSON samples?

The users of placement API can rely on the request/response for different 
supported placement versions based on some tests running on the OpenStack CI 
infrastructure.
Right now, most of the placement APIs are well documented and others are in 
progress but there are no tests to verify these APIs.


Either we are misunderstanding each other, or you're not aware of
what the gabbi tests are doing. They verify the placement API and
provide extensive coverage of the entire placement HTTP framework,
including the accuracy of response codes in edge cases not on the
"happy path". Coverage is well over 90% for the group of files in
nova/api/openstack/placement (except for the wsgi deployment script
itself) when the
nova/tests/functional/api/openstack/placement/test_placement_api.py
functional test runs all the gabbi files in
nova/tests/functional/api/openstack/placement/gabbits/.

So I'd say the api is verified. What is missing, and could be
useful, is using those tests to get accurate and up to date
representations of the JSON request and response bodies. If that's
something we'd like to pursue as I said in my other message the
'verbose' functionality that can be provided in gabbi-based tests
should be able to help.


We would like to write new functional test to consume these json samples to 
verify each placement API for all supported versions.


Those gabbi files also test functionality at microversion
boundaries.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] Add API Sample tests for Placement APIs

2017-06-22 Thread Chris Dent

On Wed, 21 Jun 2017, Shewale, Bhagyashri wrote:


I would like to write functional tests to check the exact req/resp for each
placement API for all supported versions similar
to what is already done for other APIs under 
nova/tests/functional/api_sample_tests/api_samples/*.
These request/response json samples can be used by the api.openstack.org and in 
the manuals.

There are already functional tests written for placement APIs under 
nova/tests/functional/api/openstack/placement,
but these tests don't check the entire HTTP response for each API for all
supported versions.

I think adding such functional tests for checking response for each placement 
API would be beneficial to the project.
If there is an interest to create such functional tests, I can file a new 
blueprint for this activity.


As Matt points out elsewhere, we made a choice to not use the
api_samples format when developing placement. There were a few
different reasons for this:

* we wanted to limit the amount of nova code used in placement, to
  ease the eventual extraction of placement to its own code
  repository

* some of us wanted to use gabbi [1] as the testing framework as it is
  nicely declarative [2] and keeps the request and response in the same
  place

* we were building the api framework from scratch and doing what
  amounts to test driven development [2] using functional tests and
  gabbi works well for that

* testing the full response isn't actually a great way to test an
  API in a granular way; the info is useful to have but it isn't a
  useful test (from a development standpoint)

But, as you've noted, this means there isn't a single place to go to
see a collection of a full request and response bodies. That
information can be extracted from the gabbi tests, but it's a) not
immediately obvious, b) requires interpretation.

Quite some time ago I started a gabbi-based full request and
response suite of tests [3] but it was never finished and now is
very out of date.

If the end goal is to have a set of documents that pair all the
possible requests (with bodies) with all possible responses (with
bodies), gabbi could easily create this in its "verbose" mode [4]
when run as functional tests or with the gabbi-run [5] command that
can run against a running service.
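
For instance, a single test flagged verbose will print its full
request and response when the suite runs; a minimal sketch:

```yaml
tests:
    - name: capture a full sample exchange for the docs
      # verbose also accepts 'headers' or 'body' to limit the output
      verbose: True
      GET: /resource_providers
      status: 200
```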

So I would suggest that we more completely explain the goal or goals
that you're trying to satisfy and then see how we can use the
existing tooling to fill them. Some questions related to that:

* who or what needs to consume these JSON samples?
* do they need to be tests against current code, or are they
  primarily reference info?
* what are the value propositions associated with fully validating
  the structure and content of the response bodies?

We can relatively easily figure out some way to drive gabbi to
produce the desired information, but first we want to make sure that
the information produced is going to be the right info (that is,
will satisfy the needs of whoever wants it).

I am, as Matt mentioned, on holiday at the moment so my response to
any future messages may be delayed, but I'll catch up as I'm able.

[1] https://gabbi.readthedocs.io/en/latest/
[2] 
https://github.com/openstack/nova/tree/master/nova/tests/functional/api/openstack/placement/gabbits
[3] https://review.openstack.org/#/c/370204/
[4] verbose mode can print out request and response headers, bodies,
or both. If the bodies are JSON, it will be pretty printed.
[5] https://gabbi.readthedocs.io/en/latest/runner.html

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-wg/news

2017-06-15 Thread Chris Dent


Greetings OpenStack community,

Today's meeting was short and sweet. We merged some frozen reviews (see below) and
determined that other pending reviews are still pending. The survey to get some feedback 
on how to express the earliest possible date a minimum microversion might be raised didn't get a ton
of votes, but those votes that did happen suggest that the original 
"not_before" is the least worst choice.

# Newly Published Guidelines

The following reviews have all merged

* Add guideline about consuming endpoints from catalog
  https://review.openstack.org/#/c/462814/

* Add support for historical service type aliases
  https://review.openstack.org/#/c/460654/

* Describe the publication of service-types-authority data
  https://review.openstack.org/#/c/462815/

They all modified the same document, resulting in: 
http://specs.openstack.org/openstack/api-wg/guidelines/consuming-catalog.html

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None this week.

# Guidelines Currently Under Review [3]

* Microversions: add next_min_version field in version body
  https://review.openstack.org/#/c/446138/

* A (shrinking) suite of several documents about doing version discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API WG, please address your concerns in 
an email to the OpenStack developer mailing list[1] with the tag "[api]" in the 
subject. In your email, you should include any relevant reviews, links, and comments to 
help guide the discussion of the specific challenge you are facing.

To learn more about the API WG mission and the work we do, see OpenStack API 
Working Group [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_wg/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-15 Thread Chris Dent

On Thu, 15 Jun 2017, Chris Dent wrote:


On Thu, 15 Jun 2017, Thierry Carrez wrote:


I'd like to propose that we introduce a new concept: "OpenStack-Hosted
projects". There would be "OpenStack projects" on one side, and
"Projects hosted on OpenStack infrastructure" on the other side (all
still under the openstack/ git repo prefix). We'll stop saying "official
OpenStack project" and "unofficial OpenStack project". The only
"OpenStack projects" will be the official ones. We'll chase down the
last mentions of "big tent" in documentation and remove it from our
vocabulary.


I agree that something needs to change, but also agree with some of
the followups that the distinction you're proposing isn't
particularly memorable.


I should also say that despite my previous comments, discussion
resolving those issues should not delay sanitizing the term "big
tent".

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-15 Thread Chris Dent

On Thu, 15 Jun 2017, Thierry Carrez wrote:


I'd like to propose that we introduce a new concept: "OpenStack-Hosted
projects". There would be "OpenStack projects" on one side, and
"Projects hosted on OpenStack infrastructure" on the other side (all
still under the openstack/ git repo prefix). We'll stop saying "official
OpenStack project" and "unofficial OpenStack project". The only
"OpenStack projects" will be the official ones. We'll chase down the
last mentions of "big tent" in documentation and remove it from our
vocabulary.


I agree that something needs to change, but also agree with some of
the followups that the distinction you're proposing isn't
particularly memorable. Nor, if we put ourselves in the shoes of an
outside observer, is "OpenStack project" versus "hosted on OpenStack
infrastructure" particularly meaningful. From many angles it all
looks like OpenStack.

Part of the issue is that the meaning and value of being an
"OpenStack project" (an "official" one) is increasingly diffuse.
I suspect that if we could make that more concrete then things like
names would be easier to decide. Some things we might ask ourselves
to help clarify the situation include (as usual, some of these
questions may have obvious answers, but enumerating them can help
make things explicit):

* What motivates a project to seek status as an OpenStack project?
  * What do they get?
  * What do they lose?

* What motivates OpenStack to seek more projects?
  * What does OpenStack get?
  * What does OpenStack lose?
  * What gets more complicated when there are more projects?

* Why would a project choose to be "hosted on OpenStack
  infrastructure" instead of being an "OpenStack project"?

* Why should OpenStack be willing to host projects that are not
  "OpenStack projects"?

* When a project goes from the status of "OpenStack project" to
  "hosted on OpenStack infrastructure" (as currently being discussed
  with regard to Fuel) what is the project losing, what does the
  change signify and why should anyone care?

(I'm sure other people can come up with a few more questions.)

I think that if we're going to focus on this issue then we need to
make sure that we focus on marshalling the value and resources that
are required to support a project. That is: it has to be worth
everyone's time and energy to be and have (official) projects. It's
likely that this could mean that some projects are unable to be
(official) projects anymore.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 24

2017-06-13 Thread Chris Dent


No meeting this week, but some motion on a variety of proposals and
other changes. As usual, this document doesn't report on everything
going on with the Technical Committee. Instead it tries to focus on
those things which I subjectively believe may have impact on
community members.

I will be taking some time off between now and the first week of
July so there won't be another of these until July 11th unless
someone else chooses to do one.

# New Things

No recently merged changes in policy, plans, or behavior. The office
hours announced in [last
week's](https://anticdent.org/tc-report-23.html) report are
happening and the associated IRC channel, `#openstack-tc` is gaining
members and increased chatter.

# Pending Stuff

## Queens Community Goals

Progress continues on the discussion surrounding community goals for
the Queens cycle. There are enough well defined goals that we'll
have to pick from amongst the several that are available to narrow
it down. I would guess that at some point in the not too distant future
there will be some kind of aggregated presentation to help us all
decide. I would guess that since I just said that, it will likely be me.

## Managing Binary Artifacts

With the addition of a requirement to include architecture in the
metadata associated with the artifact, the [Guidelines for managing
releases of binary artifacts](https://review.openstack.org/#/c/469265/)
appears to be close to making everyone happy. This change will be
especially useful for those projects that want to produce containers.

## PostgreSQL

There was some difference of opinion on the next steps on
documenting the state of PostgreSQL, but just in the last couple of
hours today we seem to have reached some agreement to do only those
things on which everyone agrees. [Last
week's](https://anticdent.org/tc-report-23.html) report has a
summary of the discussion that was held in a meeting that week. Dirk
Mueller has taken on the probably not entirely pleasant task of
consolidating the feedback. His latest work can be found at [Declare
plainly the current state of PostgreSQL in
OpenStack](https://review.openstack.org/#/c/427880/). The briefest
of summaries of the difference of opinion is that for a while the
title of that review had "MySQL" where "PostgreSQL" is currently.

## Integrating Feedback on the 2019 TC Vision

The agreed next step on the [Draft technical committee vision for
public feedback](https://review.openstack.org/#/c/453262/) has been
to create a version which integrates the most unambiguous feedback
and edits the content to have more consistent tense, structure and
style. That's now in progress at [Begin integrating vision feedback
and editing for style](https://review.openstack.org/#/c/473620/).
The new version includes a few TODO markers for adding things like a
preamble that explains what's going on. As the document evolves
we'll be simultaneously discussing the ambiguous feedback and
determining what we can use and how that should change the document.

## Top 5 Help Wanted List

The vision document mentions a top ten hit list that will be used in
2019 to help orient contributors to stuff that matters. Here in 2017
the plan is to start smaller with a top 5 list of areas where new
individuals and organizations can make contributions that will have
immediate impact. The hope is that by having a concrete and highly
visible list of stuff that matters people will be encouraged to
participate in the most productive ways available. [Introduce Top 5
help wanted list](https://review.openstack.org/#/c/466684/) provides
the framework for the concept. Once that framework merges anyone is
empowered to propose an item for the list. That's the best part.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Allocating Complex Resources

2017-06-09 Thread Chris Dent

On Fri, 9 Jun 2017, Dan Smith wrote:


In other words, I would expect to be able to explain the purpose of the
scheduler as "applies nova-specific logic to the generic resources that
placement says are _valid_, with the goal of determining which one is
_best_".


This sounds great as an explanation. If we can reach this we done good.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Allocating Complex Resources

2017-06-09 Thread Chris Dent

On Fri, 9 Jun 2017, Jay Pipes wrote:


Sorry, been in a three-hour meeting. Comments inline...


Thanks for getting to this, it's very helpful to me.


* Part of the reason for having nested resource providers is because
  it can allow affinity/anti-affinity below the compute node (e.g.,
  workloads on the same host but different numa cells).


Mmm, kinda, yeah.


What I meant by this was that if it didn't matter which of more than
one nested rp was used, then it would be easier to simply consider
the group of them as members of an inventory (that came out a bit
more in one of the later questions).


* Does a claim made in the scheduler need to be complete? Is there
  value in making a partial claim from the scheduler that consumes a
  vcpu and some ram, and then in the resource tracker is corrected
  to consume a specific pci device, numa cell, gpu and/or fpga?
  Would this be better or worse than what we have now? Why?


Good question. I think the answer to this is probably pretty theoretical at 
this point. My gut instinct is that we should treat the consumption of 
resources in an atomic fashion, and that transactional nature of allocation 
will result in fewer race conditions and cleaner code. But, admittedly, this 
is just my gut reaction.


I suppose if we were more spread oriented than pack oriented, an
allocation of vcpu and ram would almost operate as a proxy for a
lock, allowing the later correcting allocation proposed above to be
somewhat safe because other near concurrent emplacements would be
happening on some other machine. But we don't have that reality.
I've always been in favor of making the allocation as early as
possible. I remember those halcyon days when we even thought it
might be possible to make a request and claim of resources in one
HTTP request.


  that makes it difficult or impossible for an allocation against a
  parent provider to be able to determine the correct child
  providers to which to cascade some of the allocation? (And by
  extension make the earlier scheduling decision.)


See above. The sorting/weighing logic, which is very much deployer-defined
and reeks of customization, is what would need to be added to the placement
API.


And enough of that sorting/weighing logic is likely to do with child or
shared providers that it's not possible to constrain the weighing
and sorting to solely compute nodes? Not just whether the host is on
fire, but the shared disk farm too?

Okay, thank you, that helps set the stage more clearly and leads
straight to my remaining big question, which is asked on the spec
you've proposed:

https://review.openstack.org/#/c/471927/

What are the broad-strokes mechanisms for connecting the non-allocation
data in the response to GET /allocation_requests to the sorting and
weighing logic? Answering on the spec works fine for me; I'm just
repeating it here in case people following along want the transition
over to the spec.
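
(To make that question concrete, here is the sort of glue I am
imagining, sketched in Python. This is my own invention, not anything
from the spec; the key names are guesses at the straw man's shape and
weigh() stands in for deployer-defined weighing logic.)

    # Hypothetical sketch: rank the allocation requests returned by
    # placement using per-provider summary data.
    def choose_allocation(allocation_requests, provider_summaries, weigh):
        def score(request):
            uuids = [alloc['resource_provider']['uuid']
                     for alloc in request['allocations']]
            return sum(weigh(provider_summaries[u]) for u in uuids)
        return max(allocation_requests, key=score)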

Thanks again.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] placement/resource providers update 26

2017-06-09 Thread Chris Dent
* https://review.openstack.org/#/q/project:openstack/osc-placement
  Work has started on an osc-plugin that can provide a command
  line interface to the placement API.
  It's quite likely that this code is going to need to be adopted by
  someone new.

* https://review.openstack.org/#/c/457636/
  Devstack change to install that plugin.

* https://review.openstack.org/#/c/469037/
  Cleanups for _schedule_instances()

* https://review.openstack.org/#/c/469047/
  Update placement.rst to link to more specs

* https://review.openstack.org/#/c/469048/
  Provide more information about installing placement

* https://review.openstack.org/#/c/468928/
  Disambiguate resource provider conflict message

* https://review.openstack.org/#/c/471067/
  Use util.extract_json in allocations handler

# End

\o/

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-wg/news

2017-06-08 Thread Chris Dent


Greetings OpenStack community,

Today's meeting was mostly devoted to two topics: which of Monty's several
patches were ready for freeze, and some naming issues with raising the minimum
microversion.

We decided three of Monty's patches are ready; they are listed below in the
"Freeze" section.

The naming issue related to an optional field we want to add to the microversion discovery 
document. Some projects wish to be able to signal that they are intending to raise the minimum 
microversion at a point in the future. The name for the next minimum version is fairly clear: 
"next_min_version". What's less clear is the name which can be used for the field that 
states the earliest date at which this will happen. This cannot be a definitive date because 
different deployments will release the new code at different times. We can only say "it will 
be no earlier than this time".

Naming this field has proven difficult. The original was "not_before", but that has no 
association with "min_version" so is potentially confusing. However, people who know how 
to parse the doc will know what it means so it may not matter. As always, naming is hard, so we 
seek input from the community to help us find a suitable name. This is something we don't want to 
ever have to change, so it needs to be correct from the start. Candidates include:

* not_before
* not_raise_min_before
* min_raise_not_before
* earliest_min_raise_date
* min_version_eol_date
* next_min_version_effective_date
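
To make the shape concrete, here is a sketch of a version discovery
entry carrying the proposed fields (hand-rolled by me, values
invented; only "next_min_version" and the candidate name come from the
proposal):

    # A hypothetical microversion discovery entry. The last two fields
    # are the proposed additions; everything else is existing convention.
    version_entry = {
        "id": "v2.1",
        "status": "CURRENT",
        "min_version": "2.1",
        "max_version": "2.53",
        "next_min_version": "2.10",  # the planned future minimum
        "not_before": "2018-01-01",  # no deployment raises before this date
    }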

If you have an opinion on any of these, or a better suggestion, please let us
know, either on the review at <https://review.openstack.org/#/c/446138/>, or in
response to this message.

# Newly Published Guidelines

Nothing new at this time.

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

* Add guideline about consuming endpoints from catalog
  https://review.openstack.org/#/c/462814/

* Add support for historical service type aliases
  https://review.openstack.org/#/c/460654/

* Describe the publication of service-types-authority data
  https://review.openstack.org/#/c/462815/

# Guidelines Currently Under Review [3]

* Microversions: add next_min_version field in version body
  https://review.openstack.org/#/c/446138/

* A suite of several documents about doing version discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API WG, please address your concerns in 
an email to the OpenStack developer mailing list[1] with the tag "[api]" in the 
subject. In your email, you should include any relevant reviews, links, and comments to 
help guide the discussion of the specific challenge you are facing.

To learn more about the API WG mission and the work we do, see OpenStack API 
Working Group [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] Start at https://review.openstack.org/#/c/462814/
[5] https://review.openstack.org/#/c/446138/


Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_wg/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 23

2017-06-06 Thread Chris Dent
…" a transition to MySQL from
PostgreSQL might be. This work is already planned or in progress by
SUSE. I, and maybe a few others (not entirely clear), feel that
while this is useful work, including it in a resolution about the
current state of PostgreSQL support is at least irrelevant and at
worst effectively a statement of a desire to kill support for
PostgreSQL. Publishing such a statement, even if casually and
without intent, could signal that effort to improve the attention
PostgreSQL gets would be wasted effort.

Which leads to one of the philosophical concerns: Having even
limited support for PostgreSQL means that OpenStack is demonstrating
support for the idea that the database layer should be an
abstraction, and that which RDBMS (or RDBMS interface-alike) is used in
a deployment is a deployer's choice. For some this is a sign of quality
and maturity (somewhat like being able to choose whichever WSGI
server you feel is ideal for your situation). For others, not
choosing a specific RDBMS builds in limitations that will prevent
OpenStack from being able to scale and upgrade elegantly.

We were unable to agree on this point but at least some people felt
it a topic we need to address in order to be able to fully resolve
the PostgreSQL question. On the other side of the same coin: since
there is as yet no resolution on the merit of a strong database
abstraction layer it would be inappropriate to overstate the
OpenStack commitment to MySQL.

The next step is that dirk has been volunteered to integrate the
latest feedback on the [first
proposal](https://review.openstack.org/#/c/427880/). Once that is
done, we will iterate there. People have committed to keeping their
concerns and feedback focused on making the document about
those things on which we agree.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Allocating Complex Resources

2017-06-06 Thread Chris Dent

On Mon, 5 Jun 2017, Ed Leafe wrote:


One proposal is to essentially use the same logic in placement
that was used to include that host in those matching the
requirements. In other words, when it tries to allocate the amount
of disk, it would determine that that host is in a shared storage
aggregate, and be smart enough to allocate against that provider.
This was referred to in our discussion as "Plan A".


What would help me is a greater explanation of whether, and if so how and
why, "Plan A" doesn't work for nested resource providers.

We can declare that allocating for shared disk is fairly deterministic
if we assume that any given compute node is only associated with one
shared disk provider.

My understanding is that this determinism is not the case with nested
resource providers because there's some fairly late-in-the-game
choosing of which pci device or which numa cell is getting used.
The existing resource tracking doesn't have this problem because the
claim of those resources is made very late in the game. <- Is this
correct?

The problem comes into play when we want to claim from the scheduler
(or conductor). Additional information is required to choose which
child providers to use. <- Is this correct?

Plan B overcomes the information deficit by including more
information in the response from placement (as straw-manned in the
etherpad [1]) allowing code in the filter scheduler to make accurate
claims. <- Is this correct?
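
(For anyone who hasn't read the etherpad: my understanding of the Plan
B response is roughly the sketch below. The key names are my guesses
at the straw man's shape and the values are invented, so treat it as
illustrative only. It also shows the shared disk case from above, with
DISK_GB coming from a provider other than the compute node.)

    # Illustrative only: valid allocation requests paired with
    # summaries of the providers involved, so the scheduler can both
    # weigh candidates and later make an accurate claim.
    response = {
        "allocation_requests": [{
            "allocations": [
                {"resource_provider": {"uuid": "compute-node-uuid"},
                 "resources": {"VCPU": 1, "MEMORY_MB": 512}},
                {"resource_provider": {"uuid": "shared-storage-uuid"},
                 "resources": {"DISK_GB": 20}},
            ],
        }],
        "provider_summaries": {
            "compute-node-uuid": {
                "resources": {"VCPU": {"used": 2, "capacity": 16}},
            },
        },
    }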

For clarity and completeness in the discussion some questions for
which we have explicit answers would be useful. Some of these may
appear ignorant or obtuse and are mostly things we've been over
before. The goal is to draw out some clear statements in the present
day to be sure we are all talking about the same thing (or get us
there if not) modified for what we know now, compared to what we
knew a week or month ago.

* We already have the information the filter scheduler needs now by
  some other means, right?  What are the reasons we don't want to
  use that anymore?

* Part of the reason for having nested resource providers is because
  it can allow affinity/anti-affinity below the compute node (e.g.,
  workloads on the same host but different numa cells). If I
  remember correctly, the modelling and tracking of this kind of
  information in this way comes out of the time when we imagined the
  placement service would be doing considerably more filtering than
  is planned now. Plan B appears to be an acknowledgement of "on
  some of this stuff, we can't actually do anything but provide you
  some info, you need to decide". If that's the case, is the
  topological modelling on the placement DB side of things solely a
  convenient place to store information? If there were some other
  way to model that topology could things currently being considered
  for modelling as nested providers be instead simply modelled as
  inventories of a particular class of resource?
  (I'm not suggesting we do this, rather that the answer that says
  why we don't want to do this is useful for understanding the
  picture.)

* Does a claim made in the scheduler need to be complete? Is there
  value in making a partial claim from the scheduler that consumes a
  vcpu and some ram, and then in the resource tracker is corrected
  to consume a specific pci device, numa cell, gpu and/or fpga?
  Would this be better or worse than what we have now? Why?

* What is lacking in placement's representation of resource providers
  that makes it difficult or impossible for an allocation against a
  parent provider to be able to determine the correct child
  providers to which to cascade some of the allocation? (And by
  extension make the earlier scheduling decision.)

That's a start. With answers to at least some of these questions I
think the straw man in the etherpad can be more effectively
evaluated. As things stand right now it is a proposed solution
without a clear problem statement. I feel like we could do with a
clearer problem statement.

Thanks.

[1] https://etherpad.openstack.org/p/placement-allocations-straw-man

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] revised Postgresql deprecation patch for governance

2017-06-06 Thread Chris Dent


For people who have been following this topic, a reminder that later
today there will be a TC meeting dedicated to discussing the issues
captured by this thread[1], the related thread on active or passive
database approaches[2] and the two reviews about "what to do about
postgreSQL" [3][4].

It will be today (6 June) at 20.00 UTC.

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-May/116642.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117148.html
[3] https://review.openstack.org/#/c/427880/
[4] https://review.openstack.org/#/c/465589/

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] placement/resource providers update 25

2017-06-02 Thread Chris Dent



Placement update 25. Only 75 more to reach 100.

# What Matters Most

Claims against the placement API remain the highest priority. There's
plenty of other work in progress too which needs to advance. Lots of
links within.

# What's Changed

The entire shared resource providers stack has merged. This doesn't
mean we have support for them yet, but rather that it is possible to
express a query of the placement db that will include results
that are associated (via aggregates) with shared resource providers.

A new version of the os-traits library was required because the
routine it was using to walk modules could escape the local package,
leading to either brokenness or at least weirdness.

Work has begun on having project and user id information included in
allocations (see below).

Incremental progress across many other areas.

# Help Wanted

(This section _has_ changed since last time, removing some bug links
because the fixes have been started and are now linked below.)

Areas where volunteers are needed.

* General attention to bugs tagged placement:
   https://bugs.launchpad.net/nova/+bugs?field.tag=placement

* Helping to create api documentation for placement (see the Docs
   section below).

# Main Themes

## Claims in the Scheduler

Work is in progress on having the scheduler make resource claims.

  https://review.openstack.org/#/q/status:open+topic:bp/placement-claims

The current choice for how to do this is to pass instance uuids as a
separate parameter in the RPC call to select_destinations. This
information is required to be able to make the claims/allocations
(which are identified by consumer uuid).
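
(As a reminder of what such a claim amounts to at the HTTP level: a
sketch with invented values. The URL and body shape follow the
placement allocations interface as I understand it today; check the
in-progress api docs rather than trusting me.)

    # A claim is a PUT of allocations keyed by the consumer (instance)
    # uuid, e.g. PUT /allocations/{instance_uuid} with a body like:
    claim_body = {
        "allocations": [
            {"resource_provider": {"uuid": "compute-node-uuid"},
             "resources": {"VCPU": 1, "MEMORY_MB": 512, "DISK_GB": 20}},
        ],
    }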

## Traits

The main API is in place. Debate raged on how best to manage updates
of standard os-traits. Eventually a simple sync done once per
process seemed like the way to go, without having a cache:

https://review.openstack.org/#/c/469578/

This needs to address some concurrency issues.
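
(The concurrency issue is the usual one with a once-per-process guard.
Something along these lines, a generic double-checked sketch of mine
rather than the code under review, with _sync_standard_traits standing
in for whatever does the actual db work:)

    import threading

    _TRAITS_SYNCED = False
    _TRAITS_LOCK = threading.Lock()

    def _sync_standard_traits(context):
        # placeholder: real code would insert any missing standard
        # traits from the os-traits package into the traits table
        pass

    def ensure_traits_synced(context):
        global _TRAITS_SYNCED
        if _TRAITS_SYNCED:          # fast path, no locking
            return
        with _TRAITS_LOCK:          # serialize the first callers
            if not _TRAITS_SYNCED:  # re-check under the lock
                _sync_standard_traits(context)
                _TRAITS_SYNCED = True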

There's also a small cleanup to the os-traits library:

https://review.openstack.org/#/c/469631/

## Shared Resource Providers

The stack that makes the database side of things start to work has
merged:


https://review.openstack.org/#/q/status:merged+topic:bp/shared-resources-pike

This will allow work on the API and resource-tracker/scheduler side
to move along.

## Docs

Lots of placement-related api docs in progress on a few different
topics:

* https://review.openstack.org/#/q/status:open+topic:cd/placement-api-ref
* https://review.openstack.org/#/q/status:open+topic:placement-api-ref-add-resource-classes-put
* https://review.openstack.org/#/q/status:open+topic:bp/placement-api-ref

We should a) probably get that stuff on the same topic, and b) make sure
work is not being duplicated.

## Nested Resource Providers

Work has resumed on nested resource providers.


https://review.openstack.org/#/q/status:open+topic:bp/nested-resource-providers

Currently having some good review discussion on data structures and
graph traversal and search. It's a bit like being back in school.

## User and Project IDs in Allocations

This will allow placement allocations to be considered when doing
resource accounting for things like quota. User id and project id
information is added to allocation records and a new API resource is
added to be able to get summaries of usage by user or project.

https://review.openstack.org/#/q/topic:bp/placement-project-user
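
(In HTTP terms the new resource looks roughly like the following; the
shape is my reading of the proposal and the values are invented, so
see the review for the real thing.)

    # Sketch: ask for usage totals for a project (and optionally a
    # user within it), e.g. GET /usages?project_id=<uuid>&user_id=<uuid>
    # and get back sums over that owner's allocations:
    usages_response = {
        "usages": {"VCPU": 4, "MEMORY_MB": 2048, "DISK_GB": 40},
    }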

# Other Code/Specs

* https://review.openstack.org/#/c/460147/
  Use DELETE inventories method in report client.

* https://review.openstack.org/#/c/427200/
  Add a status check for legacy filters in nova-status.

* https://review.openstack.org/#/c/454426/
  Handle new hosts for updating instance info in scheduler
  Currently in merge conflict.

* https://review.openstack.org/#/c/453916/
  Don't send instance updates from compute if not using filter
  scheduler

* https://review.openstack.org/#/q/project:openstack/osc-placement
  Work has started on an osc-plugin that can provide a command
  line interface to the placement API.
  It's quite likely that this code is going to need to be adopted by
  someone new.

* https://review.openstack.org/#/c/457636/
  Devstack change to install that plugin. This has two +2, but no
  +W.

* https://review.openstack.org/#/c/469037/
  Cleanups for _schedule_instances()

* https://review.openstack.org/#/c/469047/
  Update placement.rst to link to more specs

* https://review.openstack.org/#/c/469048/
  Provide more information about installing placement

* https://review.openstack.org/#/c/468928/
  Disambiguate resource provider conflict message

* https://review.openstack.org/#/c/468923/
  Adjust resource provider links by microversion

# End

I was unable to go digging for things as much as usual this week due
to other business. If I've missed something, my apologies, please
add it to the thread in a followup.

Your prize is some Cornish clotted cream.

--
Chris Dent

Re: [openstack-dev] [qa] [tc] [all] more tempest plugins (was Re: [tc] [all] TC Report 22)

2017-06-02 Thread Chris Dent

On Thu, 1 Jun 2017, Matthew Treinish wrote:


On Thu, Jun 01, 2017 at 11:09:56AM +0100, Chris Dent wrote:

A lot of this results, in part, from there being no single guiding
pattern and principle for how (and where) the tests are to be
managed.


It sounds like you want to write a general testing guide for openstack.
Have you started this effort anywhere? I don't think anyone would be opposed
to starting a document for that, it seems like a reasonable thing to have.
But, I think you'll find there is not a one size fits all solution though,
because every project has their own requirements and needs for testing.


No, I haven't made any decisions about what ought to happen. I'm
still trying to figure out if there is a problem, a suite of
problems, or everything is great. Knowing what the problems are
tends to be a reasonable thing to do before proposing or
implementing solutions, especially if we want those solutions to be
most correct.


So have you read the documentation:

https://docs.openstack.org/developer/tempest/ (or any of the other relevant
documentation)

and filed bugs about where you think there are gaps? This is something that
really bugs me sometimes (yes, the pun is intended): just like anything else,
this is all about iterative improvements. These broad trends are things
tempest (and hopefully every project) have been working on. But improvements
don't just magically occur overnight; it takes time to implement them.


This is a huge part of the collaboration issues I was identifying
in my previous message. Somebody says "there seems to be some
confusion here" and somebody else comes along and asks "have you
filed bugs?" or "have you proposed a solution?".

Well, "no" because like I said above I don't know what (or even _if_)
there's something to fix or the relevant foundations of the confusion.

I have some suspicions or concerns that the implicit hierarchy of
some tempest tests being in plugins and some not creates issues
with discovery, management and identification of responsible parties
and _may_ imply a lack of a "level playing field".

But:

* if other people don't have those concerns it's not worth
  pursuing
* until we reach some kind of shared understanding and agreement
  about the concerns, speculating about solutions is premature


Just compare the state of the documentation and tooling from 2 years ago (when
tempest started adding the plugin interface) to today. Things have steadily
improved over time and the situation now is much better. This will continue and
in the future things will get even better.


Yes, it's great. If you feel like I was suggesting otherwise, then
my apologies for not being clear. As a general rule tempest and
other QA tools have consistently done great work in terms of
documentation and tooling. That there are plugins at all is
fantastic; that we are having discussions about how to make the most
effective and fair use of them is a sign that they work.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [tc] [all] more tempest plugins (was Re: [tc] [all] TC Report 22)

2017-06-01 Thread Chris Dent

On Wed, 31 May 2017, Doug Hellmann wrote:

Yeah, it sounds like the current organization of the repo is not
ideal in terms of equal playing field for all of our project teams.
I would be fine with all of the interop tests being in a plugin
together, or of saying that the tempest repo should only contain
those tests and that others should move to their own plugins. If we're
going to reorganize all of that, we should decide what new structure we
want and work it into the goal.


I feel like the discussion about the interop tests has strayed this
conversation from the more general point about plugin "fairness" and
allowed the vagueness in plans for interop to control our thinking
and discussion about options in the bigger view.


This is pretty standard for an OpenStack conversation:

* introduce a general idea or critique
* someone latches on to one small aspect of that idea that presents
  some challenges, narrowing the context
* that latching and those challenges are used to kill the introspection
  that the general idea was pursuing, effectively killing any
  opportunities for learning and discovery that could lead to
  improvement or innovation

This _has_ to stop. We're at my three-year anniversary in the
community and this has been and still is my main concern with
how we collaborate. There is so much stop energy and chilling effect
in the way we discuss things in OpenStack. So much fatigue over
issues being brought up "over and over again" or things being
discussed without immediate solutions in mind. So what! Time moves
forward which means the context for issues is always changing.
Discussion is how we identify problems! Discussion is how we
get good solutions! 



It's clear from this thread and other conversations that the
management of tempest plugins is creating a multiplicity of issues
and confusions:

* Some projects are required to use plugins and some are not. This
  creates classes of projects.

* Those second class projects now need to move their plugins to
  other repos because rules.

* Projects with plugins need to put their tests in their new repos,
  except for some special tests which will be identified by a vague
  process.

* Review of changes is intermittent and hard to track because
  stakeholders need to think about multiple locations, without
  guidance.

* People who want to do validation with tempest need to gather stuff
  from a variety of locations.

* Tempest is a hammer used for lots of different nails, but the
  type of nail varies over time and with the whimsy of policy.

* Discussion of using something other than tempest for interop is
  curtailed by policy which appears to be based in "that's the way
  it is done".

A lot of this results, in part, from there being no single guiding
pattern and principle for how (and where) the tests are to be
managed. When there's a choice between one, some and all, "some" is
almost always the wrong way to manage something. "some" is how we do
tempest (and fair few other OpenStack things).

If it is the case that we want some projects to not put their tests
in the main tempest repo then the only conceivable pattern from a
memorability, discoverability, and equality standpoint is actually
for all the tests to be in plugins.

If that isn't possible (and it is clear there are many reasons why
that may be the case) then we need to be extra sure that we explore
and uncover the issues that the "some" approach presents and provide
sufficient documentation, tooling, and guidance to help people get
around them. And that we recognize and acknowledge the impact it has.

If the answer to that is "who is going to do that?" or "who has the
time?" then I ask you to ask yourself why we think the "non-core"
projects have time to fiddle about with tempest plugins?

And finally, I actually don't have too strong of a position in the
case of tempest and tempest plugins. What I take issue with is the
process whereby we discuss and decide these things and characterize
the various projects.

If I have any position on tempest at all it is that we should limit
it to gross cloud validation and maybe interop testing, and projects
should manage their own integration testing in tree using whatever
tooling they feel is most appropriate. If that turns out to be
tempest, cool.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] [tc] [all] more tempest plugins (was Re: [tc] [all] TC Report 22)

2017-05-31 Thread Chris Dent

On Wed, 31 May 2017, Graham Hayes wrote:

On 30/05/17 19:09, Doug Hellmann wrote:

Excerpts from Chris Dent's message of 2017-05-30 18:16:25 +0100:

Note that this goal only applies to tempest _plugins_. Projects
which have their tests in the core of tempest have nothing to do. I
wonder if it wouldn't be more fair for all projects to use plugins
for their tempest tests?


All projects may have plugins, but all projects with tests used by
the Interop WG (formerly DefCore) for trademark certification must
place at least those tests in the tempest repo, to be managed by
the QA team [1]. As new projects are added to those trademark
programs, the tests are supposed to move to the central repo to
ensure the additional review criteria are applied properly.


Thanks for the clarification, Doug. I don't think it changes the
main thrust of what I was trying to say (more below).


[1] 
https://governance.openstack.org/tc/resolutions/20160504-defcore-test-location.html


In the InterOp discussions in Boston, it was indicated that some people
on the QA team were not comfortable with "non core" projects (even in
the InterOp program) having tests in core tempest.

I do think that may be a bigger discussion though.


I'm not suggesting we change everything (because that would take a
lot of time and energy we probably don't have), but I had some
thoughts in reaction to this and sharing is caring:

The way in which the tempest _repo_ is a combination of smoke,
integration, validation and trademark enforcement testing is very
confusing to me. If we then lay on top of that the concept of "core"
and "not core" with regard to who is supposed to put their tests in
a plugin and who isn't (except when it is trademark related!) it all
gets quite bewildering.

The resolution above says: "the OpenStack community will benefit
from having the interoperability tests used by DefCore in a central
location". Findability is a good goal so this is a reasonable
assertion, but then the directive to lump those tests in with a
bunch of other stuff seems off if the goal is to make it "easier to
read and understand a set of tests".

If, instead, Tempest is a framework and all tests are in plugins
that each have their own repo then it is much easier to look for a
repo (if there is a common pattern) and know "these are the interop
tests for openstack" and "these are the integration tests for nova"
and even "these are the integration tests for the thing we are
currently describing as 'core'[1]".

An area where this probably falls down is with validation. How do
you know which plugins to assemble in order to validate this cloud
you've just built? Except that we already have this problem now that
we are requiring most projects to manage their tempest tests as
plugins. Does it become worse by everything being a plugin?
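
(For anyone who hasn't written one: a tempest plugin is mostly just an
entry point and a small hook class, along these lines. A generic
sketch; the hook names come from tempest's plugin interface, everything
else is invented.)

    # setup.cfg of the plugin package would carry something like:
    #
    #   [entry_points]
    #   tempest.test_plugins =
    #       my_service_tests = my_service_tempest_plugin.plugin:MyPlugin
    #
    # and the hook class tells tempest where the tests and config live:
    import os

    from tempest.test_discover import plugins

    class MyPlugin(plugins.TempestPlugin):
        def load_tests(self):
            base = os.path.dirname(os.path.abspath(__file__))
            return os.path.join(base, 'tests'), base

        def register_opts(self, conf):
            pass  # register service-specific config options, if any

        def get_opt_lists(self):
            return []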

[1] We really need a better name for this.
--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 22

2017-05-30 Thread Chris Dent
…a lot of
progress but there's more work to do. If you have new feedback,
please add it to the review.

[^9]: 
<https://docs.google.com/spreadsheets/d/1YzHPP2EQh2DZWGTj_VbhwhtsDQebAgqldyi1MHm6QpE>
[^10]: <https://review.openstack.org/#/c/453262/>

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] placement/resource providers update 24

2017-05-26 Thread Chris Dent
* https://review.openstack.org/#/q/project:openstack/osc-placement
  Work has started on an osc-plugin that can provide a command
  line interface to the placement API.
  It's quite likely that this code is going to need to be adopted by
  someone new.

* https://review.openstack.org/#/c/457636/
  Devstack change to install that plugin. This has two +2, but no
  +W.

* https://review.openstack.org/#/c/460147/
  Use DELETE inventories method in report client.

* https://review.openstack.org/#/c/460231/
   Use a specific error message for inventory in use, not just the db
   exception.

* https://review.openstack.org/#/c/458049/
   Add a test to ensure that placement microversions have no gaps
   when there is more than one handler for a URL. This ought to be a
   quick and easy merge. (A sketch of such a check is at the end of
   this list.)

* https://review.openstack.org/#/c/427200/
   Add a status check for legacy filters in nova-status.

* https://review.openstack.org/#/c/454426/
   Handle new hosts for updating instance info in scheduler
   Currently in merge conflict.

* https://review.openstack.org/#/c/453916/
   Don't send instance updates from compute if not using filter
   scheduler
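
As promised above, the gap check in the 458049 review amounts to
something like this sketch (mine, not the review's actual code):

    # Gather every microversion any handler declares and assert the
    # minor numbers are contiguous. Equal minors are fine (several
    # handlers can serve one version); a jump of more than one means
    # a version was skipped somewhere.
    def assert_no_version_gaps(versions):
        minors = sorted(minor for _major, minor in versions)
        for previous, current in zip(minors, minors[1:]):
            assert current - previous <= 1, (
                'gap between %d and %d' % (previous, current))

    assert_no_version_gaps([(1, 0), (1, 1), (1, 1), (1, 2)])  # passes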

# End

Your reward is a brief moment of respite from the despair of existence.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 21

2017-05-23 Thread Chris Dent


With the advent of Thierry's weekly status reports[^1] on the proposals
currently under review by the TC and the optionality of the weekly
TC meetings, this report becomes less about meeting minutes and more
about reporting on the things that crossed my TC radar that seemed
important and/or that seemed like they could do with more input.

This week has no TC meeting. The plan is that discussion will occur
either asynchronously in mailing list threads on the "openstack-dev"
list, in gerrit reviews in the governance project[^2], or, for casual
chats, in IRC on the #openstack-dev channel[^3].

[^1]: <http://lists.openstack.org/pipermail/openstack-dev/2017-May/117047.html>
[^2]: 
<https://review.openstack.org/#/q/project:openstack/governance+status:open>
[^3]: The concept of office hours is being introduced: 
<https://review.openstack.org/#/c/467256/>

# Pending Stuff

## The need to talk about postgreSQL

There's ongoing discussion about how to deal with the position of
postgreSQL in the attention of the community. There are deployments
that use it and the documentation mentions it, but the attention of
most developers and all tests is not upon it. It is upon MySQL (and
its variants) instead.

There's agreement that this needs to be dealt with, but the degree
of change is debated, if not hotly then at least verbosely. An
initial review was posted proposing we clear up the document and
indicate a path forward that recognized an existing MySQL
orientation:


<https://review.openstack.org/#/c/427880/>


I felt this was too wordy, too MySQL oriented, and left out an
important step: agitate with the board. It was easier to explain
this in an alternative version resulting in:


<https://review.openstack.org/#/c/465589/>


Meanwhile discussion had begun (and still continues) in an email
thread:


<http://lists.openstack.org/pipermail/openstack-dev/2017-May/116642.html>


Observing all this, Monty noticed that there is a philosophical
chasm that must be bridged before we can truly resolve this issue,
so he started yet another thread:


<http://lists.openstack.org/pipermail/openstack-dev/2017-May/117148.html>


The outcome of that thread and these resolutions is likely to have a
fairly significant impact on how we think about managing dependent
services in OpenStack. There's a lot to digest behind those links
but on the scale of "stuff the TC is doing that will have impact"
this is probably one of them.

## Draft Vision for the TC

The draft vision for the TC[^4] got feedback on the review, via
survey[^5] and at the forum[^6]. Effort is now in progress to
incorporate that feedback and create something that is easier to
comprehend and will make the actual vision more clear. One common
bit of feedback was that the document needs a preamble and other
structural cues so that people get what it is trying to do.
johnthetubaguy, dtroyer and I (cdent) are on the hook for doing this
next phase of work. Feel free to contact one of us (or leave a
comment on the review, or send some email) if you feel like you have
something to add.

[^4]: <https://review.openstack.org/#/c/453262/>
[^5]: 
<https://docs.google.com/spreadsheets/d/1YzHPP2EQh2DZWGTj_VbhwhtsDQebAgqldyi1MHm6QpE>
[^6]:
<https://www.openstack.org/videos/boston-2017/the-openstack-technical-committee-vision-for-2019-updates-stories-and-q-and-a>

# Dropped Stuff

_A section with reminders of things that were happening or were
going to happen then either stopped without resolution or never
started in the first place._

## OpenStack moving too fast and too slow

A thread was started on this[^7]. It got huge. While there were many
subtopics, one of the larger ones was the desire for there to be a
long term support release. There were a few different reactions to
this, inaccurately paraphrased as:

* That we have any stable releases at all in the upstream is pretty
  amazing, some global projects don't bother, it's usually a
  downstream problem.
* Great idea, please provide some of the resources required to make
  it happen, the OpenStack community is not an unlimited supply of
  free labor.

Then summit happened, people moved on to other things and there
wasn't much in the way of resolution. Is there anything we could or
should be doing here?

If having LTS is that much of a big deal, then it is something which
the Foundation Board of Directors must be convinced is a priority.
Early in this process I had suggested we at least write a resolution
that repeats (in nicer form) the second bullet point above. We could
do that. There's also a new plan to create a top 5 help wanted
list[^8]. Doing LTS is probably too big for that, but "stable branch
reviews" is not.

[^7]: <http://lists.openstack.org/pipermail/openstack-dev/2017-May/116298.html>
[^8]: <https://review.openstack.org/#/c/466684/>

--
Chris Dent 

Re: [openstack-dev] [tc] Active or passive role with our database layer

2017-05-23 Thread Chris Dent

On Tue, 23 May 2017, Jay Pipes wrote:

Err, in my experience, having a *completely* dumb persistence layer -- i.e. 
one that tries to assuage the differences between, say, relational and 
non-relational stores -- is a recipe for disaster. The developer just ends up 
writing join constructs in that business layer instead of using a relational 
data store the way it is intended to be used. Same for aggregate operations. 
[1]


Now, if what you're referring to is "don't use vendor-specific extensions in 
your persistence layer", then yes, I agree with you.


If you've committed to doing an RDBMS then, yeah, stick with
relational, but dumb relational. Since that's where we are [3] in
OpenStack, we should go with that.

[3] Of course sometimes I'm sad that we made that commitment and
wish instead we had an abstract storage interface, an implementation
of which was stupid text files on disk, another which was generic
sqlalchemy, and another which was raw SQL extracted wholesale from
the mind of jaypipes, optimized for Drizzle 8.x. But then I'm often
sad about completely unrealistic things.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] uWSGI help for Congress

2017-05-23 Thread Chris Dent

On Mon, 22 May 2017, Eric K wrote:


If someone out there knows uWSGI and has a couple of spare cycles to help
the Congress project, we'd super appreciate it.

The regular contributors to Congress don't have experience with uWSGI and
could definitely use some help getting started with this goal. Thanks a ton!


Is the issue that you need to get WSGI working at all (that is, need to
create a WSGI app for running the api service), or that existing WSGI
tooling, made to work with mod_wsgi, needs to be adapted to work
with uwsgi? In either case, if you're able to point me at existing
api service code I might be able to provide some pointers.

In the meantime some potentially useful links:

* some notes I took on switching nova and devstack over to uwsgi:

https://etherpad.openstack.org/p/devstack-uwsgi

* devstack code for nova+uwsgi

https://review.openstack.org/#/c/457715/

* rewrite of nova's wsgi application to start up properly

https://review.openstack.org/#/c/457283/

This last one might be most useful as it looks like congress is
using an api startup model (for the non-WSGI case) similar to
nova's.
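
(If it's the former, the target is small: a module exposing a WSGI
callable that uwsgi can load. A bare sketch with a placeholder app,
not Congress code; real code would build the project's router/paste
app once at import time.)

    # wsgi.py -- uwsgi (or mod_wsgi) imports this module and calls the
    # module-level 'application' for each request.
    def application(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'api placeholder']

    # then, for example:  uwsgi --http :8080 --wsgi-file wsgi.py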


--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Active or passive role with our database layer

2017-05-23 Thread Chris Dent

On Tue, 23 May 2017, Sean Dague wrote:


Do you have an example of an Open Source project that (after it was
widely deployed) replaced their core storage engine for their existing
users?


That's not the point here. The point is that new deployments may
choose to use a different one and old ones can choose to change if
they like (but don't have to) if storage is abstracted.

The notion of a "core storage engine" is not something that I see as
currently existing in OpenStack. It is clear it is something that at
least you and likely several other people would like to see.

But it is most definitely not something we have now and as I
responded to Monty, getting there from where we are now would be a
huge undertaking with as yet unproven value [1].


I do get that when building more targeted things, this might be a value,
but I don't see that as a useful design constraint for OpenStack.


Completely the opposite from my point of view. When something is as
frameworky as OpenStack is (perhaps accidentally and probably
unfortunately) then _of course_ replaceable DBs are the norm,
expected, useful and potentially required to satisfy more use cases.

Adding specialization (tier 1?) is probably something we want and
want to encourage but it is not something we should build into the
"core" of the "product".

But there's that philosophical disagreement again. I'm not sure we
can resolve that. What I'm hoping is that by starting the ball
rolling other people will join in and people like you and me can
step out of the way.

[1] Of the issues described elsewhere in the thread the only one
which seems to be a bit of a sticking point is the trigger thing, and
there's significant disagreement on that being "okay".

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] revised Postgresql deprecation patch for governance

2017-05-23 Thread Chris Dent

On Mon, 22 May 2017, Sean Dague wrote:


This feels like what a Tier 2 support looks like. A basic SQLA and pray
so that if you live behind SQLA you are probably fine (though not
tested), and then test and advanced feature roll out on a single
platform. Any of that work might port to other platforms over time, but
we don't want to make that table stakes for enhancements.


I've often wondered why what's being called "Tier 1" (advanced
features) here isn't something done downstream of "generic"
OpenStack.

Which is not to say it would have to be closed source or vendor
oriented. Simply not here. It may be we've got enough to deal with
here.

The 'external' model described by Monty makes things that are not
here easier to manage (but, to be fair, not necessarily easier to
make).

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Active or passive role with our database layer

2017-05-23 Thread Chris Dent
…without trying that hard
and great (but also various) results with a bit of effort.

So that means it ought to be possible to do enough OpenStack to
think it is cool with whatever database I happen to have handy. And
then once I dig it I should be able to manage it effectively using
the solutions that are best for my environment.

Finally, this is focused on the database layer but similar questions arise in 
other places. What is our philosophy on prescriptive/active choices on our 
part coupled with automated action and ease of operation vs. expanded choices 
for the deployer at the expense of configuration and operational complexity. 
For now let's see if we can answer it for databases, and see where that gets 
us.


I continue to think that this issue is somewhat special at the
persistence layer because of the balance of who it impacts the most:
the deployers, developers, and distributors more than the users[2].
Making global conclusions about external and active based on this
issue may be premature.


Thanks for reading.


Thanks for writing. You've done a lot of writing lately. Is good.

[1] 
http://lists.openstack.org/pipermail/openstack-operators/2017-May/013464.html

[2] That our database choices impacts the users (e.g., the case and encoding
things at the API layer) is simply a mistake that we all made together, a
bug to be fixed, not an architectural artifact.
--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Keystone] Cockroachdb for Keystone Multi-master

2017-05-19 Thread Chris Dent

On Fri, 19 May 2017, Adrian Turjak wrote:

On 19 May 2017 11:43 am, Curtis <serverasc...@gmail.com> wrote:
  I had thought that the OpenStack community was deprecating Postgres
  support though, so that could make things a bit harder here (I might
  be wrong about this).

I really hope not, because that will take Cockroachdb off the table entirely 
(unless they add MySQL support) and it may prove to be a
great option overall once it is known to be stable and has been tested in 
larger scale setups.

I remember reading about the possibility of deprecating Postgres but there are 
people using it in production so I assumed we didn't go
down that path. Would be good to have someone confirm.


Deprecating postgreSQL is not a done deal; it's up for review at
[1] and [2]. And at this point it is more about documenting the reality
that postgreSQL is not a focus of upstream development.

Deprecation is likely to happen, however, if there isn't an increase
in the number of people willing to:

* actively share pg knowledge in the OpenStack community
* help with ensuring there is gate testing and responsiveness to
  failures
* address some of the mysql-oriented issues listed in [1]

I'd rather not see that happen, especially as keeping postgreSQL
support could allow an easy step to using cockroachdb. So I'd
encourage you (and anyone else) to participate in those reviews,
especially if you are able to make some commitments about future
involvement.

[1] https://review.openstack.org/#/c/427880/
[2] https://review.openstack.org/#/c/465589/

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] placement/resource providers update 23

2017-05-19 Thread Chris Dent
…waiting until all of that settles:


https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nested-resource-providers

## Ironic/Custom Resource Classes

(This section has not changed since last week)

There's a blueprint for "custom resource classes in flavors" that
describes the stuff that will actually make use of custom resource
classes:

   
https://blueprints.launchpad.net/nova/+spec/custom-resource-classes-in-flavors

The spec has merged, but the implementation has not yet started.

Over in Ironic some functional and integration tests have started:

   https://review.openstack.org/#/c/443628/

There's also a spec in progress discussing ways to filter baremetal
nodes by tenant/project:

   https://review.openstack.org/#/c/415512/

# Other Code/Specs

* https://review.openstack.org/#/q/project:openstack/osc-placement
  Work has started on an osc-plugin that can provide a command
  line interface to the placement API.
  It's quite likely that this code is going to need to be adopted by
  someone new.

* https://review.openstack.org/#/c/457636/
  Devstack change to install that plugin

* 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:get-inventory
  Clean up the interface for getting inventory information from
  virt drivers.

* https://review.openstack.org/#/c/460147/
  Use DELETE inventories method in report client.  This is proving
  somewhat more complicated than initially expected. DELETE of
  inventories doesn't give us a new generation for the associated
  resource provider.

* https://review.openstack.org/#/c/460231/
  Use a specific error message for inventory in use, not just the db
  exception. Jay has identified that this too is somewhat more
  complicated than initially expected, because for the time being
  the message is being parsed client-side.

* https://review.openstack.org/#/c/458049/
  Add a test to ensure that placement microversions have no gaps
  when there is more than one handler for a URL. This ought to be a
  quick and easy merge.

* https://review.openstack.org/#/c/427200/
  Add a status check for legacy filters in nova-status.

* https://review.openstack.org/#/c/461494/
  Start the removal of the can_host column from the resource
  providers database table. We're no longer going to use it.  A
  trait will indicate shared providers.

* https://review.openstack.org/#/c/454426/
  Handle new hosts for updating instance info in scheduler
  Currently in merge conflict.

* https://review.openstack.org/#/c/453916/
  Don't send instance updates from compute if not using filter
  scheduler

I've left a few items off the list that have not received attention
for a few months. If they are truly important they will rise back up
again like zombies from the grave.

# End

Your reward is zombie protection spray.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Boston Forum session recap - claims in the scheduler (or conductor)

2017-05-19 Thread Chris Dent

On Thu, 18 May 2017, Matt Riedemann wrote:

We didn't really get into this during the forum session, but there are 
different opinions within the nova dev team on how to do claims in the 
controller services (conductor vs scheduler). Sylvain Bauza has a series 
which uses the conductor service, and Ed Leafe has a series using the 
scheduler. More on that in the mailing list [3].


Since we've got multiple threads going on this topic, I put some
of my concerns in a comment on one of Ed's reviews:

https://review.openstack.org/#/c/465171/3//COMMIT_MSG@30

It's a bit left fieldy but tries to ask about some of the long term
concerns we may need to be thinking about here, with regard to other
services using placement and maybe them needing a
scheduler-like-thing too (because placement cannot do everything).

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Is the pendulum swinging on PaaS layers?

2017-05-19 Thread Chris Dent

On Fri, 19 May 2017, Duncan Thomas wrote:


On 19 May 2017 at 12:24, Sean Dague <s...@dague.net> wrote:


I do get the concerns of extra logic in Nova, but the decision to break
up the working compute with network and storage problem space across 3
services and APIs doesn't mean we shouldn't still make it easy to
express some pretty basic and common intents.


Given that we've similar needs for retries and race avoidance in and
between glance, nova, cinder and neutron, and a need to orchestrate
between at least these three (arguably other infrastructure projects
too, I'm not trying to get into specifics), maybe the answer is to put
that logic in a new service, that talks to those four, and provides a
nice simple API, while allowing the cinder, nova etc APIs to remove
things like internal retries?


This is what enamel was going to be, but we got stalled out because
of lack of resources and the usual raft of other commitments:

https://github.com/jaypipes/enamel

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-wg/news

2017-05-18 Thread Chris Dent


Greetings OpenStack community,

A short meeting today, mostly reflecting on the Birds of a Feather session [4] 
at Summit last week. It was well attended and engendered plenty of good 
discussion. There are notes on an etherpad at 
https://etherpad.openstack.org/p/BOS-API-WG-BOF that continue to be digested. 
One of the main takeaways was that the group should work with people creating
documentation (api-ref and otherwise) to encourage linking from those documents 
to the guidelines [2]. This will help to explain why some things are the way 
they are (for example microversions) and also highlight a path whereby people 
can contribute to improving or clarifying the guidelines.

Working on that linking will be an ongoing effort. In the meantime the primary 
action for the group (and anyone else interested in API consistency) is to 
review Monty's efforts to document client side interactions with the service 
catalog and version discovery (linked below).

# Newly Published Guidelines

Nothing new at this time.

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None at this time but please check out the reviews below.

# Guidelines Currently Under Review [3]

* Microversions: add next_min_version field in version body
  https://review.openstack.org/#/c/446138/

* A suite of several documents about using the service catalog and doing 
version discovery
  Start at https://review.openstack.org/#/c/462814/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API WG, please address your concerns in 
an email to the OpenStack developer mailing list[1] with the tag "[api]" in the 
subject. In your email, you should include any relevant reviews, links, and comments to 
help guide the discussion of the specific challenge you are facing.

To learn more about the API WG mission and the work we do, see OpenStack API 
Working Group [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18679/api-working-group-update-and-bof

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_wg/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 20

2017-05-17 Thread Chris Dent

On Tue, 16 May 2017, Chris Dent wrote:


Elsewhere I'll create a more complete write up of the Foundation board
meeting that happened on the Sunday before summit, but some comments
from that feel relevant to the purpose of these reports:


Here's the rough notes from the board meeting:

https://anticdent.org/openstack-pike-board-meeting-notes.html

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon][api][docs] Feedback requested on proposed formatting change to API docs

2017-05-17 Thread Chris Dent

On Tue, 16 May 2017, Monty Taylor wrote:


The questions:

- Does this help, hurt, no difference?
- servers[].name - servers is a list, containing objects with a name field. 
Good or bad?
- servers[].addresses.$network-name - addresses is an object and the keys of 
the object are the name of the network in question.


I sympathize with the motivation, but for me these don't help: they
add noise (more symbols) and require me to understand yet more syntax.

This is probably because I tend to look at the representations of the
request or response, see a key name and wonder "what is this?" and
then look for it in the table, not the other way round. Thus I want
the key name to be visually greppable without extra goo.
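
(To be fair to the proposal, an example of what the notation points
into; a hand-rolled fragment with invented values, not taken from the
actual api-ref:)

    # servers[].name and servers[].addresses.$network-name would refer
    # into a response shaped like this:
    response = {
        "servers": [
            {
                "name": "vm-1",
                "addresses": {
                    "private-net": [{"addr": "10.0.0.4"}],
                },
            },
        ],
    }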

I suspect, however, that I'm not representative of the important
audience and feedback from people who are "real users" should be
prioritized way higher than mine.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][infra][release][security][stable][kolla][loci][tripleo][docker][kubernetes] do we want to be publishing binary container images?

2017-05-17 Thread Chris Dent

On Wed, 17 May 2017, Thierry Carrez wrote:


Back to container image world, if we refresh those images daily and they
are not versioned or archived (basically you can only use the latest and
can't really access past dailies), I think we'd be in a similar situation?


Yes, this.

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

