[openstack-dev] [nova] Rocky RC time regression analysis

2018-10-05 Thread melanie witt

Hey everyone,

During our Rocky retrospective discussion at the PTG [1], we talked 
about the spec freeze deadline (milestone 2; historically it had been 
milestone 1) and whether or not it was related to the hectic, 
late-breaking-regression RC period we had last cycle. I had an action 
item to go through the list of RC time bugs [2] and dig into each one, 
examining when the patch that introduced the bug landed vs. when the 
bug was reported and why it wasn't caught sooner, and to report back 
so we can take a look together and determine whether the regressions 
were related to the spec freeze deadline.


I used this etherpad to make notes [3], which I will [mostly] copy-paste 
here. These are all bugs reported after RC1; I'll paste them in 
chronological order of when each bug was reported.


Milestone 1 (r-1) was 2018-04-19.
Spec freeze was at milestone 2 (r-2), 2018-06-07.
Feature freeze (FF) was on 2018-07-26.
RC1 was on 2018-08-09.

1) Broken live migration bandwidth minimum => maximum based on neutron 
event https://bugs.launchpad.net/nova/+bug/1786346


- Bug was reported on 2018-08-09, the day of RC1
- The patch that caused the regression landed on 2018-03-30 
https://review.openstack.org/497457

- Unrelated to a blueprint, the regression was part of a bug fix
- Was found because prometheanfire was doing live migrations and noticed 
they seemed to be stuck at 1MiB/s for linuxbridge VMs

- The bug was due to a race, so the gate didn't hit it
- Comment on the regression bug from dansmith: "The few hacked up gate 
jobs we used to test this feature at merge time likely didn't notice the 
race because the migrations finished before the potential timeout and/or 
are on systems so loaded that the neutron event came late enough for us 
to win the race repeatedly."


2) Docs for the zvm driver missing

- All zvm driver code changes were merged by 2018-07-17, but the 
documentation was overlooked and only noticed near RC time

- Blueprint was approved on 2018-02-12

3) Volume status remains "detaching" after a failure to detach a volume 
due to DeviceDetachFailed https://bugs.launchpad.net/nova/+bug/1786318


- Bug was reported on 2018-08-09, the day of RC1
- The change that introduced the regression landed on 2018-02-21 
https://review.openstack.org/546423

- Unrelated to a blueprint, the regression was part of a bug fix
- Question: why wasn't this caught earlier?
- Answer: Unit tests were not asserting the call to the roll_detaching 
volume API. Coverage has since been added along with the bug fix 
https://review.openstack.org/590439
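The missing coverage was of the "assert the cleanup call happened" variety. As a hedged sketch of what such a test looks like (toy code with hypothetical names, not nova's actual detach path):

```python
from unittest import mock


class DeviceDetachFailed(Exception):
    pass


def detach_volume(volume_api, instance, volume_id, disconnect):
    """Toy detach flow: if disconnect fails, roll the volume back out
    of the 'detaching' state before re-raising."""
    try:
        disconnect(instance, volume_id)
    except DeviceDetachFailed:
        # This is the rollback the regression dropped; without it the
        # volume stayed stuck in "detaching".
        volume_api.roll_detaching(volume_id)
        raise


def test_detach_failure_rolls_back():
    volume_api = mock.Mock()
    disconnect = mock.Mock(side_effect=DeviceDetachFailed())
    try:
        detach_volume(volume_api, "inst-1", "vol-1", disconnect)
    except DeviceDetachFailed:
        pass
    # The assertion the original unit tests lacked:
    volume_api.roll_detaching.assert_called_once_with("vol-1")


test_detach_failure_rolls_back()
```

The point is the last assertion: a test that only checks "detach raised" passes with or without the rollback, which is how the regression slipped through.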


4) OVB overcloud deploy fails on nova placement errors 
https://bugs.launchpad.net/nova/+bug/1787910


- Bug was reported on 2018-08-20
- Change that caused the regression landed on 2018-07-26, FF day 
https://review.openstack.org/517921

- Blueprint was approved on 2018-05-16
- Was found because of a failure in the 
legacy-periodic-tripleo-ci-centos-7-ovb-3ctlr_1comp-featureset001-master 
CI job. The ironic-inspector CI upstream also failed because of this, as 
noted by dtantsur.
- Question: why did it take nearly a month for the failure to be 
noticed? Is there any way we can cover this in our 
ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa job?


5) when live migration fails due to an internal error, rollback is not 
handled correctly https://bugs.launchpad.net/nova/+bug/1788014


- Bug was reported on 2018-08-20
- The change that caused the regression landed on 2018-07-26, FF day 
https://review.openstack.org/434870

- Unrelated to a blueprint, the regression was part of a bug fix
- Was found because sean-k-mooney was doing live migrations and found 
that when a live migration failed because of a QEMU internal error, the 
VM remained ACTIVE but no longer had network connectivity.

- Question: why wasn't this caught earlier?
- Answer: We would need a live migration job scenario that intentionally 
initiates and fails a live migration, then verify network connectivity 
after the rollback occurs.

- Question: can we add something like that?
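The shape of that verification can be sketched in miniature (purely illustrative toy code, not nova's migration path or a real tempest test): intentionally fail a migration, then assert both the power state and connectivity after the rollback.

```python
class FakeGuest:
    """Minimal stand-in for a VM: tracks state and whether its
    network is plugged."""
    def __init__(self):
        self.state = "ACTIVE"
        self.network_plugged = True


def live_migrate(guest, fail=False):
    """Toy live migration: networking is torn down for the move, and a
    correct rollback must plug it back in. The regression left the
    guest ACTIVE but unplugged after a failed migration."""
    guest.network_plugged = False
    if fail:
        guest.network_plugged = True  # the rollback step that was broken
        return False
    return True


guest = FakeGuest()
assert live_migrate(guest, fail=True) is False
# The scenario check proposed above: after rollback the VM must be
# ACTIVE *and* reachable, not just ACTIVE.
assert guest.state == "ACTIVE" and guest.network_plugged
```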

6) nova-manage db online_data_migrations hangs on instances with no host 
set https://bugs.launchpad.net/nova/+bug/1788115


- Bug was reported on 2018-08-21
- The patch that introduced the bug landed on 2018-05-30 
https://review.openstack.org/567878

- Unrelated to a blueprint, the regression was part of a bug fix
- Question: why wasn't this caught earlier?
- Answer: To hit the bug, you had to have had instances with no host set 
(ones that failed to schedule) in your database during an upgrade. This 
does not happen during the grenade job.
- Question: could we add anything to the grenade job that would leave 
some instances with no host set to cover cases like this?
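The failure mode is easiest to see in a simplified reconstruction of the control loop (illustrative Python, not nova's actual code; the names and the max_passes safety guard are mine): the command loops until a full pass finds zero rows, so a migration that keeps "finding" rows it can never process never lets the loop exit.

```python
def run_migrations(migrations, batch_size=50, max_passes=100):
    """Run every migration in batches until a full pass reports
    nothing left to find. (The real loop had no max_passes guard,
    which is why the bug manifested as a hang.)"""
    for passes in range(1, max_passes + 1):
        total_found = 0
        for migrate in migrations:
            found, done = migrate(batch_size)
            total_found += found
        if total_found == 0:
            return passes
    raise RuntimeError("no forward progress -- would hang")


def stuck_migration(limit):
    # The shape of bug 1788115: instances with no host set are
    # "found" on every pass but can never be migrated (done == 0).
    return 10, 0


remaining = 120

def healthy_migration(limit):
    # A well-behaved migration drains its backlog batch by batch.
    global remaining
    done = min(remaining, limit)
    remaining -= done
    return done, done
```

With healthy_migration the loop drains 120 rows in three batches and exits on the fourth, empty pass; with stuck_migration it spins forever.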


7) release notes erroneously say that nova-consoleauth doesn't have to 
run in Rocky https://bugs.launchpad.net/nova/+bug/1788470


- Bug was reported on 2018-08-22
- The patches that conveyed the wrong information for the docs landed on 
2018-05-07 https://review.openstack.org/565367

- Blueprint was 

Re: [openstack-dev] [tc] bringing back formal TC meetings

2018-10-05 Thread Jean-Philippe Evrard
On Fri, 2018-10-05 at 07:40 -0400, Doug Hellmann wrote:
> Chris Dent  writes:
> 
> > On Thu, 4 Oct 2018, Doug Hellmann wrote:
> > 
> > > TC members, please reply to this thread and indicate if you would
> > > find
> > > meeting at 1300 UTC on the first Thursday of every month
> > > acceptable, and
> > > of course include any other comments you might have (including
> > > alternate
> > > times).
> > 
> > +1
> > 
> > Also, if we're going to set aside a time for a semi-formal meeting,
> > I
> > hope we will have some form of agenda and minutes, with a fairly
> > clear process for setting that agenda as well as a process for
> 
> I had in mind "email the chair your topic suggestion" and then "the
> chair emails the agenda to openstack-dev tagged [tc] a bit in advance
> of
> the meeting". There would also probably be some standing topics, like
> updates for ongoing projects.
> 
> Does that work for everyone?
> 
> 

Fine for me


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement] update 18-40

2018-10-05 Thread melanie witt

On Fri, 5 Oct 2018 14:31:05 +0100 (BST), Chris Dent wrote:

* Propose counting quota usage from placement and API database (a bit 
out of date but may be worth resurrecting)


I'd like to resurrect this spec but it really depends on being able to 
ask for usage scoped only to a particular instance of Nova (GET /usages 
for NovaA vs GET /usages for NovaB). From what I understand, we don't 
have a concrete plan for being able to differentiate ownership of 
allocations yet.


Until then, we will be using a policy-based switch to control the quota 
behavior in the event of down/slow cells in a multi-cell deployment 
(fail build if project has servers in down/slow cells vs allow 
potentially violating quota limits if project has servers in down/slow 
cells). So, being able to leverage the placement API for /usages is not 
considered critical, since we have an interim plan.


-melanie






Re: [openstack-dev] [manila] [infra] remove driverfixes/ocata branch [was: Re: [cinder][infra] Remove driverfixes/ocata branch]

2018-10-05 Thread Tom Barron

On 05/10/18 13:06 -0700, Clark Boylan wrote:

On Fri, Oct 5, 2018, at 12:44 PM, Tom Barron wrote:

Clark, would you be so kind, at your convenience, as to remove the
manila driverfixes/ocata branch?

There are no open changes on the branch and `git log
origin/driverfixes/ocata ^origin/stable/ocata --no-merges --oneline`
reveals no commits that we need to preserve.

Thanks much!



Done. The old head of that branch was d9c0f8fa4b15a595ed46950b6e5b5d1b4514a7e4.

Clark


Awesome, and thanks again!

-- Tom




Re: [openstack-dev] [manila] [infra] remove driverfixes/ocata branch [was: Re: [cinder][infra] Remove driverfixes/ocata branch]

2018-10-05 Thread Clark Boylan
On Fri, Oct 5, 2018, at 12:44 PM, Tom Barron wrote:
> Clark, would you be so kind, at your convenience, as to remove the 
> manila driverfixes/ocata branch?
> 
> There are no open changes on the branch and `git log 
> origin/driverfixes/ocata ^origin/stable/ocata --no-merges --oneline` 
> reveals no commits that we need to preserve.
> 
> Thanks much!
> 

Done. The old head of that branch was d9c0f8fa4b15a595ed46950b6e5b5d1b4514a7e4.

Clark



[openstack-dev] [manila] [infra] remove driverfixes/ocata branch [was: Re: [cinder][infra] Remove driverfixes/ocata branch]

2018-10-05 Thread Tom Barron
Clark, would you be so kind, at your convenience, as to remove the 
manila driverfixes/ocata branch?


There are no open changes on the branch and `git log 
origin/driverfixes/ocata ^origin/stable/ocata --no-merges --oneline` 
reveals no commits that we need to preserve.


Thanks much!

-- Tom Barron (tbarron)

On 17/09/18 08:36 -0700, Clark Boylan wrote:

On Mon, Sep 17, 2018, at 8:00 AM, Sean McGinnis wrote:

Hello Cinder and Infra teams. Cinder needs some help from infra or some
pointers on how to proceed.

tl;dr - The openstack/cinder repo had a driverfixes/ocata branch created for
fixes that no longer met the more restrictive phase II stable policy criteria.
Extended maintenance has changed that and we want to delete driverfixes/ocata
to make sure patches are going to the right place.

Background
--
Before the extended maintenance changes, the Cinder team found a lot of vendors
were maintaining their own forks to keep backported driver fixes that we were
not allowing upstream due to the stable policy being more restrictive for older
(or deleted) branches. We created the driverfixes/* branches as a central place
for these to go so distros would have one place to grab these fixes, if they
chose to do so.

This has worked great IMO, and we do occasionally still have things that need
to go to driverfixes/mitaka and driverfixes/newton. We had also pushed a lot of
fixes to driverfixes/ocata, but with the changes to stable policy with extended
maintenance, that is no longer needed.

Extended Maintenance Changes

With things being somewhat relaxed with the extended maintenance changes, we
are now able to backport bug fixes to stable/ocata that we couldn't before and
we don't have to worry as much about that branch being deleted.

I had gone through and identified all patches backported to driverfixes/ocata
but not stable/ocata and cherry-picked them over to get the two branches in
sync. The stable/ocata branch should now be identical to or ahead of
driverfixes/ocata, and we want to make sure nothing more gets accidentally
merged to driverfixes/ocata instead of the official stable branch.

Plan

We would now like to have the driverfixes/ocata branch deleted so there is no
confusion about where backports should go and we don't accidentally get these
out of sync again.

Infra team, please delete this branch or let me know if there is a process
somewhere I should follow to have this removed.


The first step is to make sure that all changes on the branch are in a 
non-open state (merged or abandoned). 
https://review.openstack.org/#/q/project:openstack/cinder+branch:driverfixes/ocata+status:open
 shows that there are no open changes.

Next you will want to make sure that the commits on this branch are preserved 
somehow. Git garbage collection will delete and cleanup commits if they are not 
discoverable when working backward from some ref. This is why our old stable 
branch deletion process required we tag the stable branch as $release-eol 
first. Looking at `git log origin/driverfixes/ocata ^origin/stable/ocata 
--no-merges --oneline` there are quite a few commits on the driverfixes branch 
that are not on the stable branch, but that appears to be due to cherry pick 
writing new commits. You have indicated above that you believe the two branches 
are in sync at this point. A quick sampling of commits seems to confirm this as 
well.

If you can go ahead and confirm that you are ready to delete the 
driverfixes/ocata branch I will go ahead and remove it.

Clark





Re: [openstack-dev] [goals][python3][heat][manila][qinling][zaqar][magnum][keystone][congress] switching python package jobs

2018-10-05 Thread Doug Hellmann
Doug Hellmann  writes:

> Doug Hellmann  writes:
>
>> Doug Hellmann  writes:
>>
>>> I think we are ready to go ahead and switch all of the python packaging
>>> jobs to the new set defined in the publish-to-pypi-python3 template
>>> [1]. We still have some cleanup patches for projects that have not
>>> completed their zuul migration, but there are only a few and rebasing
>>> those will be easy enough.
>>>
>>> The template adds a new check job that runs when any files related to
>>> packaging are changed (readme, setup, etc.). Otherwise it switches from
>>> the python2-based PyPI job to use python3.
>>>
>>> I have the patch to switch all official projects ready in [2].
>>>
>>> Doug
>>>
>>> [1] 
>>> http://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/zuul.d/project-templates.yaml#n218
>>> [2] https://review.openstack.org/#/c/598323/
>>
>> This change is now in place. The Ironic team discovered one issue, and
>> the fix is proposed as https://review.openstack.org/606152
>>
>> This change has also reopened the question of how to publish some of the
>> projects for which we do not own names on PyPI.
>>
>> I registered manila, qinling, and zaqar-ui by uploading Rocky series
>> releases of those projects and then added openstackci as an owner so we
>> can upload new packages this cycle.
>>
>> I asked the owners of the name "heat" to allow us to use it, and they
>> rejected the request. So, I proposed a change to heat to update the
>> sdist name to "openstack-heat".
>>
>> * https://review.openstack.org/606160
>>
>> We don't own "magnum" but there is already an "openstack-magnum" set up
>> with old releases, so I have proposed a change to the magnum repo to
>> change the dist name there, so we can resume using it.
>>
>> * https://review.openstack.org/606162
>
> The owner of the name "magnum" has given us access, so I have set it up
> with permission for the CI system to publish and I have abandoned the
> rename patch.
>
>> I have filed requests with the maintainers of PyPI to claim the names
>> "keystone" and "congress". That may take some time. Please let me know
>> if you're willing to simply use "openstack-keystone" and
>> "openstack-congress" instead. I will take care of configuring PyPI and
>> proposing the patch to update your setup.cfg (that way you can approve
>> the change).
>>
>> * https://github.com/pypa/warehouse/issues/4770
>> * https://github.com/pypa/warehouse/issues/4771

We haven't heard back about either of these requests, so I filed changes
with congress and keystone to change the dist names.

* https://review.openstack.org/608332 (congress)
* https://review.openstack.org/608331 (keystone)

Doug



Re: [openstack-dev] [goals][python3][heat][manila][qinling][zaqar][magnum][keystone][congress] switching python package jobs

2018-10-05 Thread Doug Hellmann
Doug Hellmann  writes:

> Doug Hellmann  writes:
>
>> I think we are ready to go ahead and switch all of the python packaging
>> jobs to the new set defined in the publish-to-pypi-python3 template
>> [1]. We still have some cleanup patches for projects that have not
>> completed their zuul migration, but there are only a few and rebasing
>> those will be easy enough.
>>
>> The template adds a new check job that runs when any files related to
>> packaging are changed (readme, setup, etc.). Otherwise it switches from
>> the python2-based PyPI job to use python3.
>>
>> I have the patch to switch all official projects ready in [2].
>>
>> Doug
>>
>> [1] 
>> http://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/zuul.d/project-templates.yaml#n218
>> [2] https://review.openstack.org/#/c/598323/
>
> This change is now in place. The Ironic team discovered one issue, and
> the fix is proposed as https://review.openstack.org/606152
>
> This change has also reopened the question of how to publish some of the
> projects for which we do not own names on PyPI.
>
> I registered manila, qinling, and zaqar-ui by uploading Rocky series
> releases of those projects and then added openstackci as an owner so we
> can upload new packages this cycle.
>
> I asked the owners of the name "heat" to allow us to use it, and they
> rejected the request. So, I proposed a change to heat to update the
> sdist name to "openstack-heat".
>
> * https://review.openstack.org/606160
>
> We don't own "magnum" but there is already an "openstack-magnum" set up
> with old releases, so I have proposed a change to the magnum repo to
> change the dist name there, so we can resume using it.
>
> * https://review.openstack.org/606162

The owner of the name "magnum" has given us access, so I have set it up
with permission for the CI system to publish and I have abandoned the
rename patch.

> I have filed requests with the maintainers of PyPI to claim the names
> "keystone" and "congress". That may take some time. Please let me know
> if you're willing to simply use "openstack-keystone" and
> "openstack-congress" instead. I will take care of configuring PyPI and
> proposing the patch to update your setup.cfg (that way you can approve
> the change).
>
> * https://github.com/pypa/warehouse/issues/4770
> * https://github.com/pypa/warehouse/issues/4771
>
> Doug
>



[openstack-dev] [cinder][qa] Enabling online volume_extend tests by default

2018-10-05 Thread Erlon Cruz
Hey folks,

Following up on the discussions that we had at the Denver PTG, the Cinder
team is planning to enable the online volume_extend tests[1] by default.
Currently, those tests are only run by CI systems and infra jobs that
explicitly enable them.

We are also adding a negative test and an associated option in tempest[2]
so that vendor drivers that do not support online extend can also be
tested. That patch will be merged first, and after a reasonable time for
people to check whether their backends support the feature, we will
proceed to merge the devstack patch[1], triggering the tests in all CIs
and infra jobs.

Please let us know if you have any questions or concerns about it.

Kind regards,
Erlon
_
[1] https://review.openstack.org/#/c/572188/
[2] https://review.openstack.org/#/c/578463/
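For anyone checking their backend ahead of time, the tempest switch involved is of this shape (the section and option name are my recollection of the tempest tree, not confirmed by this thread; verify against your tempest version):

```ini
[volume-feature-enabled]
# Set to True only if the backend supports extending an in-use
# (attached) volume; the devstack patch above effectively makes
# this the default for CI jobs.
extend_attached_volume = True
```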


Re: [openstack-dev] Sphinx testing fun

2018-10-05 Thread Stephen Finucane
On Thu, 2018-10-04 at 18:00 -0400, Doug Hellmann wrote:
> Stephen Finucane  writes:
> 
> > On Thu, 2018-10-04 at 07:21 -0400, Doug Hellmann wrote:
> > > Stephen Finucane  writes:
> 
> [snip]
> 
> > > > Anyway, we can go figure out what's changed here and handle it but this
> > > > is, at best, going to be a band aid. The fact is 'sphinx_testing' is
> > > > unmaintained and has been for some time now. The new hotness is
> > > > 'sphinx.testing' [3], which is provided (with zero documentation) as
> > > > part of Sphinx. Unfortunately, this uses pytest fixtures [4] which I'm
> > > > pretty sure Monty (and a few others?) are vehemently against using in
> > > > OpenStack. That leaves us with three options:
> > > > 
> > > >  * Take over 'sphinx_testing' and bring it up-to-date. Maintain
> > > >forever.
> > > >  * Start using 'sphinx.testing' and everything it comes with
> > > >  * Delete any tests that use 'sphinx_testing' and deal with the lack of
> > > >coverage
> > > 
> > > Could we change our tests to use pathlib to wrap app.outdir and get the
> > > same results as before?
> > 
> > That's what I've done [2], which is kind of based on how I fixed this
> > in Sphinx. However, this is at best a stopgap. The fact remains that
> > 'sphinx_testing' is dead and the large changes that Sphinx is
> > undergoing (2.0 will be Python 3 only, with multiple other fixes) will
> > make further breakages more likely. Unless we want a repeat of the Mox
> > situation, I do think we should start thinking about this sooner rather
> > than later.
> 
> Yeah, it sounds like we need to deal with the change.
> 
> It looks like only the os-api-ref repo uses sphinx-testing. How many
> tests are we talking about having to rewrite/update there?
> 
> Doug

That's good news. I'd expected other projects to use it, but nothing
I've worked on does (and that likely constitutes a large percentage of
the Sphinx extensions in OpenStack). I see four failing tests, so I
guess, if they break again, we can opt for option 3 above and deal
with it. I can't see os-api-ref changing much in the future
(barring adding PDF support at some point).
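For the record, the pathlib stopgap amounts to wrapping the build output directory before poking at artifacts, along these lines (illustrative; `outdir` here stands in for a real Sphinx `app.outdir`):

```python
import pathlib
import tempfile

# sphinx_testing-era tests joined app.outdir and read files from it;
# wrapping the directory in pathlib.Path ourselves makes joining and
# reading independent of what type Sphinx hands back.
outdir = tempfile.mkdtemp()  # stand-in for app.outdir
(pathlib.Path(outdir) / "index.html").write_text("<html>built</html>")
html = (pathlib.Path(outdir) / "index.html").read_text()
```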

Stephen




Re: [openstack-dev] [release][searchlight] What should I do with the missing releases?

2018-10-05 Thread Trinh Nguyen
Thanks, Doug.

On Fri, Oct 5, 2018 at 8:42 PM Doug Hellmann  wrote:

> Trinh Nguyen  writes:
>
> > Dear release team,
> >
> > One thing comes up in my mind when preparing for the stein-1 release of
> > Searchlight that is what should we do with the missing releases (i.e.
> > Rocky)? Can I just ignore it or do I have to create a branch for it?
>
> There was no rocky release, so I don't really see any reason to create
> the branch. There isn't anything to maintain.
>
> Doug
>


-- 
*Trinh Nguyen*
*www.edlab.xyz *


Re: [openstack-dev] [nova][stable] Preparing for ocata-em (extended maintenance)

2018-10-05 Thread Matt Riedemann

The ocata-em tag request is up for review:

https://review.openstack.org/#/c/608296/

On 9/28/2018 11:21 AM, Matt Riedemann wrote:
Per the other thread on this [1] I've created an etherpad [2] to track 
what needs to happen to get nova's stable/ocata branch ready for 
Extended Maintenance [3] which means we need to flush our existing Ocata 
backports that we want in the final Ocata release before tagging the 
branch as ocata-em, after which point we won't do releases from that 
branch anymore.


The etherpad lists each open ocata backport along with any of its 
related backports on newer branches like pike/queens/etc. Since we need 
the backports to go in order, we need to review and merge the changes on 
the newer branches first. With the state of the gate lately, we really 
can't sit on our hands here because it will probably take up to a week 
just to merge all of the changes for each branch.


Once the Ocata backports are flushed through, we'll cut the final 
release and tag the branch as being in extended maintenance.


Do we want to coordinate a review day next week for the 
nova-stable-maint core team, like Tuesday, or just trust that you all 
know who you are and will help out as necessary in getting these reviews 
done? Non-stable cores are also welcome to help review here to make sure 
we're not missing something, which is also a good way to get noticed as 
caring about stable branches and eventually get you on the stable maint 
core team.


[1] 
http://lists.openstack.org/pipermail/openstack-dev/2018-September/thread.html#134810 


[2] https://etherpad.openstack.org/p/nova-ocata-em
[3] 
https://docs.openstack.org/project-team-guide/stable-branches.html#extended-maintenance



--

Thanks,

Matt



Re: [openstack-dev] [keystone] Keystone Team Update - Week of 1 October 2018

2018-10-05 Thread Morgan Fainberg
On Fri, Oct 5, 2018, 07:04 Colleen Murphy  wrote:

> # Keystone Team Update - Week of 1 October 2018
>
> ## News
>
> ### JSON-home
>
> As Morgan works through the flaskification project, it's been clear that
> some of the JSON-home[1] code could use some refactoring and that the
> document itself is inconsistent[2], but we're unclear whether anyone uses
> this or cares if it changes. If you have ever used keystone's JSON-home
> implementation, come talk to us.
>
> [1] https://adam.younglogic.com/2018/01/using-json-home-keystone/
> [2]
> http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-10-02.log.html#t2018-10-02T18:22:25
>
> ## Open Specs
>
> Search query: https://bit.ly/2Pi6dGj
>
> We still only have three specs targeted at Stein, but Adam has revived
> several "ongoing" specs that can use some eyes, please take a look[3].
>
> [3] https://bit.ly/2OyDLTh
>
> ## Recently Merged Changes
>
> Search query: https://bit.ly/2pquOwT
>
> We merged 19 changes this week.
>
> ## Changes that need Attention
>
> Search query: https://bit.ly/2PUk84S
>
> There are 41 changes that are passing CI, not in merge conflict, have no
> negative reviews and aren't proposed by bots.
>
> One of these is a proposal to add rate-limiting to keystoneauth[4], would
> be good to get some more reactions to it.
>
> Another is the flaskification patch of doom[5] which will definitely need
> some close attention.
>
> [4] https://review.openstack.org/605043
> [5] https://review.openstack.org/603461
>
> ## Bugs
>
> This week we opened 5 new bugs and closed 7.
>
> Bugs opened (5)
> Bug #1795487 (keystone:Undecided) opened by Amy Marrich
> https://bugs.launchpad.net/keystone/+bug/1795487
> Bug #1795800 (keystone:Undecided) opened by Andy Ngo
> https://bugs.launchpad.net/keystone/+bug/1795800
> Bug #1796077 (keystone:Undecided) opened by Ching Kuo
> https://bugs.launchpad.net/keystone/+bug/1796077
> Bug #1796247 (keystone:Undecided) opened by Yang Youseok
> https://bugs.launchpad.net/keystone/+bug/1796247
> Bug #1795496 (oslo.policy:Undecided) opened by Adam Young
> https://bugs.launchpad.net/oslo.policy/+bug/1795496
>
> Bugs closed (3)
> Bug #1782687 (keystone:Undecided)
> https://bugs.launchpad.net/keystone/+bug/1782687
> Bug #1796077 (keystone:Undecided)
> https://bugs.launchpad.net/keystone/+bug/1796077
> Bug #1796247 (keystone:Undecided)
> https://bugs.launchpad.net/keystone/+bug/1796247
>
> Bugs fixed (4)
> Bug #1794552 (keystone:Medium) fixed by Morgan Fainberg
> https://bugs.launchpad.net/keystone/+bug/1794552
> Bug #1753585 (keystone:Low) fixed by Vishakha Agarwal
> https://bugs.launchpad.net/keystone/+bug/1753585
> Bug #1615076 (keystone:Undecided) fixed by Vishakha Agarwal
> https://bugs.launchpad.net/keystone/+bug/1615076
> Bug #1615076 (python-keystoneclient:Undecided) fixed by Vishakha Agarwal
> https://bugs.launchpad.net/python-keystoneclient/+bug/1615076
>
> ## Milestone Outlook
>
> https://releases.openstack.org/stein/schedule.html
>
> Now just 3 weeks away from the spec proposal freeze.
>
> ## Help with this newsletter
>
> Help contribute to this newsletter by editing the etherpad:
> https://etherpad.openstack.org/p/keystone-team-newsletter
> Dashboard generated using gerrit-dash-creator and
> https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67



As an update on the JSON-home bits: I have worked around the changes
that might otherwise have been needed. The document should remain the
same as before.

--Morgan



Re: [openstack-dev] Ryu integration with Openstack

2018-10-05 Thread Niket Agrawal
Hi,

From what I read so far about the Dragonflow project, it implements a
distributed SDN controller, i.e., there is a controller running in each
of the compute nodes managing the openvswitch instance on that node.
This is also what currently happens with the openvswitch agent on each
node running a Ryu app. Not sure if you misread my previous email; I'd
like to remove this distributed style of SDN controller running in each
node, and have a central controller managing every switch. I prefer to
have Ryu as my central controller as designing a Ryu app is quite
simple. Nevertheless, thanks for mentioning the Dragonflow project.

Regards,
Niket

On Fri, Oct 5, 2018 at 5:03 PM Niket Agrawal  wrote:

> Thank you. I will have a look.
>
> Regards,
> Niket
>
> On Fri, Oct 5, 2018 at 4:15 PM Miguel Angel Ajo Pelayo <
> majop...@redhat.com> wrote:
>
>> have a look at dragonflow project, may be it's similar to what you're
>> trying to accomplish
>>
>> On Fri, Oct 5, 2018, 1:56 PM Niket Agrawal  wrote:
>>
>>> Hi,
>>>
>>> Thanks for the help. I am trying to run a custom Ryu app from the nova
>>> compute node and have all the openvswitches connected to this new
>>> controller. However, to be able to run this new app, I have to first stop
>>> the existing neutron openvswitch agents in the same node as they run Ryu
>>> app (integrated in Openstack) by default. Ryu in Openstack provides basic
>>> functionalities like L2 switching but does not support launching a custom
>>> app at the same time.
>>> I'd like to have a single instance of Ryu controller control all the
>>> openvswitch instances rather than having openvswitch agents in each node
>>> managing the openvswitches separately. For this, I'll probably have to
>>> migrate the existing functionality provided by Ryu app to this new app of
>>> mine. Could you share some suggestions or are you aware of any previous
>>> work done towards this, that I can read about?
>>>
>>> Regards,
>>> Niket
>>>
>>> On Thu, Sep 27, 2018 at 9:21 AM Slawomir Kaplonski 
>>> wrote:
>>>
 Hi,

 Code of app is in
 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_ryuapp.py
 and classes for specific bridge types are in
 https://github.com/openstack/neutron/tree/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native

 > On 27.09.2018, at 00:08, Niket Agrawal  wrote:
 >
 > Hi,
 >
 > Thanks for your reply. Is there a way to access the code that is
 running in the app to see what is the logic implemented in the app?
 >
 > Regards,
 > Niket
 >
 > On Wed, Sep 26, 2018 at 10:31 PM Slawomir Kaplonski <
 skapl...@redhat.com> wrote:
 > Hi,
 >
 > > Wiadomość napisana przez Niket Agrawal  w
 dniu 26.09.2018, o godz. 18:11:
 > >
 > > Hello,
 > >
 > > I have a question regarding the Ryu integration in Openstack. By
 default, the openvswitch bridges (br-int, br-tun and br-ex) are registered
 to a controller running on 127.0.0.1 and port 6633. The output of ovs-vsctl
 get-manager is ptcp:127.0.0.1:6640. This is noticed on the nova
 compute node. However there is a different instance of the same Ryu
 controller running on the neutron gateway as well and the three openvswitch
 bridges (br-int, br-tun and br-ex) are registered to this instance of Ryu
 controller. If I stop neutron-openvswitch agent on the nova compute node,
 the bridges there are no longer connected to the controller, but the
 bridges in the neutron gateway continue to remain connected to the
 controller. Only when I stop the neutron openvswitch agent in the neutron
 gateway as well, the bridges there get disconnected.
 > >
 > > I'm unable to find where in the Openstack code I can access this
 implementation, because I intend to make a few tweaks to this architecture
 which is present currently. Also, I'd like to know which app is the Ryu SDN
 controller running by default at the moment. I feel the information in the
 code can help me find it too.
 >
 > Ryu app is started by neutron-openvswitch-agent in:
 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/main.py#L34
 > Is it what You are looking for?
 >
 > >
 > > Regards,
 > > Niket
 > >

Re: [openstack-dev] Ryu integration with Openstack

2018-10-05 Thread Niket Agrawal
Thank you. I will have a look.

Regards,
Niket

On Fri, Oct 5, 2018 at 4:15 PM Miguel Angel Ajo Pelayo 
wrote:

> have a look at dragonflow project, may be it's similar to what you're
> trying to accomplish

Re: [openstack-dev] [placement] update 18-40

2018-10-05 Thread Eric Fried
> * What should we do about nova calling the placement db, like in
>  
> [nova-manage](https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L416)

This should be purely a placement-side migration, nah?

>   and
>  
> [nova-status](https://github.com/openstack/nova/blob/master/nova/cmd/status.py#L254).

For others' reference, Chris and I have been discussing this [1] in the
spec review that was prompted by the above. As of the last episode: a)
we're not convinced this status check is worth having in the first
place; but if it is, b) the algorithm being used currently is pretty
weak, and will soon be actually bogus; and c) there's a suggestion for a
"better" (if not particularly efficient) alternative that uses the API
instead of going directly to the db.
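
For illustration, the API-based alternative amounts to placement queries
like GET /resource_providers?resources=VCPU:1,MEMORY_MB:512. A minimal
sketch of building such a query follows; the helper name is mine, not an
existing nova or placement function, and microversion negotiation is
omitted:

```python
from urllib.parse import urlencode


def resource_providers_url(base, resources):
    """Build a placement GET /resource_providers query URL.

    ``resources`` maps resource class -> amount, e.g. {"VCPU": 1}.
    (Illustrative helper only, not part of nova or placement.)
    """
    qs = urlencode({"resources": ",".join(
        "%s:%d" % (rc, amt) for rc, amt in sorted(resources.items()))})
    return "%s/resource_providers?%s" % (base.rstrip("/"), qs)
```

e.g. resource_providers_url("http://placement", {"VCPU": 1, "MEMORY_MB": 512})
gives http://placement/resource_providers?resources=MEMORY_MB%3A512%2CVCPU%3A1.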

-efried

[1]
https://review.openstack.org/#/c/600016/3/specs/stein/approved/list-rps-having.rst@49

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Ryu integration with Openstack

2018-10-05 Thread Miguel Angel Ajo Pelayo
have a look at dragonflow project, may be it's similar to what you're
trying to accomplish

On Fri, Oct 5, 2018, 1:56 PM Niket Agrawal  wrote:

> Hi,
>
> Thanks for the help. I am trying to run a custom Ryu app from the nova
> compute node and have all the openvswitches connected to this new
> controller. However, to be able to run this new app, I have to first stop
> the existing neutron openvswitch agents in the same node as they run Ryu
> app (integrated in Openstack) by default. Ryu in Openstack provides basic
> functionalities like L2 switching but does not support launching a custom
> app at the same time.
> I'd like to have a single instance of Ryu controller control all the
>> openvswitch instances rather than having openvswitch agents in each node
> managing the openvswitches separately. For this, I'll probably have to
> migrate the existing functionality provided by Ryu app to this new app of
> mine. Could you share some suggestions or are you aware of any previous
> work done towards this, that I can read about?
>
> Regards,
> Niket

Re: [openstack-dev] [Horizon] Horizon tutorial didn`t work

2018-10-05 Thread Ivan Kolodyazhny
Hi Jea-Min,

I filed a bug [1] and proposed a fix for it [2]

[1] https://bugs.launchpad.net/horizon/+bug/1796312
[2] https://review.openstack.org/608274

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/


On Tue, Oct 2, 2018 at 6:55 AM Jea-Min Lim  wrote:

> Thanks for the reply.
>
> If you need any detailed information, let me know.
>
> Regards,
>
> 2018년 10월 1일 (월) 오후 6:53, Ivan Kolodyazhny 님이 작성:
>
>> Hi  Jea-Min,
>>
>> Thank you for your report. I'll check the manual and fix it asap.
>>
>>
>> Regards,
>> Ivan Kolodyazhny,
>> http://blog.e0ne.info/
>>
>>
>> On Mon, Oct 1, 2018 at 9:38 AM Jea-Min Lim  wrote:
>>
>>> Hello everyone,
>>>
>>> I'm following the "Building a Dashboard using Horizon" tutorial.
>>> (link:
>>> https://docs.openstack.org/horizon/latest/contributor/tutorials/dashboard.html#tutorials-dashboard
>>> )
>>>
>>> However, the provided custom management command doesn't create the
>>> boilerplate code.
>>>
>>> I typed tox -e manage -- startdash mydashboard --target
>>> openstack_dashboard/dashboards/mydashboard
>>>
>>> and the attached screenshot file is the execution result.
>>>
>>> Are there any recommendations to solve this problem?
>>>
>>> Regards.
>>>
>>> [image: result_jmlim.PNG]
>>>


[openstack-dev] [keystone] Keystone Team Update - Week of 1 October 2018

2018-10-05 Thread Colleen Murphy
# Keystone Team Update - Week of 1 October 2018

## News

### JSON-home

As Morgan works through the flaskification project, it has become clear that 
some of the JSON-home[1] code could use refactoring and that the document 
itself is inconsistent[2], but it's unclear whether anyone uses this or cares 
if it changes. If you have ever used keystone's JSON-home implementation, come 
talk to us.

[1] https://adam.younglogic.com/2018/01/using-json-home-keystone/
[2] 
http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-10-02.log.html#t2018-10-02T18:22:25

## Open Specs

Search query: https://bit.ly/2Pi6dGj

We still only have three specs targeted at Stein, but Adam has revived several 
"ongoing" specs that could use some eyes; please take a look[3].

[3] https://bit.ly/2OyDLTh

## Recently Merged Changes

Search query: https://bit.ly/2pquOwT

We merged 19 changes this week.

## Changes that need Attention

Search query: https://bit.ly/2PUk84S

There are 41 changes that are passing CI, are not in merge conflict, have no 
negative reviews, and aren't proposed by bots.

One of these is a proposal to add rate-limiting to keystoneauth[4]; it would 
be good to get some more reactions to it.

Another is the flaskification patch of doom[5], which will definitely need 
some close attention.

[4] https://review.openstack.org/605043
[5] https://review.openstack.org/603461
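
For context, client-side rate limiting of the kind proposed in [4] is
commonly implemented as a token bucket. A minimal sketch, purely
illustrative and not the keystoneauth API actually being proposed:

```python
import time


class TokenBucket:
    """Allow roughly `rate` requests/second with bursts up to `burst`."""

    def __init__(self, rate, burst, clock=time.monotonic):
        self.rate = float(rate)
        self.capacity = float(burst)
        self.clock = clock
        self.tokens = self.capacity   # start with a full bucket
        self.last = clock()

    def allow(self):
        now = self.clock()
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A client would call allow() before each request and sleep (or fail fast)
when it returns False.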

## Bugs

This week we opened 5 new bugs and closed 7.

Bugs opened (5) 
Bug #1795487 (keystone:Undecided) opened by Amy Marrich 
https://bugs.launchpad.net/keystone/+bug/1795487 
Bug #1795800 (keystone:Undecided) opened by Andy Ngo 
https://bugs.launchpad.net/keystone/+bug/1795800 
Bug #1796077 (keystone:Undecided) opened by Ching Kuo 
https://bugs.launchpad.net/keystone/+bug/1796077 
Bug #1796247 (keystone:Undecided) opened by Yang Youseok 
https://bugs.launchpad.net/keystone/+bug/1796247 
Bug #1795496 (oslo.policy:Undecided) opened by Adam Young 
https://bugs.launchpad.net/oslo.policy/+bug/1795496 

Bugs closed (3) 
Bug #1782687 (keystone:Undecided) 
https://bugs.launchpad.net/keystone/+bug/1782687 
Bug #1796077 (keystone:Undecided) 
https://bugs.launchpad.net/keystone/+bug/1796077 
Bug #1796247 (keystone:Undecided) 
https://bugs.launchpad.net/keystone/+bug/1796247 

Bugs fixed (4) 
Bug #1794552 (keystone:Medium) fixed by Morgan Fainberg 
https://bugs.launchpad.net/keystone/+bug/1794552 
Bug #1753585 (keystone:Low) fixed by Vishakha Agarwal 
https://bugs.launchpad.net/keystone/+bug/1753585 
Bug #1615076 (keystone:Undecided) fixed by Vishakha Agarwal 
https://bugs.launchpad.net/keystone/+bug/1615076 
Bug #1615076 (python-keystoneclient:Undecided) fixed by Vishakha Agarwal 
https://bugs.launchpad.net/python-keystoneclient/+bug/1615076

## Milestone Outlook

https://releases.openstack.org/stein/schedule.html

We are now just 3 weeks away from the spec proposal freeze.

## Help with this newsletter

Help contribute to this newsletter by editing the etherpad: 
https://etherpad.openstack.org/p/keystone-team-newsletter
Dashboard generated using gerrit-dash-creator and 
https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67



[openstack-dev] [placement] update 18-40

2018-10-05 Thread Chris Dent


HTML: https://anticdent.org/placement-update-18-40.html

Here's this week's placement update. We remain focused on
specs and pressing issues with extraction, mostly because, until the
extraction is "done" in some form, doing much other work is a bit
premature.

# Most Important

There have been several discussions recently about what to do with
options that impact both scheduling and configuration. Some of this
was in the thread about [intended purposes of
traits](http://lists.openstack.org/pipermail/openstack-dev/2018-October/thread.html#135301),
but more recently there was discussion on how to support guests
that want an HPET. Chris Friesen [summarized a
hangout](http://lists.openstack.org/pipermail/openstack-dev/2018-October/135446.html)
that happened yesterday that will presumably be reflected in an
[in-progress spec](https://review.openstack.org/#/c/607989/1).

The work to get [grenade upgrading to
placement](https://review.openstack.org/#/c/604454/) is very close.
After several iterations of tweaking, the grenade jobs are now
passing. There are still some adjustments to get devstack jobs
working, but the way is relatively clear. More on this in
"extraction" below, but the reason this is most important is that
this stuff allows us to do proper integration and upgrade testing,
without which it is hard to have confidence.

# What's Changed

In both placement and nova, placement is no longer using
`get_legacy_facade()`. This will remove some annoying deprecation
warnings.

The nova->placement database migration script for MySQL has merged.
The postgresql version is still [up for
review](https://review.openstack.org/#/c/604028/).

Consumer generations are now being used in some allocation handling
in nova.

# Questions

* What should we do about nova calling the placement db, like in
  
[nova-manage](https://github.com/openstack/nova/blob/master/nova/cmd/manage.py#L416)
  and
  
[nova-status](https://github.com/openstack/nova/blob/master/nova/cmd/status.py#L254)?

* Should we consider starting a new extraction etherpad? The [old
  one](https://etherpad.openstack.org/p/placement-extract-stein-3)
  has become a bit noisy and out of date.

# Bugs

* Placement related [bugs not yet in progress](https://goo.gl/TgiPXb): 17.
  -1.
* [In progress placement bugs](https://goo.gl/vzGGDQ) 8. -1.

# Specs

Many of these specs don't seem to be getting much attention. Can the
dead ones be abandoned?

* Account for host agg allocation ratio in placement (still in rocky/)

* Add subtree filter for GET /resource_providers

* Resource provider - request group mapping in allocation candidate

* VMware: place instances on resource pool (still in rocky/)

* Standardize CPU resource tracking

* Allow overcommit of dedicated CPU (has an alternative which changes
  allocations to a float)

* List resource providers having inventory

* Bi-directional enforcement of traits

* Allow transferring ownership of instance

* Modelling passthrough devices for report to placement

* Propose counting quota usage from placement and API database (a bit
  out of date but may be worth resurrecting)

* Spec: allocation candidates in tree

* [WIP] generic device discovery policy

* Nova Cyborg interaction specification

* Supporting virtual NVDIMM devices

* Spec: Support filtering by forbidden aggregate

* Proposes NUMA topology with RPs

* Support initial allocation ratios

* Count quota based on resource class

* WIP: High Precision Event Timer (HPET) on x86 guests

* Add support for emulated virtual TPM

* Limit instance create max_count (spec) (has some concurrency issues
  related to placement)

* Adds spec for instance live resize

So many specs.

# Main Themes

## Making Nested Useful

Work on getting nova's use of nested resource providers happy and
fixing bugs discovered in placement in the process. This is creeping
ahead. There is plenty of discussion going along nearby with regards
to various ways they are being 

Re: [openstack-dev] [nova] [ironic] agreement on how to specify options that impact scheduling and configuration

2018-10-05 Thread Jay Pipes

Added [ironic] topic.

On 10/04/2018 06:06 PM, Chris Friesen wrote:
While discussing the "Add HPET timer support for x86 guests" 
blueprint[1] one of the items that came up was how to represent what are 
essentially flags that impact both scheduling and configuration.  Eric 
Fried posted a spec to start a discussion[2], and a number of nova 
developers met on a hangout to hash it out.  This is the result.


In this specific scenario the goal was to allow the user to specify that 
their image required a virtual HPET.  For efficient scheduling we wanted 
this to map to a placement trait, and the virt driver also needed to 
enable the feature when booting the instance.  (This can be generalized 
to other similar problems, including how to specify scheduling and 
configuration information for Ironic.)


We discussed two primary approaches:

The first approach was to specify an arbitrary "key=val" in flavor 
extra-specs or image properties, which nova would automatically 
translate into the appropriate placement trait before passing it to 
placement.  Once scheduled to a compute node, the virt driver would look 
for "key=val" in the flavor/image to determine how to proceed.


The second approach was to directly specify the placement trait in the 
flavor extra-specs or image properties.  Once scheduled to a compute 
node, the virt driver would look for the placement trait in the 
flavor/image to determine how to proceed.


Ultimately, the decision was made to go with the second approach.  The 
result is that it is officially acceptable for virt drivers to key off 
placement traits specified in the image/flavor in order to turn on/off 
configuration options for the instance.  If we do get down to the virt 
driver and the trait is set, and the driver for whatever reason 
determines it's not capable of flipping the switch, it should fail.


Ironicers, pay attention to the above! :) It's a green light from Nova 
to use the traits list contained in the flavor extra specs and image 
metadata when (pre-)configuring an instance.


It should be noted that it only makes sense to use placement traits for 
things that affect scheduling.  If it doesn't affect scheduling, then it 
can be stored in the flavor extra-specs or image properties separate 
from the placement traits.  Also, this approach only makes sense for 
simple booleans.  Anything requiring more complex configuration will 
likely need additional extra-spec and/or config and/or unicorn dust.


Ironicers, also pay close attention to the advice above. Things that are 
not "scheduleable" -- in other words, things that don't filter the list 
of hosts that a workload can land on -- should not go in traits.


Finally, here's the HPET os-traits patch. Reviews welcome (it's a tiny patch):

https://review.openstack.org/608258
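
As a sketch, the driver-side check described above could look like the
following. Both the helper and the exact trait name (assumed here to be
COMPUTE_TIME_HPET) are illustrative, not nova code:

```python
REQUIRED = "required"


def wants_hpet(flavor_extra_specs, image_props,
               trait="COMPUTE_TIME_HPET"):
    """Return True if the flavor or image requires the HPET trait.

    Flavors and images carry required traits as "trait:<NAME>=required";
    this mirrors the approach agreed above (sketch only).
    """
    key = "trait:%s" % trait
    return (flavor_extra_specs.get(key) == REQUIRED
            or image_props.get(key) == REQUIRED)
```

Per the agreement, if this returns True and the driver cannot actually
flip the switch, the boot should fail rather than silently ignore it.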

Best,
-jay


Chris

[1] https://blueprints.launchpad.net/nova/+spec/support-hpet-on-guest
[2] 
https://review.openstack.org/#/c/607989/1/specs/stein/approved/support-hpet-on-guest.rst 





Re: [openstack-dev] [tc] bringing back formal TC meetings

2018-10-05 Thread Julia Kreger
+1 to bringing back formal meetings. A few replies below regarding
time/agenda.

On Fri, Oct 5, 2018 at 5:38 AM Doug Hellmann  wrote:

> Thierry Carrez  writes:
>
> > Ghanshyam Mann wrote:
> >>    On Fri, 05 Oct 2018 02:47:53 +0900 Jeremy Stanley <
> fu...@yuggoth.org> wrote 
> >>   > On 2018-10-04 13:40:05 -0400 (-0400), Doug Hellmann wrote:
> >>   > [...]
> >>   > > TC members, please reply to this thread and indicate if you would
> >>   > > find meeting at 1300 UTC on the first Thursday of every month
> >>   > > acceptable, and of course include any other comments you might
> >>   > > have (including alternate times).
> >>   >
> >>   > This time is acceptable to me. As long as we ensure that community
> >>   > feedback continues more frequently in IRC and on the ML (for example
> >>   > by making it clear that this meeting is expressly *not* for that)
> >>   > then I'm fine with resuming formal meetings.
> >>
> >> +1. Time works fine for me, Thanks for considering the APAC TZ.
> >>
> >> I agree that we should keep encouraging the  usual discussion in
> existing office hours, IRC or ML. I will be definitely able to attend other
> 2 office hours (Tuesday  and Wednesday) which are suitable for my TZ.
> >
> > 1300 UTC is obviously good for me, but once we are off DST that will
> > mean 5am for our Pacific Time people (do we have any left?).
> >
> > Maybe 1400 UTC would be a better trade-off?
>
> Julia is out west, but I think not all the way to PST.
>

My home time zone is PST. It would be awesome if we could hold the meeting
an hour later, but I can get up early in the morning once a month. If we
choose to meet more regularly, then a one hour later start would be more
appreciated if it is not too much of an inconvenience to APAC TC members.
That being said, I do typically get up early, just not 0500 early that
often.
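
For reference, the conversion being discussed is easy to double-check
with Python 3.9+'s zoneinfo (a standard-time January date is chosen to
show the off-DST case; the helper is just for illustration):

```python
from datetime import datetime
from zoneinfo import ZoneInfo


def local_hour(utc_hour, tz, when=(2019, 1, 3)):
    """Local wall-clock hour of a given UTC hour on a given date."""
    utc = datetime(*when, hour=utc_hour, tzinfo=ZoneInfo("UTC"))
    return utc.astimezone(ZoneInfo(tz)).hour

# Off DST (January): 1300 UTC is 05:00 Pacific and 22:00 in Tokyo;
# moving the meeting to 1400 UTC makes it 06:00 Pacific.
```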


>
> > Regarding frequency, I agree with mnaser that once per month might be
> > too rare. That means only 5-ish meetings for a given a 6-month
> > membership. But that can work if we use the meeting as a formal progress
> > status checkpoint, rather than a way to discuss complex topics.
>
> I think we can definitely manage the agenda to minimize the number of
> complex discussions. If that proves to be too hard, I wouldn't mind
> meeting more often, but there does seem to be a lot of support for
> preferring other venues for those conversations.
>
>
+1 I think there is a point where we need to recognize there is a time and
place for everything, and some of those long-running, complex conversations
might not be well suited for what would essentially be "review business
status" meetings. If we have any clue that something is going to be a very
long and drawn-out discussion, then I feel we should make an effort to
schedule it individually.


[openstack-dev] [puppet] Heads up for changes causing restarts!

2018-10-05 Thread Tobias Urdin

Hello,

Due to bug fixes that have been needed, we are probably going to merge
some changes to the Puppet modules that will cause a refresh of their
services, meaning the services will be restarted.

If you are following the stable branches (stable/rocky in this case) and
not using tagged releases when pulling in the Puppet OpenStack modules,
be aware that restarts of services might happen when you deploy new
changes.

These, for example, are bug fixes that will likely cause restarts of the
Horizon and Cinder services [1] [2] [3].

Feel free to reach out to us at #puppet-openstack if you have any concerns.

[1] https://review.openstack.org/#/c/608244/
[2] https://review.openstack.org/#/c/607964/ (if backported to Rocky later on)
[3] https://review.openstack.org/#/c/605071/

Best regards
Tobias



Re: [openstack-dev] Ryu integration with Openstack

2018-10-05 Thread Niket Agrawal
Hi,

Thanks for the help. I am trying to run a custom Ryu app from the nova
compute node and have all the openvswitches connected to this new
controller. However, to be able to run this new app, I first have to stop
the existing neutron openvswitch agents on the same node, as they run the
Ryu app (integrated in OpenStack) by default. Ryu in OpenStack provides
basic functionality like L2 switching but does not support launching a
custom app at the same time.
I'd like to have a single instance of the Ryu controller control all the
openvswitch instances rather than having openvswitch agents on each node
managing the openvswitches separately. For this, I'll probably have to
migrate the existing functionality provided by the Ryu app to this new app
of mine. Could you share some suggestions, or are you aware of any previous
work in this direction that I can read about?
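
A central L2 app would have to re-implement what the agent's Ryu app does
per node; the core of that is MAC learning. A rough pure-Python sketch
(Ryu event-handler boilerplate omitted, all names are mine):

```python
class MacLearningTable:
    """Sketch of the MAC-learning state a central L2 app would keep.

    Keyed by (datapath id, MAC) so a single controller instance can
    manage the Open vSwitch bridges of every node.
    """

    def __init__(self):
        self.mac_to_port = {}

    def learn(self, dpid, src_mac, in_port):
        # remember which port this source MAC was seen on
        self.mac_to_port[(dpid, src_mac)] = in_port

    def out_port(self, dpid, dst_mac):
        # known destination -> its port; None -> caller should flood
        return self.mac_to_port.get((dpid, dst_mac))
```

In a real Ryu app this table would be driven from the packet-in handler
and used to install flow rules on the relevant bridge.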

Regards,
Niket

On Thu, Sep 27, 2018 at 9:21 AM Slawomir Kaplonski 
wrote:

> Hi,
>
> Code of app is in
> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/ovs_ryuapp.py
> and classes for specific bridge types are in
> https://github.com/openstack/neutron/tree/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native
>
> > Wiadomość napisana przez Niket Agrawal  w dniu
> 27.09.2018, o godz. 00:08:
> >
> > Hi,
> >
> > Thanks for your reply. Is there a way to access the code that is running
> in the app to see what is the logic implemented in the app?
> >
> > Regards,
> > Niket
> >
> > On Wed, Sep 26, 2018 at 10:31 PM Slawomir Kaplonski 
> wrote:
> > Hi,
> >
> > > Wiadomość napisana przez Niket Agrawal  w dniu
> 26.09.2018, o godz. 18:11:
> > >
> > > Hello,
> > >
> > > I have a question regarding the Ryu integration in Openstack. By
> default, the openvswitch bridges (br-int, br-tun and br-ex) are registered
> to a controller running on 127.0.0.1 and port 6633. The output of ovs-vsctl
> get-manager is ptcp:127.0.0.1:6640. This is noticed on the nova compute
> node. However there is a different instance of the same Ryu controller
> running on the neutron gateway as well and the three openvswitch bridges
> (br-int, br-tun and br-ex) are registered to this instance of Ryu
> controller. If I stop neutron-openvswitch agent on the nova compute node,
> the bridges there are no longer connected to the controller, but the
> bridges in the neutron gateway continue to remain connected to the
> controller. Only when I stop the neutron openvswitch agent in the neutron
> gateway as well, the bridges there get disconnected.
> > >
> > > I'm unable to find where in the Openstack code I can access this
> implementation, because I intend to make a few tweaks to this architecture
> which is present currently. Also, I'd like to know which app is the Ryu SDN
> controller running by default at the moment. I feel the information in the
> code can help me find it too.
> >
> > Ryu app is started by neutron-openvswitch-agent in:
> https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/openflow/native/main.py#L34
> > Is it what You are looking for?
> >
> > >
> > > Regards,
> > > Niket
> > >
> >
> > —
> > Slawek Kaplonski
> > Senior software engineer
> > Red Hat
> >
> >
> >
>
> —
> Slawek Kaplonski
> Senior software engineer
> Red Hat
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Distutils] pip 18.1 has been released!

2018-10-05 Thread Doug Hellmann

Watch for changes in pip's behavior. - Doug

Pradyun Gedam  writes:

> On behalf of the PyPA, I am pleased to announce that pip 18.1 has just
> been released.
>
> To install pip 18.1, you can run::
>
>   python -m pip install --upgrade pip
>
> or use get-pip as described in
> https://pip.pypa.io/en/latest/installing. Note that
> if you are using a version of pip supplied by your distribution
> vendor, vendor-supplied
> upgrades will be available in due course.
>
> The highlights of this release are:
>
> - Python 3.7 is now supported
> - Dependency Links support is now scheduled for removal in pip 19.0
> (the next pip
>   release, scheduled in January 2019).
> - Platform-specific options can now be used with the --target option,
> to enable certain
>   workflows.
> - Much more helpful error messages on invalid configuration files
> - Many bug fixes and minor improvements
>
> Thanks to everyone who put so much effort into the new release. Many of the
> contributions came from community members, whether in the form of code,
> participation in design discussions and/or bug reports. The pip development
> team is extremely grateful to everyone in the community for their 
> contributions.
>
> Thanks,
> Pradyun



Re: [openstack-dev] [api] Open API 3.0 for OpenStack API

2018-10-05 Thread Jim Rollenhagen
GraphQL has introspection features that allow clients to pull the schema
(types, queries, mutations, etc): https://graphql.org/learn/introspection/

That said, it seems like using this in a client like OpenStackSDK would get
messy quickly. Instead of asking for which versions are supported, you'd
have to fetch the schema, map it to actual features somehow, and adjust
queries based on this info.

I guess there might be a middleground where we could fetch the REST API
version, and know from that what GraphQL queries can be made.

// jim


On Fri, Oct 5, 2018 at 7:30 AM Doug Hellmann  wrote:

> Gilles Dubreuil  writes:
>
> >> About the microversion, we discussed with your teammate Dmitry in
> >> another email [1]
> >
> > Obviously microversions are a point of contention.
> > My take is that this is because consuming them has proven harder than
> > developing them.
> > The beauty of GraphQL is that there is no need to deal with versions
> > at all.
> > New fields appear when needed and old ones are marked deprecated.
>
> How does someone using GraphQL to use a cloud know when a specific field
> is available? How can they differentiate what is supported in one cloud
> from what is supported in another, running a different version of the
> same service?
>
> Doug
>


Re: [openstack-dev] [all][Searchlight] Always build universal wheels

2018-10-05 Thread Jeremy Stanley
On 2018-10-05 07:20:01 -0400 (-0400), Doug Hellmann wrote:
[...]
> So, I think this all means we can leave the setup.cfg files as
> they are and not worry about updating the wheel format flag.

I continue to agree that, because of the reasons you state, it is
not urgent to update setup.cfg (either to start supporting universal
wheels or to follow the deprecation/transition on the section name
in the latest wheel release), at least for projects relying on the
OpenStack-specific release jobs. It is still technically more
correct and I don't think we should forbid individual teams from
also updating the setup.cfg files in their repositories should they
choose to do so. That's all I've been trying to say.
-- 
Jeremy Stanley




Re: [openstack-dev] [tc] bringing back formal TC meetings

2018-10-05 Thread Jeremy Stanley
On 2018-10-05 07:40:00 -0400 (-0400), Doug Hellmann wrote:
[...]
> I had in mind "email the chair your topic suggestion" and then "the
> chair emails the agenda to openstack-dev tagged [tc] a bit in advance of
> the meeting". There would also probably be some standing topics, like
> updates for ongoing projects.
> 
> Does that work for everyone?
[...]

Seems fine to me.
-- 
Jeremy Stanley




Re: [openstack-dev] [release][searchlight] What should I do with the missing releases?

2018-10-05 Thread Doug Hellmann
Trinh Nguyen  writes:

> Dear release team,
>
> One question comes to mind when preparing for the stein-1 release of
> Searchlight: what should we do with the missing releases (i.e.
> Rocky)? Can I just ignore them or do I have to create a branch?

There was no rocky release, so I don't really see any reason to create
the branch. There isn't anything to maintain.

Doug



Re: [openstack-dev] [tc] bringing back formal TC meetings

2018-10-05 Thread Doug Hellmann
Chris Dent  writes:

> On Thu, 4 Oct 2018, Doug Hellmann wrote:
>
>> TC members, please reply to this thread and indicate if you would find
>> meeting at 1300 UTC on the first Thursday of every month acceptable, and
>> of course include any other comments you might have (including alternate
>> times).
>
> +1
>
> Also, if we're going to set aside a time for a semi-formal meeting, I
> hope we will have some form of agenda and minutes, with a fairly
> clear process for setting that agenda as well as a process for

I had in mind "email the chair your topic suggestion" and then "the
chair emails the agenda to openstack-dev tagged [tc] a bit in advance of
the meeting". There would also probably be some standing topics, like
updates for ongoing projects.

Does that work for everyone?

Doug

> making sure that the fast and/or rude typers do not dominate the
> discussion during the meetings, as they used to back in the day when
> there were weekly meetings.
>
> The "raising hands" thing that came along towards the end sort of
> worked, so a variant on that may be sufficient.
>
> -- 
> Chris Dent   ٩◔̯◔۶   https://anticdent.org/
> freenode: cdent tw: @anticdent


Re: [openstack-dev] [tc] bringing back formal TC meetings

2018-10-05 Thread Doug Hellmann
Thierry Carrez  writes:

> Ghanshyam Mann wrote:
>>    On Fri, 05 Oct 2018 02:47:53 +0900 Jeremy Stanley  
>> wrote 
>>   > On 2018-10-04 13:40:05 -0400 (-0400), Doug Hellmann wrote:
>>   > [...]
>>   > > TC members, please reply to this thread and indicate if you would
>>   > > find meeting at 1300 UTC on the first Thursday of every month
>>   > > acceptable, and of course include any other comments you might
>>   > > have (including alternate times).
>>   >
>>   > This time is acceptable to me. As long as we ensure that community
>>   > feedback continues more frequently in IRC and on the ML (for example
>>   > by making it clear that this meeting is expressly *not* for that)
>>   > then I'm fine with resuming formal meetings.
>> 
>> +1. Time works fine for me, Thanks for considering the APAC TZ.
>> 
>> I agree that we should keep encouraging the  usual discussion in existing 
>> office hours, IRC or ML. I will be definitely able to attend other 2 office 
>> hours (Tuesday  and Wednesday) which are suitable for my TZ.
>
> 1300 UTC is obviously good for me, but once we are off DST that will 
mean 5am for our Pacific Time people (do we have any left?).
>
> Maybe 1400 UTC would be a better trade-off?

Julia is out west, but I think not all the way to PST.

> Regarding frequency, I agree with mnaser that once per month might be 
> too rare. That means only 5-ish meetings for a given 6-month 
> membership. But that can work if we use the meeting as a formal progress 
> status checkpoint, rather than a way to discuss complex topics.

I think we can definitely manage the agenda to minimize the number of
complex discussions. If that proves to be too hard, I wouldn't mind
meeting more often, but there does seem to be a lot of support for
preferring other venues for those conversations.

Doug



Re: [openstack-dev] [api] Open API 3.0 for OpenStack API

2018-10-05 Thread Doug Hellmann
Gilles Dubreuil  writes:

>> About the microversion, we discussed with your teammate Dmitry in
>> another email [1]
>
> Obviously microversions are a point of contention.
> My take is that this is because consuming them has proven harder than
> developing them.
> The beauty of GraphQL is that there is no need to deal with versions at all.
> New fields appear when needed and old ones are marked deprecated.

How does someone using GraphQL to use a cloud know when a specific field
is available? How can they differentiate what is supported in one cloud
from what is supported in another, running a different version of the
same service?

Doug



Re: [openstack-dev] [all] Zuul job backlog

2018-10-05 Thread Doug Hellmann
Abhishek Kekane  writes:

> Hi Matt,
>
> Thanks for the input, I guess I should use '
> http://git.openstack.org/static/openstack.png' which will definitely work.
> Clark, Matt, Kindly let me know your opinion about the same.

That URL would not be on the local node running the test, and would
eventually exhibit the same problems. In fact we have seen issues
cloning git repositories as part of the tests in the past.

You need to use a localhost URL to ensure that the download doesn't have
to go off of the node. That may mean placing something into the directory
where Apache is serving files as part of the test setup.
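A sketch of that setup, using only the standard library. The file name and contents are made up, and a real job would place the file into the Apache docroot rather than spawning its own server; the point is that the fetch never leaves the node:

```python
# Sketch of the "serve the fixture from localhost" approach: instead of
# downloading an image from git.openstack.org mid-test, write the file
# on the test node and fetch it from 127.0.0.1. File name and contents
# are invented for illustration.
import http.server
import tempfile
import threading
import urllib.request
from functools import partial
from pathlib import Path

def fetch_local_fixture(name: str, data: bytes) -> bytes:
    workdir = tempfile.mkdtemp()
    Path(workdir, name).write_bytes(data)
    # Port 0 lets the OS pick a free port; no external network involved.
    handler = partial(http.server.SimpleHTTPRequestHandler,
                      directory=workdir)
    server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        url = "http://127.0.0.1:%d/%s" % (server.server_address[1], name)
        with urllib.request.urlopen(url) as resp:
            return resp.read()
    finally:
        server.shutdown()

print(fetch_local_fixture("openstack.png", b"fake-png-bytes"))
```

Because the URL is local, the test no longer depends on outside services being reachable from the gate node.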

Doug



Re: [openstack-dev] [all][Searchlight] Always build universal wheels

2018-10-05 Thread Doug Hellmann
Trinh Nguyen  writes:

> Thank Jeremy, Doug for explaining.
>
> On Fri, Oct 5, 2018 at 6:54 AM Doug Hellmann  wrote:
>
>> Jeremy Stanley  writes:
>>
>> > On 2018-10-04 23:11:22 +0900 (+0900), Trinh Nguyen wrote:
>> > [...]
>> >> Please avoid adding universal wheels to the project setup.cfg.
>> > [...]
>> >
>> > Why would you avoid also adding it to the setup.cfg? The change you
>> > cited is merely to be able to continue building universal wheels for
>> > projects while the setup.cfg files are corrected over time, to
>> > reduce the urgency of doing so. Wheel building happens in more
>> > places than just our CI system, so only fixing it in CI is not a
>> > good long-term strategy.
>>
>> I abandoned a couple of dozen patches submitted today by someone who was
>> not coordinating with the goal champions with a message that said those
>> patches were not needed because I didn't want folks to be dealing with
>> this right now.
>>
>> Teams who want to update their setup.cfg can do so, but my intent is to
>> ensure it is not required in order to produce releases with the
>> automation in the short term.

I thought about this some more last night, looking for reasons not to
update all of the setup.cfg files. If this zuul migration project has
shown anything, it's that we need to continue to be creative with
finding ways to avoid touching every branch of every repository when we
have build system changes to make. I support decentralizing the job
management, but I think this is a case where we can avoid doing a bunch
of work, and so we should.

We've been saying we want to update the setup.cfg files to include the
setting to cause a universal wheel to build because we want the local
developer experience when building wheels to be the same as what we get
from the CI system when we publish packages. I don't think that's a real
requirement, though.

The default behavior of bdist_wheel is to create a version-specific
wheel, suitable for use with the version of python used to build it.
The universal flag makes a wheel file that can be used under either
python2 or python3.  Perhaps surprisingly, the contents of a universal
wheel are *exactly* the same as the contents of a version-specific
wheel. Literally the *only* difference is the filename, which includes
both versions so that pip will choose the universal file if no
version-specific file exists.

Therefore, for our CI system, we want to publish universal wheels to
PyPI because they are more usable to consumers installing from there
(including the CI system).

On the other hand, for local builds, there's no particular reason to
prefer a universal wheel over the version-specific format. If someone is
building their own wheels for internal consumption, they can easily
choose to keep the version-specific packages, or add the --universal
flag like we're doing in the CI job.

So, I think this all means we can leave the setup.cfg files as they are
and not worry about updating the wheel format flag.

Doug



[openstack-dev] Monasca agent problem

2018-10-05 Thread amal kammoun
Hello,

I have an issue with the monasca Agent.
In fact, I installed monasca with Openstack using devstack.
I now want to monitor the instances deployed using OpenStack. For that I
installed the monasca agent on each instance, following:
https://github.com/openstack/monasca-agent/blob/master/docs/Agent.md
The problem is that I cannot define alarms for the concerned instance.
Example from my agent: (screenshot omitted)

On my monitoring system I found the alarm definition, but it is not activated.
Also, the instance on which the agent is running is not listed as a server
in the monasca servers list.

Regards,
Amal Kammoun.


[openstack-dev] [release][searchlight] What should I do with the missing releases?

2018-10-05 Thread Trinh Nguyen
Dear release team,

One question comes to mind when preparing for the stein-1 release of
Searchlight: what should we do with the missing releases (i.e. Rocky)?
Can I just ignore them or do I have to create a branch?

Thanks and regards,
-- 
Trinh Nguyen
www.edlab.xyz


Re: [openstack-dev] [tc] bringing back formal TC meetings

2018-10-05 Thread Chris Dent

On Thu, 4 Oct 2018, Doug Hellmann wrote:


TC members, please reply to this thread and indicate if you would find
meeting at 1300 UTC on the first Thursday of every month acceptable, and
of course include any other comments you might have (including alternate
times).


+1

Also, if we're going to set aside a time for a semi-formal meeting, I
hope we will have some form of agenda and minutes, with a fairly
clear process for setting that agenda as well as a process for
making sure that the fast and/or rude typers do not dominate the
discussion during the meetings, as they used to back in the day when
there were weekly meetings.

The "raising hands" thing that came along towards the end sort of
worked, so a variant on that may be sufficient.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [tc] bringing back formal TC meetings

2018-10-05 Thread Thierry Carrez

Ghanshyam Mann wrote:

   On Fri, 05 Oct 2018 02:47:53 +0900 Jeremy Stanley  
wrote 
  > On 2018-10-04 13:40:05 -0400 (-0400), Doug Hellmann wrote:
  > [...]
  > > TC members, please reply to this thread and indicate if you would
  > > find meeting at 1300 UTC on the first Thursday of every month
  > > acceptable, and of course include any other comments you might
  > > have (including alternate times).
  >
  > This time is acceptable to me. As long as we ensure that community
  > feedback continues more frequently in IRC and on the ML (for example
  > by making it clear that this meeting is expressly *not* for that)
  > then I'm fine with resuming formal meetings.

+1. Time works fine for me, Thanks for considering the APAC TZ.

I agree that we should keep encouraging the  usual discussion in existing 
office hours, IRC or ML. I will be definitely able to attend other 2 office 
hours (Tuesday  and Wednesday) which are suitable for my TZ.


1300 UTC is obviously good for me, but once we are off DST that will 
mean 5am for our Pacific Time people (do we have any left?).


Maybe 1400 UTC would be a better trade-off?

Regarding frequency, I agree with mnaser that once per month might be 
too rare. That means only 5-ish meetings for a given 6-month 
membership. But that can work if we use the meeting as a formal progress 
status checkpoint, rather than a way to discuss complex topics.


--
Thierry Carrez (ttx)



Re: [openstack-dev] [tc] bringing back formal TC meetings

2018-10-05 Thread Ghanshyam Mann



  On Fri, 05 Oct 2018 02:47:53 +0900 Jeremy Stanley  
wrote  
 > On 2018-10-04 13:40:05 -0400 (-0400), Doug Hellmann wrote: 
 > [...] 
 > > TC members, please reply to this thread and indicate if you would 
 > > find meeting at 1300 UTC on the first Thursday of every month 
 > > acceptable, and of course include any other comments you might 
 > > have (including alternate times). 
 >  
 > This time is acceptable to me. As long as we ensure that community 
 > feedback continues more frequently in IRC and on the ML (for example 
 > by making it clear that this meeting is expressly *not* for that) 
 > then I'm fine with resuming formal meetings. 

+1. Time works fine for me, Thanks for considering the APAC TZ.

I agree that we should keep encouraging the usual discussion in the existing
office hours, IRC or the ML. I will definitely be able to attend the other
two office hours (Tuesday and Wednesday), which suit my TZ.

-gmann

 > --  
 > Jeremy Stanley 





Re: [openstack-dev] [api] Open API 3.0 for OpenStack API

2018-10-05 Thread Gilles Dubreuil

Hi Edison,

Sorry for the delay.

Please see inline...

Cheers,
Gilles

On 07/09/18 12:03, Edison Xiang wrote:

Hey gilles,

Thanks your introduction for GraphQL and Relay.

> GraphQL and OpenAPI have a different feature scope and both have pros
> and cons.

I totally agree with you. They can work together.
Right now, I think we have no more work to do to adapt the OpenStack APIs
for Open API.
First, we could sort out Open API schemas based on the current
OpenStack APIs.

and then we can discuss how to use it.


I think a big question is going to be the effort required to bring the
OpenStack APIs into Open API 3.0 compliance.
This is challenging because of the various projects involved and the need
to validate a new solution across all of them.
The best approach is likely to first demonstrate that a new solution is
viable, and then eventually bring it to be accepted globally.
Also, because we don't have unlimited resources, I doubt we're going to
be able to bring both Open API and GraphQL to the table(s).


There is no doubt that the OpenStack APIs can benefit from features such
as schema definitions, self-documentation and better performance,
especially if they are built in or derived from a standard.
Meanwhile, a practical example shows those features in action (for the
skeptical) but also demonstrates how to do it, which clarifies the effort
involved along with the pros and cons. I want to make clear that I'm not
against OpenAPI; I was actually keen to get it on board because of those
benefits.


It will also help compare the solutions (Open API, GraphQL).

So, what do you think about an Open API proof of concept with Neutron?


About the microversion, we discussed with your teammate Dmitry in
another email [1]


Obviously microversions are a point of contention.
My take is that this is because consuming them has proven harder than
developing them.

The beauty of GraphQL is that there is no need to deal with versions at all.
New fields appear when needed and old ones are marked deprecated.
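As an illustration of that model, a schema fragment (hypothetical type and field names) evolves by addition and deprecation rather than by a version bump:

```graphql
# Hypothetical OpenStack-ish type: the old field remains, flagged as
# deprecated in the schema itself, while its replacement lives
# alongside it. Clients see this through introspection.
type Server {
  id: ID!
  flavorRef: String @deprecated(reason: "Use flavor instead.")
  flavor: Flavor
}
```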




[1] 
http://lists.openstack.org/pipermail/openstack-dev/2018-September/134202.html


Best Regards,
Edison Xiang

On Tue, Sep 4, 2018 at 8:37 AM Gilles Dubreuil wrote:




On 30/08/18 13:56, Edison Xiang wrote:

Hi Ed Leafe,

Thanks your reply.
Open API defines a standard interface description for REST APIs.
Open API 3.0 can provide a description (schema) of the current OpenStack
REST APIs.
It will not change the current OpenStack APIs.
I am not a GraphQL expert; I looked up some material about GraphQL.
In my understanding, GraphQL would pull the current OpenAPI together
and provide other APIs based on Relay,


I'm not sure what you mean here; could you please elaborate?



and Open API is used to describe REST APIs and GraphQL is used to
describe Relay APIs.


There is no such thing as "Relay APIs".
GraphQL provides a de-facto API schema, and Relay provides
extensions on top to facilitate re-fetching, paging and more.
GraphQL and OpenAPI have different feature scopes, and both have
pros and cons.
GraphQL delivers an API without using REST verbs, as all requests
are done using POST and its payload.
Beyond that, what would be great (and it will ultimately come) is
to have both of them working together.

The idea of the GraphQL proof of concept is to see what it can bring
and at what cost, in terms of effort and trade-offs, and to compare
this against the effort to adapt the OpenStack APIs to use Open API.

BTW what's the status of Open API 3.0 in regards of Microversion?

Regards,
Gilles



Best Regards,
Edison Xiang

On Wed, Aug 29, 2018 at 9:33 PM Ed Leafe wrote:

On Aug 29, 2018, at 1:36 AM, Edison Xiang wrote:
>
> As we know, Open API 3.0 was released in July 2017, about one
> year ago.
> Open API 3.0 supports some new features, like anyOf, oneOf
> and allOf, that Open API 2.0 (Swagger 2.0) does not.
> Currently, OpenStack projects do not support Open API.
> Also, I found some old emails on the mailing list about
> supporting Open API 2.0 in OpenStack.

There is currently an effort by some developers to
investigate the possibility of using GraphQL with OpenStack
APIs. What would Open API 3.0 provide that GraphQL would not?
I’m asking because I don’t know enough about Open API to
compare them.


-- Ed Leafe






