[openstack-dev] [devstack][python/pip][octavia] pip failure during octavia/ocata image build by devstack

2018-05-17 Thread rezroo
Hello - I'm trying to install devstack Ocata with a working local.conf on a 
new server, but some Python packages have changed, so I end up with this 
error during the build of the octavia image:


   2018-05-18 01:00:26.276 |   Found existing installation: Jinja2 2.8
   2018-05-18 01:00:26.280 | Uninstalling Jinja2-2.8:
   2018-05-18 01:00:26.280 |   Successfully uninstalled Jinja2-2.8
   2018-05-18 01:00:26.839 |   Found existing installation: PyYAML 3.11
   2018-05-18 01:00:26.969 | Cannot uninstall 'PyYAML'. It is a
   distutils installed project and thus we cannot accurately determine
   which files belong to it which would lead to only a partial uninstall.

   2018-05-18 02:05:44.768 | Unmount
   /tmp/dib_build.2fbBBePD/mnt/var/cache/apt/archives
   2018-05-18 02:05:44.796 | Unmount /tmp/dib_build.2fbBBePD/mnt/tmp/pip
   2018-05-18 02:05:44.820 | Unmount
   /tmp/dib_build.2fbBBePD/mnt/tmp/in_target.d
   2018-05-18 02:05:44.844 | Unmount /tmp/dib_build.2fbBBePD/mnt/tmp/ccache
   2018-05-18 02:05:44.868 | Unmount /tmp/dib_build.2fbBBePD/mnt/sys
   2018-05-18 02:05:44.896 | Unmount /tmp/dib_build.2fbBBePD/mnt/proc
   2018-05-18 02:05:44.920 | Unmount /tmp/dib_build.2fbBBePD/mnt/dev/pts
   2018-05-18 02:05:44.947 | Unmount /tmp/dib_build.2fbBBePD/mnt/dev
   2018-05-18 02:05:50.668 |
   +/opt/stack/octavia/devstack/plugin.sh:build_octavia_worker_image:1
   exit_trap
   2018-05-18 02:05:50.679 | +./devstack/stack.sh:exit_trap:494
   local r=1
   2018-05-18 02:05:50.690 |
   ++./devstack/stack.sh:exit_trap:495 jobs -p
   2018-05-18 02:05:50.700 | +./devstack/stack.sh:exit_trap:495
   jobs=
   2018-05-18 02:05:50.710 | +./devstack/stack.sh:exit_trap:498
   [[ -n '' ]]
   2018-05-18 02:05:50.720 | +./devstack/stack.sh:exit_trap:504
   kill_spinner
   2018-05-18 02:05:50.731 | +./devstack/stack.sh:kill_spinner:390 
   '[' '!' -z '' ']'
   2018-05-18 02:05:50.741 | +./devstack/stack.sh:exit_trap:506
   [[ 1 -ne 0 ]]
   2018-05-18 02:05:50.751 | +./devstack/stack.sh:exit_trap:507
   echo 'Error on exit'
   2018-05-18 02:05:50.751 | Error on exit
   2018-05-18 02:05:50.761 | +./devstack/stack.sh:exit_trap:508
   generate-subunit 1526608058 1092 fail
   2018-05-18 02:05:51.148 | +./devstack/stack.sh:exit_trap:509
   [[ -z /tmp ]]
   2018-05-18 02:05:51.157 | +./devstack/stack.sh:exit_trap:512
   /home/stack/devstack/tools/worlddump.py -d /tmp

I've tried pip uninstalling PyYAML and pip installing it before running 
stack.sh, but the error comes back.


   $ sudo pip uninstall PyYAML
   The directory '/home/stack/.cache/pip/http' or its parent directory
   is not owned by the current user and the cache has been disabled.
   Please check the permissions and owner of that directory. If
   executing pip with sudo, you may want sudo's -H flag.
   Uninstalling PyYAML-3.12:
   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/INSTALLER
   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/METADATA
   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/RECORD
   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/WHEEL
   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/top_level.txt
  /usr/local/lib/python2.7/dist-packages/_yaml.so
   Proceed (y/n)? y
  Successfully uninstalled PyYAML-3.12

I've posted my question to the pip folks and they think it's an 
openstack issue: https://github.com/pypa/pip/issues/4805


Is there a workaround here?
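
For reference, a workaround that is often suggested for this class of error is pip's `--ignore-installed` flag, which skips the (impossible) clean uninstall and simply overwrites the distutils-installed files. Whether it can be threaded into the octavia diskimage-builder element is a separate question; the helper below only sketches the invocation (the function name is made up):

```python
import subprocess

def force_reinstall(package, run=False):
    # --ignore-installed makes pip skip the uninstall step entirely
    # and overwrite the distutils-installed files in place.
    cmd = ["pip", "install", "--ignore-installed", package]
    if run:
        subprocess.check_call(cmd)
    return cmd

print(" ".join(force_reinstall("PyYAML")))
```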


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-20

2018-05-17 Thread Rochelle Grober

Thierry Carrez [mailto:thie...@openstack.org]
> 
> Graham Hayes wrote:
> > Any additional background on why we allowed LCOO to operate like this
> > would help a lot.
> 
The group was started back when OPNFV was first getting involved with 
OpenStack.  Many of the members came from that community.  They had a "vision" 
that the members would have to commit to providing developers to address the 
feature gaps the group was concerned with.  There was some interaction between 
them and the Product WG, and I at least attempted to get them to meet and talk 
with the Large Deployment Team(?) (an ops group that met at the Ops midcycles 
and discussed their issues, workarounds, gaps, etc.)

Are they still active?  Is anyone aware of any docs/code/bugfixes/features that 
came out of the group?

--Rocky

> We can't prevent any group of organizations to work in any way they prefer -
> - we can, however, deny them the right to be called an OpenStack
> workgroup if they fail at openly collaborating. We can raise the topic, but in
> the end it is a User Committee decision though, since the LCOO is a User
> Committee-blessed working group.
> 
> Source: https://governance.openstack.org/uc/
> 
> --
> Thierry Carrez (ttx)



[openstack-dev] [Forum] [all] [Stable] OpenStack is "mature" -- time to get serious on Maintainers -- Session etherpad and food for thought for discussion

2018-05-17 Thread Rochelle Grober
Folks,

TL;DR
The last session related to extended releases is: OpenStack is "mature" -- time 
to get serious on Maintainers
It will be in room 220 at 11:00-11:40
The etherpad for the last session in the series on Extended releases is here:
https://etherpad.openstack.org/p/YVR-openstack-maintainers-maint-pt3

There are links to info on other communities’ maintainer 
process/role/responsibilities, as reference material on how others have 
made it work (or not).

The nitty gritty details:

The upcoming Forum is filled with sessions that are focused on issues needed to 
improve and maintain the sustainability of OpenStack projects for the long 
term.  We have discussions on reducing technical debt, extended releases, fast 
forward installs, bringing Ops and User communities closer together, etc.  The 
community is showing it is now invested in activities that are often part of 
“Sustaining Engineering” teams (corporate speak) or “Maintainers” (OSS speak).  
We are doing this; we are thinking about the moving parts to do this; let’s 
think about the contributors who want to do these and bring some clarity to 
their roles and the processes they need to be successful.  I am hoping you read 
this and keep these ideas in mind as you participate in the various Forum 
sessions.  Then you can bring the ideas generated during all these discussions 
to the Maintainers session near the end of the Summit to brainstorm how to 
visualize and define this new(ish) component of our technical community.

So, who has been doing the maintenance work so far?  Mostly unsung 
heroes like the Stable Release team, Release team, Oslo team, project liaisons 
and the community goals champions (yes, moving to py3 is a 
sustaining/maintenance type of activity).  And some operators (Hi, mnaser!).  
We need to lean on their experience and what we think the community will need 
to reduce that technical debt to outline what the common tasks of maintainers 
should be, what else might fall in their purview, and how to partner with them 
to better serve them.

With API lower limits, new tool versions, placement, py3, and even projects 
reaching “code complete” or “maintenance mode,” there is a lot of work for 
maintainers to do (I really don’t like that term, but is there one that fits 
OpenStack’s community?).  It would be great if we could find a way to share the 
load such that we can have part-time contributors here.  We know that operators 
know how to cherry-pick, test in their clouds, and do bug fixes.  How do we pair 
with them to get fixes upstreamed without requiring them to be full-on 
developers?  We have a bunch of alumni who have stopped being “cores” and 
sometimes even developers, but who love our community and might be willing and 
able to put in a few hours a week, maybe reviewing small patches, providing 
help with user/ops submitted patch requests, or whatever.  They were trusted 
with +2 and +W in the past, so we should at least be able to trust they know 
what they know.  We would need some way to identify them to Cores, since they 
would be sort of 1.5 on the voting scale, but…

So, burnout is high in other communities for maintainers.  We need to find a 
way to make sustaining the stable parts of OpenStack sustainable.

Hope you can make the talk, or add to the etherpad, or both.  The etherpad is 
very much still a work in progress (trying to organize it to make sense).  If 
you want to jump in now, go for it, otherwise it should be in reasonable shape 
for use at the session.  I hope we get a good mix of community and a good 
collection of those who are already doing the job without title.

Thanks and see you next week.
--rocky




华为技术有限公司 Huawei Technologies Co., Ltd.
Rochelle Grober
Sr. Staff Architect, Open Source
Office Phone: 408-330-5472
Email: rochelle.gro...@huawei.com




Re: [openstack-dev] [all][api] late addition to forum schedule

2018-05-17 Thread Jeremy Stanley
On 2018-05-17 18:47:06 -0500 (-0500), Matt Riedemann wrote:
> On 5/17/2018 5:23 PM, Matt Riedemann wrote:
> > Not to troll too hard here, but it's kind of frustrating to see that
> > twitter trumps people actually proposing sessions on time and then
> > having them be rejected.
> 
> I reckon this is because there were already a pre-defined set of slots /
> rooms for Forum sessions and we had fewer sessions proposed than reserved
> slots, and that's why adding something in later is not a major issue?

Yes, as I understand it we still have some overflow space too if
planned forum sessions need continuing. Session leaders have
hopefully received details from the event planners on how to reserve
additional space in such situations. As far as I'm aware no proposed
Forum sessions were rejected this time around, and there was some
discussion among members of the TC (in #openstack-tc[*]) before it
was agreed there was room to squeeze this particular latecomer into
the lineup.

[*] 
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-05-14.log.html#t2018-05-14T17:27:05
-- 
Jeremy Stanley




[openstack-dev] [Neutron] [Upgrades] Cancel next IRC meeting (May 24th)

2018-05-17 Thread Luo, Lujin
Hi,

We are canceling our next Neutron Upgrades subteam meeting on May 24th due to 
the summit. 
We will resume on May 31st.

Thanks,
Lujin




Re: [openstack-dev] [all][api] late addition to forum schedule

2018-05-17 Thread Matt Riedemann

On 5/17/2018 5:23 PM, Matt Riedemann wrote:
Not to troll too hard here, but it's kind of frustrating to see that 
twitter trumps people actually proposing sessions on time and then 
having them be rejected.


I reckon this is because there were already a pre-defined set of slots / 
rooms for Forum sessions and we had fewer sessions proposed than 
reserved slots, and that's why adding something in later is not a major 
issue?


--

Thanks,

Matt



[openstack-dev] [Glance] No team meeting during the summit week

2018-05-17 Thread Erno Kuvaja
As the majority of the team is in Vancouver for the summit, we will cancel
next week's meeting (24th of May). The Glance team will have its next
meeting in IRC on Thursday the 31st.

Thanks,
Erno "jokke" Kuvaja



Re: [openstack-dev] [all][api] late addition to forum schedule

2018-05-17 Thread Matt Riedemann

On 5/17/2018 11:02 AM, Doug Hellmann wrote:

After some discussion on twitter and IRC, we've added a new session to
the Forum schedule for next week to discuss our options for cleaning up
some of the design/technical debt in our REST APIs.


Not to troll too hard here, but it's kind of frustrating to see that 
twitter trumps people actually proposing sessions on time and then 
having them be rejected.



The session description:

   The introduction of microversions in OpenStack APIs added a
   mechanism to incrementally change APIs without breaking users.
   We're now at the point where people would like to start making
   old things go away, which means we need to hammer out a plan and
   potentially put it forward as a community goal.

[1]https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21881/api-debt-cleanup


This also came up at the Pike PTG in Atlanta:

https://etherpad.openstack.org/p/ptg-architecture-workgroup

See the "raising the minimum microversion" section. The TODO was Ironic 
was going to go off and do this and see how much people freaked out. 
What's changed since then besides that not happening? Since I'm not on 
twitter, I don't know what new thing prompted this.


--

Thanks,

Matt



Re: [openstack-dev] [cyborg] [nova] Cyborg quotas

2018-05-17 Thread Matt Riedemann

On 5/17/2018 3:36 PM, Nadathur, Sundar wrote:
This applies only to the resources that Nova handles, IIUC, which does 
not handle accelerators. The generic method that Alex talks about is 
obviously preferable but, if that is not available in Rocky, is the 
filter an option?


If nova isn't creating accelerator resources managed by cyborg, I have 
no idea why nova would be doing quota checks on those types of 
resources. And no, I don't think adding a scheduler filter to nova for 
checking accelerator quota is something we'd add either. I'm not sure 
that would even make sense - the quota for the resource is per tenant, 
not per host is it? The scheduler filters work on a per-host basis.


Like any other resource in openstack, the project that manages that 
resource should be in charge of enforcing quota limits for it.
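
A per-tenant quota check of that kind, owned by the service that manages the resource, can be sketched in a few lines. Names and semantics here are purely illustrative, not Cyborg's actual API:

```python
class QuotaExceeded(Exception):
    pass

class AcceleratorQuota:
    """Per-project (not per-host) accelerator quota bookkeeping."""

    def __init__(self, limits):
        self.limits = limits        # {project_id: max accelerators}
        self.usage = {}             # {project_id: current count}

    def claim(self, project_id, requested=1):
        # Reject the claim if it would push the tenant over its limit.
        used = self.usage.get(project_id, 0)
        limit = self.limits.get(project_id, 0)
        if used + requested > limit:
            raise QuotaExceeded(
                "%s: %d used + %d requested > limit %d"
                % (project_id, used, requested, limit))
        self.usage[project_id] = used + requested

quota = AcceleratorQuota({"tenant-a": 2})
quota.claim("tenant-a")        # 1 of 2
quota.claim("tenant-a")        # 2 of 2; a third claim would raise
```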


--

Thanks,

Matt



Re: [openstack-dev] [docs] Automating documentation the tripleo way?

2018-05-17 Thread Zane Bitter

On 16/05/18 13:11, Ben Nemec wrote:



On 05/16/2018 10:39 AM, Petr Kovar wrote:

Hi all,

In the past few years, we've seen several efforts aimed at automating
procedural documentation, mostly centered around the OpenStack
installation guide. This idea to automatically produce and verify
installation steps or similar procedures was mentioned again at the last
Summit (https://etherpad.openstack.org/p/SYD-install-guide-testing).

It was brought to my attention that the tripleo team has been working on
automating some of the tripleo deployment procedures, using a Bash script
with included comment lines to supply some RST-formatted narrative, for
example:

https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/overcloud-prep-images/templates/overcloud-prep-images.sh.j2 



The Bash script can then be converted to RST, e.g.:

https://thirdparty.logs.rdoproject.org/jenkins-tripleo-quickstart-queens-rdo_trunk-baremetal-dell_fc430_envB-single_nic_vlans-27/docs/build/ 



Source Code:

https://github.com/openstack/tripleo-quickstart-extras/tree/master/roles/collect-logs 
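
The comment-to-narrative conversion described above can be sketched roughly as follows, assuming lines starting with "## " carry the RST text and everything else is shell; the real collect-logs role is considerably more involved:

```python
def bash_to_rst(script_text):
    # "## " lines become narrative; runs of shell lines become an
    # rst literal block. (Rough sketch, not the actual implementation.)
    out, code = [], []
    def flush():
        if code:
            out.append("::\n")
            out.extend("    " + l for l in code)
            out.append("")
            code.clear()
    for line in script_text.splitlines():
        if line.startswith("## "):
            flush()
            out.append(line[3:])
        elif line.strip():
            code.append(line)
    flush()
    return "\n".join(out)

script = """\
## Upload images to glance.
openstack image create overcloud-full
"""
print(bash_to_rst(script))
```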



I really liked this approach and while I don't want to sound like selling
other people's work, I'm wondering if there is still an interest among the
broader OpenStack community in automating documentation like this?


I think it's worth noting that TripleO doesn't even use the generated 
docs.  The main reason is that we tried this back in the 
tripleo-incubator days and it was not the silver bullet for good docs 
that it appears to be on the surface.  As the deployment scripts grow 
features and more complicated logic it becomes increasingly difficult to 
write inline documentation that is readable.  In the end, the 
tripleo-incubator docs had a number of large bash snippets that referred 
to internal variables and such.  It wasn't actually good documentation.


FWIW in the early days of Heat I had an implementation that did this in 
the opposite direction: the script was extracted from the (rst) 
documentation, instead of extracting the documentation from the script. 
This is the way you need to do it to work around the kinds of concerns 
you mention. (Bash will try to execute literally anything that isn't a 
comment; rst makes it much easier to overload the meanings of different 
constructs.)


Basically how it worked was that everything that was indented by 4 
spaces in the rst file was extracted into the script - this could be a 
code block (which of course appeared as a code block in the 
documentation) or a comment block (which didn't). This enabled you to 
hide stuff that is boring but necessary to make the script work from the 
documentation. You could also do actual comments or code blocks that 
didn't appear in the script (e.g. for giving alternate implementations) 
by indenting only 2 spaces.
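
That extraction rule (4-space indent goes into the script, everything else stays documentation-only) is easy to approximate; the real extraction was a sed script, so treat this Python version as a sketch only:

```python
def extract_script(rst_text):
    # Lines indented by 4 spaces become the script (dedented);
    # 2-space-indented blocks and plain prose stay docs-only.
    out = []
    for line in rst_text.splitlines():
        if line.startswith("    "):
            out.append(line[4:])
    return "\n".join(out)

doc = """\
First install the client::

    # install the client
    pip install python-heatclient

Docs-only alternative::

  pip install some-other-client
"""
print(extract_script(doc))
```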


The actual extraction was done by this fun sed script:
http://git.openstack.org/cgit/openstack/heat/plain/tools/rst2script.sed?id=95e5ed067096ff52bbcd6c49146b74e1d59d2d3f

Here's the getting started guide we wrote for Heat using this:
http://git.openstack.org/cgit/openstack/heat/plain/docs/GettingStarted.rst?id=c0c1768e4a2b441ef286fb49c60419be3fe80786

In the end we didn't keep it around. I think mostly because we weren't 
able to actually run the script in the gate at the time (2012), and 
because after Heat support was added to devstack the getting started 
guide essentially reduced to 'use devstack' (did I mention it was 
2012?). So we didn't gain any long term experience in whether this is a 
good idea or not, although we did maintain it somewhat successfully for 
a year. But if you're going to try to do something similar then I'd 
recommend this method as a starting point.


cheers,
Zane.

When we moved to instack-undercloud to drive TripleO deployments we also 
moved to a more traditional hand-written docs repo.  Both options have 
their benefits and drawbacks, but neither absolves the development team 
of their responsibility to curate the docs.  IME the inline method 
actually makes it harder to do this because it tightly couples your code 
and docs in a very inflexible way.


/2 cents

-Ben



Re: [openstack-dev] [cyborg] [nova] Cyborg quotas

2018-05-17 Thread Nadathur, Sundar

Hi all,
    Thanks for all the feedback. Please see below.

2018-05-17 1:24 GMT+08:00 Jay Pipes:


   Placement already stores usage information for all allocations of
   resources. There is already even a /usages API endpoint that you can
   specify a project and/or user:

   https://developer.openstack.org/api-ref/placement/#list-usages
   

   I see no reason not to use it.

 This does not seem to be per-project (per-tenant). Given a tenant ID 
and a resource class, we want to get usages of that RC by that tenant. 
Please LMK if I misunderstood something.
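
For what it's worth, the endpoint does appear to support per-tenant queries: project_id is a query parameter on GET /usages (with an optional user_id), as of placement microversion 1.9. A sketch of the request URL a per-tenant check could issue, with auth and microversion headers omitted:

```python
def usages_url(base, project_id, user_id=None):
    # GET /usages?project_id=... returns usage keyed by resource
    # class; a caller would then read one class out of the body,
    # e.g. body["usages"].get("CUSTOM_FPGA", 0).
    url = "%s/usages?project_id=%s" % (base, project_id)
    if user_id:
        url += "&user_id=%s" % user_id
    return url

print(usages_url("http://placement", "abc123"))
```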


As Matt mentioned, Nova does not handle accelerators and presumably 
would not handle quotas for them either.


On 5/16/2018 11:34 PM, Alex Xu wrote:

   2018-05-17 1:24 GMT+08:00 Jay Pipes:

   []

   There is already actually a spec to use placement for quota
   usage checks in Nova here:

   https://review.openstack.org/#/c/509042/
   


   FYI, I'm working on a spec which append to that spec. It's about
   counting quota for the resource class(GPU, custom RC, etc) other
   than nova built-in resources(cores, ram). It should be able to count
   the resource classes which are used by cyborg. But yes, we probably
   should answer Matt's question first, whether we should let Nova
   count quota instead of Cyborg.


Here is the link: https://review.openstack.org/#/c/569011/


Alex, is this expected to be implemented by Rocky?



Probably best to have a look at that and see if it will end up
meeting your needs.

  * Cyborg provides a filter for the Nova scheduler, which checks
    whether the project making the request has exceeded its own quota.


Quota checks happen before Nova's scheduler gets involved, so
having a scheduler filter handle quota usage checking is
pretty much a non-starter.

This applies only to the resources that Nova handles, IIUC, which does 
not handle accelerators. The generic method that Alex talks about is 
obviously preferable but, if that is not available in Rocky, is the 
filter an option?



I'll have a look at the patches you've proposed and comment there.


Thanks!



Best,
-jay



Regards,
Sundar


[openstack-dev] [nova] FYI on changes that might impact out of tree scheduler filters

2018-05-17 Thread Matt Riedemann
CERN has upgraded to Cells v2 and is doing performance testing of the 
scheduler, and was reporting some things today which got us back to this 
bug [1]. So I've started pushing some patches related to this but also 
related to an older blueprint I created [2]. In summary, we do quite a 
bit of DB work just to load up a list of instance objects per host that 
the in-tree filters don't even use.


The first change [3] is a simple optimization to avoid the default joins 
on the instance_info_caches and security_groups tables. If you have out 
of tree filters that, for whatever reason, rely on the 
HostState.instances objects to have info_cache or security_groups set, 
they'll continue to work, but will have to round-trip to the DB to 
lazy-load the fields, which is going to be a performance penalty on that 
filter. See the change for details.


The second change in the series [4] is more drastic in that we'll do 
away with pulling the full Instance object per host, which means only a 
select set of optional fields can be lazy-loaded [5], and the rest will 
result in an exception. The patch currently has a workaround config 
option to continue doing things the old way if you have out of tree 
filters that rely on this, but for good citizens with only in-tree 
filters, you will get a performance improvement during scheduling.
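
A toy model of that second change: only a whitelisted set of fields may be lazy-loaded, and touching anything else raises. The field names below are illustrative, not the actual list in [5]:

```python
# Fields this slimmed-down view is still allowed to fetch on demand.
ALLOWED_LAZY = {"flavor", "metadata", "system_metadata"}

class SlimInstance:
    def __init__(self, **fields):
        self.__dict__.update(fields)

    def __getattr__(self, name):
        # Called only for fields not already loaded.
        if name in ALLOWED_LAZY:
            value = self._lazy_load(name)   # pretend DB round-trip
            setattr(self, name, value)
            return value
        raise AttributeError("field %r is not loaded and cannot be "
                             "lazy-loaded on this object" % name)

    def _lazy_load(self, name):
        return "<loaded %s>" % name

inst = SlimInstance(uuid="abc", host="cmp1")
print(inst.flavor)      # whitelisted: lazy-loads fine
                        # inst.info_cache would raise AttributeError
```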


There are some other things we can do to optimize more of this flow, but 
this email is just about the ones that have patches up right now.


[1] https://bugs.launchpad.net/nova/+bug/1737465
[2] 
https://blueprints.launchpad.net/nova/+spec/put-host-manager-instance-info-on-a-diet

[3] https://review.openstack.org/#/c/569218/
[4] https://review.openstack.org/#/c/569247/
[5] 
https://github.com/openstack/nova/blob/de52fefa1fd52ccaac6807e5010c5f2a2dcbaab5/nova/objects/instance.py#L66


--

Thanks,

Matt



[openstack-dev] [security sig] No meeting May 24th

2018-05-17 Thread Gage Hugo
Hello,

Due to members attending the OpenStack summit in Vancouver, we will be
canceling the Security SIG meeting on May 24th.


[openstack-dev] [manila] manila operator's feedback forum etherpad available

2018-05-17 Thread Tom Barron
Next week at the Summit there is a forum session dedicated to Manila 
opertors' feedback on Thursday from 1:50-2:30pm [1] for which we have 
started an etherpad [2].  Please come and help manila developers do 
the right thing!  We're particularly interested in experiences running 
the OpenStack share service at scale and overcoming any obstacles to 
deployment but are interested in getting any and all feedback from 
real deployments so that we can tailor our development and maintenance 
efforts to real world needs.


Please feel free and encouraged to add to the etherpad starting now.

See you there!

-- Tom Barron
  Manila PTL
  irc: tbarron

[1] 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21780/manila-ops-feedback-running-at-scale-overcoming-barriers-to-deployment
[2] https://etherpad.openstack.org/p/YVR18-manila-forum-ops-feedback



[openstack-dev] [manila] no community meeting Thurs 24 May 2018

2018-05-17 Thread Tom Barron
There will be no Manila weekly meeting, Thursday May 24, given the 
Vancouver Summit is going on that week.


-- Tom Barron



[openstack-dev] [all][api] POST /api-sig/news

2018-05-17 Thread Michael McCune
Greetings OpenStack community,

Today's meeting was brief, primarily focused on planning for the
summit sessions[7][8] that the SIG will host and facilitate.

The first session[7] will be a Birds of a Feather (BoF) gathering
where the topics will be determined by the attendees. One topic that
will surely make that list is the GraphQL proof of concept for Neutron
that has been discussed on the mailing list[9].

The second session[8] will be a directed discussion addressing
technical debt in the REST APIs of OpenStack.  We're now at the point
where people would like to start removing old code. This session will
give interested parties details about how they can leverage
microversions and the guidelines of the SIG to reduce their debt, drop
old functionality, and improve the consistency of their APIs. It will
also clarify what it means when we bump the minimum microversion for a
service in the future and discuss plans for creating an OpenStack
community goal.
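
Mechanically, "bumping the minimum microversion" means requests below the new floor are rejected outright. A rough sketch, with invented version numbers (real services parse the header in their WSGI middleware):

```python
MIN_VERSION = (2, 10)   # the new floor after the bump
MAX_VERSION = (2, 60)

def check_version(headers):
    # Header looks like "OpenStack-API-Version: compute 2.53".
    raw = headers.get("OpenStack-API-Version", "")
    try:
        requested = tuple(int(p) for p in raw.split()[-1].split("."))
    except (ValueError, IndexError):
        return 400, "invalid or missing version header"
    if requested < MIN_VERSION or requested > MAX_VERSION:
        return 406, "version %s not supported" % (raw.split()[-1],)
    return 200, "ok"

print(check_version({"OpenStack-API-Version": "compute 2.53"}))
```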

For both sessions, the SIG has aligned itself towards helping
coordinate discussions, clear up misunderstandings, and generally be
helpful in ensuring that all voices are heard and cross-cutting
concerns are addressed. If you are heading to summit, we hope to see
you there!

There being no recent changes to pending guidelines nor to bugs, we
ended the meeting early.

As always if you're interested in helping out, in addition to coming
to the meetings, there's also:

* The list of bugs [5] indicates several missing or incomplete guidelines.
* The existing guidelines [2] always need refreshing to account for
changes over time. If you find something that's not quite right,
submit a patch [6] to fix it.
* Have you done something for which you think guidance would have made
things easier but couldn't find any? Submit a patch and help others
[6].

# Newly Published Guidelines

None

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None

# Guidelines Currently Under Review [3]

* Update parameter names in microversion sdk spec
  https://review.openstack.org/#/c/557773/

* Add API-schema guide (still being defined)
  https://review.openstack.org/#/c/524467/

* A (shrinking) suite of several documents about doing version and
service discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet
ready for review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs
that you are developing or changing, please address your concerns in
an email to the OpenStack developer mailing list[1] with the tag
"[api]" in the subject. In your email, you should include any relevant
reviews, links, and comments to help guide the discussion of the
specific challenge you are facing.

To learn more about the API SIG mission and the work we do, see our
wiki page [4] and guidelines [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] https://bugs.launchpad.net/openstack-api-wg
[6] https://git.openstack.org/cgit/openstack/api-wg
[7] 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21798/api-special-interest-group-session
[8] 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21881/api-debt-cleanup
[9] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130219.html

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg



Re: [openstack-dev] [docs] Automating documentation the tripleo way?

2018-05-17 Thread Wesley Hayutin
On Thu, May 17, 2018 at 10:22 AM Petr Kovar  wrote:

> On Wed, 16 May 2018 13:26:46 -0600
> Wesley Hayutin  wrote:
>
> > On Wed, May 16, 2018 at 3:05 PM Doug Hellmann 
> wrote:
> >
> > > Excerpts from Wesley Hayutin's message of 2018-05-16 12:51:25 -0600:
> > > > On Wed, May 16, 2018 at 2:41 PM Doug Hellmann  >
> > > wrote:
> > > >
> > > > > Excerpts from Petr Kovar's message of 2018-05-16 17:39:14 +0200:
> > > > > > Hi all,
> > > > > >
> > > > > > In the past few years, we've seen several efforts aimed at
> automating
> > > > > > procedural documentation, mostly centered around the OpenStack
> > > > > > installation guide. This idea to automatically produce and verify
> > > > > > installation steps or similar procedures was mentioned again at
> the
> > > last
> > > > > > Summit (https://etherpad.openstack.org/p/SYD-install-guide-testing).
> > > > > >
> > > > > > It was brought to my attention that the tripleo team has been
> > > working on
> > > > > > automating some of the tripleo deployment procedures, using a
> Bash
> > > script
> > > > > > with included comment lines to supply some RST-formatted
> narrative,
> > > for
> > > > > > example:
> > > > > >
> > > > > > https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/overcloud-prep-images/templates/overcloud-prep-images.sh.j2
> > > > > >
> > > > > > The Bash script can then be converted to RST, e.g.:
> > > > > >
> > > > > > https://thirdparty.logs.rdoproject.org/jenkins-tripleo-quickstart-queens-rdo_trunk-baremetal-dell_fc430_envB-single_nic_vlans-27/docs/build/
> > > > > >
> > > > > > Source Code:
> > > > > >
> > > > > > https://github.com/openstack/tripleo-quickstart-extras/tree/master/roles/collect-logs
> > > > > >
> > > > > > I really liked this approach and while I don't want to sound like
> > > selling
> > > > > > other people's work, I'm wondering if there is still an interest
> > > among
> > > > > the
> > > > > > broader OpenStack community in automating documentation like
> this?
> > > > > >
> > > > > > Thanks,
> > > > > > pk
> > > > > >
> > > > >
> > > > > Weren't the folks doing the training-labs or training-guides
> taking a
> > > > > similar approach? IIRC, they ended up implementing what amounted to
> > > > > their own installer for OpenStack, and then ended up with all of
> the
> > > > > associated upgrade and testing burden.
> > > > >
> > > > > I like the idea of trying to use some automation from this, but I
> > > wonder
> > > > > if we'd be better off extracting data from other tools, rather than
> > > > > building a new one.
> > > > >
> > > > > Doug
> > > > >
> > > >
> > > > So there really isn't anything new to create, the work is done and
> > > executed
> > > > on every tripleo change that runs in rdo-cloud.
> > >
> > > It wasn't clear what Petr was hoping to get. Deploying with TripleO is
> > > only one way to deploy, so we wouldn't be able to replace the current
> > > installation guides with the results of this work. It sounds like
> that's
> > > not the goal, though.
>
>
> Yes, I wasn't very clear on the goals as I didn't want to make too many
> assumptions before learning about technical details from other people.
> Ben's comments made me realize this approach would probably be best suited
> for generating documents such as quick start guides or tutorials that are
> procedural, yet they don't aim at describing multiple use cases.
>
>
> > > >
> > > > Instead of dismissing the idea upfront I'm more inclined to set an
> > > > achievable small step to see how well it works.  My thought would be
> to
> > > > focus on the upcoming all-in-one installer and the automated doc
> > > generated
> > > > with that workflow.  I'd like to target publishing the all-in-one
> tripleo
> > > > installer doc to [1] for Stein and of course a section of
> tripleo.org.
> > >
> > > As an official project, why is TripleO still publishing docs to its own
> > > site? That's not something we generally encourage.
> > >
> > > That said, publishing a new deployment guide based on this technique
> > > makes sense in general. What about Ben's comments elsewhere in the
> > > thread?
> > >
> >
> > I think Ben is referring to an older implementation and a slightly
> > different design but still has some points that we would want to be
> mindful
> > of.   I think this is a worthy effort to take another pass at this
> > regardless to be honest as we've found a good combination of interested
> > folks and sometimes the right people make all the difference.
> >
> > My personal opinion is that I'm not expecting the automated doc
> generation
> > to be upload ready to a doc server after each run.  I do expect it to do
> > 95% of the work, and to help keep the doc up to date with what is
> executed
> > in the latest releases of TripleO.
>
>
> Would it make sense to consider a bot automatically creating patches
> 

[openstack-dev] [neutron] [fwaas] Neutron FWaaS weekly team meeting cancelled on May 24.

2018-05-17 Thread Sridar Kandaswamy (skandasw)
Hi All:

With the Summit at Vancouver, we will cancel the FWaaS weekly meeting for May 
24 14:00 UTC. We will resume as usual from May 31.

Thanks

Sridar
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] late addition to forum schedule

2018-05-17 Thread Doug Hellmann
After some discussion on twitter and IRC, we've added a new session to
the Forum schedule for next week to discuss our options for cleaning up
some of the design/technical debt in our REST APIs. It's early days in
the conversation, but we wanted to take advantage of our time together
in person to brainstorm about how to do something like this. If you're
interested, please plan to attend the session on Wednesday at 4:40 [1].

The session description:

  The introduction of microversions in OpenStack APIs added a
  mechanism to incrementally change APIs without breaking users.
  We're now at the point where people would like to start making
  old things go away, which means we need to hammer out a plan and
  potentially put it forward as a community goal.

[1] 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21881/api-debt-cleanup


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Bug deputy

2018-05-17 Thread Gary Kotton
Thanks!

From: James Anziano 
Reply-To: OpenStack List 
Date: Thursday, May 17, 2018 at 6:21 PM
To: OpenStack List 
Cc: OpenStack List 
Subject: Re: [openstack-dev] [neutron] Bug deputy

Hey Gary, my turn is coming up soon (week of June 4th), I can jump the line a 
bit and cover you if you or anyone can cover my currently assigned week.

Thanks,
 - James Anziano

- Original message -
From: Gary Kotton 
To: OpenStack List 
Cc:
Subject: [openstack-dev] [neutron] Bug deputy
Date: Thu, May 17, 2018 1:59 AM



Hi,

An urgent matter has come up this week. If possible, can someone please replace 
me.

Sorry

Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Bug deputy

2018-05-17 Thread James Anziano
Hey Gary, my turn is coming up soon (week of June 4th), I can jump the line a bit and cover you if you or anyone can cover my currently assigned week.
 
Thanks,
 - James Anziano
 
- Original message -
From: Gary Kotton 
To: OpenStack List 
Cc:
Subject: [openstack-dev] [neutron] Bug deputy
Date: Thu, May 17, 2018 1:59 AM
Hi,
An urgent matter has come up this week. If possible, can someone please replace me.
Sorry
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects

2018-05-17 Thread William M Edmonds


Doug Hellmann  wrote on 05/14/2018 08:52:08 AM:
>
... snip ...
>
> We still have about 50 open patches related to adding the
> lower-constraints test job. I'll keep those open until the third
> milestone of the Rocky development cycle, and then abandon the rest to
> clear my gerrit view so it is usable again.
>
> If you want to add lower-constraints tests to your project and have
> an open patch in the list [1], please take it over and fix the
> settings then approve the patch (the fix usually involves making
> the values in lower-constraints.txt match the values in the various
> requirements.txt files).
>
> If you don't want the job, please leave a comment on the patch to
> tell me and I will abandon it.
>
> Doug
>
> [1] https://review.openstack.org/#/q/topic:requirements-stop-syncing
+status:open
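The alignment fix Doug describes — making each lower-constraints.txt pin match the lower bound in requirements.txt — can be sketched with a small stdlib-only checker. This is a hypothetical helper, not a tool from the thread; the package names below are illustrative:

```python
import re

def lower_bounds(requirement_lines):
    """Map package name -> lower-bound version from lines like 'pkg>=1.2.0' or 'pkg==1.2.0'."""
    bounds = {}
    for line in requirement_lines:
        line = line.split("#")[0].strip()  # drop trailing comments
        name = re.match(r"([A-Za-z0-9._-]+)", line)
        bound = re.search(r"(?:>=|==)\s*([0-9][\w.]*)", line)
        if name and bound:
            bounds[name.group(1).lower()] = bound.group(1)
    return bounds

def mismatches(requirements, lower_constraints):
    """Packages whose lower-constraints pin differs from the requirements lower bound."""
    req = lower_bounds(requirements)
    lc = lower_bounds(lower_constraints)
    return {name: (req[name], lc[name])
            for name in req.keys() & lc.keys() if req[name] != lc[name]}

# A pin that drifted below the declared lower bound would be flagged:
reqs = ["pbr!=2.1.0,>=2.0.0", "oslo.config>=5.2.0"]
lcs = ["pbr==2.0.0", "oslo.config==5.1.0"]
assert mismatches(reqs, lcs) == {"oslo.config": ("5.2.0", "5.1.0")}
```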

I believe we're stuck for nova-powervm [1] and ceilometer-powervm [2]
until/unless nova and ceilometer, respectively, post releases to pypi. Is
anyone working on that?

Even then, I don't love what we've had to do to get this working for
networking-powervm [3][4], which is what we'd do for nova-powervm and
ceilometer-powervm as well once they're on pypi. When you consider master,
it's a really nasty hack (including a non-master version in
requirements.txt because obviously master can't be on pypi). It's better
than not testing, but if someone has a better idea...

And I'd appreciate -infra reviews on [4] since I have no idea how to ensure
that's doing what it's intended to do.

[1] https://review.openstack.org/#/c/555964/
[2] https://review.openstack.org/#/c/555358/
[3] https://review.openstack.org/#/c/555936/
[4] https://review.openstack.org/#/c/569104/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] Style guide for OpenStack documentation

2018-05-17 Thread Jeremy Stanley
On 2018-05-17 16:35:36 +0200 (+0200), Petr Kovar wrote:
> On Wed, 16 May 2018 17:05:15 +
> Jeremy Stanley  wrote:
> 
> > On 2018-05-16 18:24:45 +0200 (+0200), Petr Kovar wrote:
> > [...]
> > > I'd like to propose replacing the reference to the IBM Style Guide
> > > with a reference to the developerWorks editorial style guide
> > > (https://www.ibm.com/developerworks/library/styleguidelines/).
> > > This lightweight version comes from the same company and is based
> > > on the same guidelines, but most importantly, it is available for
> > > free.
> > [...]
> > 
> > I suppose replacing a style guide nobody can access with one
> > everyone can (modulo legal concerns) is a step up. Still, are there
> > no style guides published under an actual free/open license? If
> > https://www.ibm.com/developerworks/community/terms/use/ is correct
> > then even accidental creation of a derivative work might be
> > prosecuted as copyright infringement.
> 
> 
> We don't really plan on reusing content from that site, just referring to
> it, so is it a concern?
[...]

A style guide is a tool. Free and open collaboration needs free
(libre, not merely gratis) tools, and that doesn't just mean
software. If, down the road, you want an OpenStack Documentation
Style Guide which covers OpenStack-specific concerns to quote or
transclude information from a more thorough guide, that becomes a
derivative work and is subject to the licensing terms for the guide
from which you're copying.

There are a lot of other parallels between writing software and
writing prose here beyond mere intellectual property concerns too.
Saying that OpenStack Documentation is free and open, but then
endorsing an effectively proprietary guide as something its authors
should read and follow, sends a mixed message as to our position on
open documentation (as a style guide is of course also documentation
in its own right). On the other hand, recommending use of a style
guide which is available under a free/libre open source license or
within the public domain resonates with our ideals and principles as
a community, serving only to strengthen our position on openness in
all its endeavors (including documentation).
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] Style guide for OpenStack documentation

2018-05-17 Thread Petr Kovar
On Wed, 16 May 2018 17:05:15 +
Jeremy Stanley  wrote:

> On 2018-05-16 18:24:45 +0200 (+0200), Petr Kovar wrote:
> [...]
> > I'd like to propose replacing the reference to the IBM Style Guide
> > with a reference to the developerWorks editorial style guide
> > (https://www.ibm.com/developerworks/library/styleguidelines/).
> > This lightweight version comes from the same company and is based
> > on the same guidelines, but most importantly, it is available for
> > free.
> [...]
> 
> I suppose replacing a style guide nobody can access with one
> everyone can (modulo legal concerns) is a step up. Still, are there
> no style guides published under an actual free/open license? If
> https://www.ibm.com/developerworks/community/terms/use/ is correct
> then even accidental creation of a derivative work might be
> prosecuted as copyright infringement.


We don't really plan on reusing content from that site, just referring to
it, so is it a concern?

 
> http://www.writethedocs.org/guide/writing/style-guides/#selecting-a-good-style-guide-for-you
> mentions some more aligned with our community's open ideals, such as
> the 18F Content Guide (public domain), SUSE Documentation Style
> Guide (GFDL), GNOME Documentation Style Guide (GFDL), and the
> Writing Style Guide and Preferred Usage for DOD Issuances (public
> domain). Granted adopting one of those might lead to a need to
> overhaul some aspects of style in existing documents, so I can
> understand it's not a choice to be made lightly. Still, we should
> always consider embracing open process, and that includes using
> guidelines which we can freely derive and republish as needed.


I would be interested in hearing what other people think about that, but I
would strongly prefer to stick with the existing "publisher" as that creates
fewer issues than switching to a completely different style guide and
then having to adjust our guidelines based on the IBM guide, etc.

Thanks,
pk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] Automating documentation the tripleo way?

2018-05-17 Thread Petr Kovar
On Wed, 16 May 2018 13:26:46 -0600
Wesley Hayutin  wrote:

> On Wed, May 16, 2018 at 3:05 PM Doug Hellmann  wrote:
> 
> > Excerpts from Wesley Hayutin's message of 2018-05-16 12:51:25 -0600:
> > > On Wed, May 16, 2018 at 2:41 PM Doug Hellmann 
> > wrote:
> > >
> > > > Excerpts from Petr Kovar's message of 2018-05-16 17:39:14 +0200:
> > > > > Hi all,
> > > > >
> > > > > In the past few years, we've seen several efforts aimed at automating
> > > > > procedural documentation, mostly centered around the OpenStack
> > > > > installation guide. This idea to automatically produce and verify
> > > > > installation steps or similar procedures was mentioned again at the
> > last
> > > > > Summit (https://etherpad.openstack.org/p/SYD-install-guide-testing).
> > > > >
> > > > > It was brought to my attention that the tripleo team has been
> > working on
> > > > > automating some of the tripleo deployment procedures, using a Bash
> > script
> > > > > with included comment lines to supply some RST-formatted narrative,
> > for
> > > > > example:
> > > > >
> > > > > https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/overcloud-prep-images/templates/overcloud-prep-images.sh.j2
> > > > >
> > > > > The Bash script can then be converted to RST, e.g.:
> > > > >
> > > > > https://thirdparty.logs.rdoproject.org/jenkins-tripleo-quickstart-queens-rdo_trunk-baremetal-dell_fc430_envB-single_nic_vlans-27/docs/build/
> > > > >
> > > > > Source Code:
> > > > >
> > > > > https://github.com/openstack/tripleo-quickstart-extras/tree/master/roles/collect-logs
> > > > >
> > > > > I really liked this approach and while I don't want to sound like
> > selling
> > > > > other people's work, I'm wondering if there is still an interest
> > among
> > > > the
> > > > > broader OpenStack community in automating documentation like this?
> > > > >
> > > > > Thanks,
> > > > > pk
> > > > >
> > > >
> > > > Weren't the folks doing the training-labs or training-guides taking a
> > > > similar approach? IIRC, they ended up implementing what amounted to
> > > > their own installer for OpenStack, and then ended up with all of the
> > > > associated upgrade and testing burden.
> > > >
> > > > I like the idea of trying to use some automation from this, but I
> > wonder
> > > > if we'd be better off extracting data from other tools, rather than
> > > > building a new one.
> > > >
> > > > Doug
> > > >
> > >
> > > So there really isn't anything new to create, the work is done and
> > executed
> > > on every tripleo change that runs in rdo-cloud.
> >
> > It wasn't clear what Petr was hoping to get. Deploying with TripleO is
> > only one way to deploy, so we wouldn't be able to replace the current
> > installation guides with the results of this work. It sounds like that's
> > not the goal, though.


Yes, I wasn't very clear on the goals as I didn't want to make too many
assumptions before learning about technical details from other people.
Ben's comments made me realize this approach would probably be best suited
for generating documents such as quick start guides or tutorials that are
procedural, yet they don't aim at describing multiple use cases.


> > >
> > > Instead of dismissing the idea upfront I'm more inclined to set an
> > > achievable small step to see how well it works.  My thought would be to
> > > focus on the upcoming all-in-one installer and the automated doc
> > generated
> > > with that workflow.  I'd like to target publishing the all-in-one tripleo
> > > installer doc to [1] for Stein and of course a section of tripleo.org.
> >
> > As an official project, why is TripleO still publishing docs to its own
> > site? That's not something we generally encourage.
> >
> > That said, publishing a new deployment guide based on this technique
> > makes sense in general. What about Ben's comments elsewhere in the
> > thread?
> >
> 
> I think Ben is referring to an older implementation and a slightly
> different design but still has some points that we would want to be mindful
> of.   I think this is a worthy effort to take another pass at this
> regardless to be honest as we've found a good combination of interested
> folks and sometimes the right people make all the difference.
> 
> My personal opinion is that I'm not expecting the automated doc generation
> to be upload ready to a doc server after each run.  I do expect it to do
> 95% of the work, and to help keep the doc up to date with what is executed
> in the latest releases of TripleO.


Would it make sense to consider a bot automatically creating patches
with content updates that would be then curated and reviewed by the docs
contributors?


>  Also noting the doc used is a mixture
> of static and generated documentation which I think worked out quite well
> in order to not solely rely on what is executed in CI.
> 
> So again, my thought is to create a small achievable goal and 

[openstack-dev] [release] Release countdown for week R-14 and R-13, May 21 - June 1

2018-05-17 Thread Sean McGinnis
Here is the countdown content for the next two weeks, to cover while the Summit
takes place.

Development Focus
-

Work on new features should be well underway. The Rocky-2 milestone is coming
up quick.

Hopefully teams have good representation attending the Forum. This is a great
opportunity for getting feedback on existing and planned features and bringing
that feedback to the teams.

*Note* With the Summit/Forum taking place next week, the release team will not
be processing any normal release requests. Please ping us directly if something
comes up that cannot wait, but with the many distractions of the event, we want
to avoid releasing anything that could cause problems and require the attention
of those otherwise engaged.

General Information
---

Membership freeze coincides with milestone 2 [0]. This means projects that have
not done a release yet must do so by the second milestone to be included in the
Rocky release.

[0] https://releases.openstack.org/rocky/schedule.html#r-mf

In case you missed it last week, some projects still need to respond to the
mutable config [1] and mox removal [2] Rocky series goals. Just a reminder that
teams should respond to these goals, even if they do not trigger any work for
your specific project.

[1] https://storyboard.openstack.org/#!/story/2001545
[2] https://storyboard.openstack.org/#!/story/2001546

Upcoming Deadlines & Dates
--

Forum at OpenStack Summit in Vancouver: May 21-24
Rocky-2 Milestone: June 7

-- 
Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Cinder] Need suggestion to add split-logger functionality in cinder

2018-05-17 Thread Alhat, Neha
Hi All,

Problem description:

Monty Taylor added split-logger functionality in patch [1].
This functionality splits the log output across four different loggers:

* keystoneauth.session.request

* keystoneauth.session.response

* keystoneauth.session.body

* keystoneauth.session.request-id

I am working on enabling this split-logger functionality for cinder's 
interactions with its internal clients (glanceclient, keystoneclient, novaclient).

Steps followed to enable this functionality:

1. Register the configuration option 'split_loggers' in keystoneauth [2].

2. After registering the 'split_loggers' option in keystoneauth, for the cinder 
to novaclient interaction we need to set 'split_loggers=True/False' under the 
[nova] section of cinder.conf, so that the 'split_loggers' value is loaded from 
the [nova] section when the session is loaded [3].

Trying the same approach for the cinder to glanceclient interaction revealed 
one difference: glanceclient uses the 'load_from_options' method [4] to load 
its session, while novaclient uses the 'load_session_from_conf_options' 
method [3] from keystoneauth.

Impact:

For this, we need to register the keystoneauth session conf options under the 
[glance] section, which were earlier under the [default] section.
Please refer to the changes made for this [5].

Pros of using this approach:

1.   As we are setting the conf option in keystoneauth, it will load the value 
of 'split_loggers' directly from the conf file, so there is no need to pass the 
'split_loggers' value explicitly when loading the keystoneauth session.

Cons of using this approach:

1.   We need to register the session conf options under the [glance] section 
instead of [default].

We need opinions on this, as it changes the group for the session conf options 
from [default] to [glance].

[1]: https://review.openstack.org/#/c/505764/
[2]: http://paste.openstack.org/show/721071/
[3]: https://github.com/openstack/cinder/blob/master/cinder/compute/nova.py#L112
[4]: https://github.com/openstack/cinder/blob/master/cinder/image/glance.py#L104
[5]: http://paste.openstack.org/show/721095/
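Under the proposal in [5], the keystoneauth session options would live in per-service groups of cinder.conf rather than in [default]. A hypothetical fragment sketching the result — the 'split_loggers' option name comes from keystoneauth's session options, and the values here are purely illustrative:

```ini
[nova]
# keystoneauth session options registered under the service group;
# enables the keystoneauth.session.* split loggers for novaclient calls.
split_loggers = true

[glance]
# With the proposed change [5], the session options move here from [default].
split_loggers = true
```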

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [python3] flake8 and pycodestyle W60x warnings

2018-05-17 Thread Jeremy Stanley
On 2018-05-17 16:09:12 +0900 (+0900), IWAMOTO Toshihiro wrote:
[...]
> OpenStack CI's flake8 is pre pycodestyle age,

It's not "OpenStack CI's flake8" version. Nova's master branch is
getting flake8 transitively through its test-requirement on
hacking!=0.13.0,<0.14,>=0.12.0 which is causing it to select
hacking==0.12.0 (the only version between 0.12.0 and 0.14.0 is
0.13.0 which is explicitly skipped). In turn, that version of
hacking declares a requirement on flake8<2.6.0,>=2.5.4 which is
causing it to use flake8==2.5.5. As you noted, that depends on
pep8!=1.6.0,!=1.6.1,!=1.6.2,>=1.5.7 so pep8==1.7.1 gets used.
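The resolution chain above can be sketched with a small stdlib-only helper — a simplified stand-in for pip's real PEP 440 specifier matching, with illustrative version lists rather than the full release history:

```python
import operator

def parse(version):
    # Convert "2.5.4" into a comparable tuple (2, 5, 4).
    return tuple(int(part) for part in version.split("."))

def satisfies(version, spec):
    # spec is a comma-separated specifier like "!=0.13.0,<0.14,>=0.12.0".
    # Only ==, !=, <=, >=, <, > are handled; enough for this illustration.
    ops = {"==": operator.eq, "!=": operator.ne, "<=": operator.le,
           ">=": operator.ge, "<": operator.lt, ">": operator.gt}
    for clause in spec.split(","):
        for sym in ("==", "!=", "<=", ">=", "<", ">"):
            if clause.startswith(sym):
                if not ops[sym](parse(version), parse(clause[len(sym):])):
                    return False
                break
    return True

def pick(available, spec):
    # pip selects the highest available version satisfying the specifier.
    matching = [v for v in available if satisfies(v, spec)]
    return max(matching, key=parse) if matching else None

# hacking!=0.13.0,<0.14,>=0.12.0 selects 0.12.0 (0.13.0 is excluded,
# nothing else exists below 0.14).
assert pick(["0.12.0", "0.13.0", "1.1.0"], "!=0.13.0,<0.14,>=0.12.0") == "0.12.0"
# hacking 0.12.0's flake8<2.6.0,>=2.5.4 requirement then selects 2.5.5.
assert pick(["2.5.4", "2.5.5", "2.6.2"], "<2.6.0,>=2.5.4") == "2.5.5"
```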

> flake8 version isn't managed by g-r, but that's another story.
[...]

The reason we don't globally-constrain hacking, flake8 or other
static analyzers is that projects are going to want to comply with
new rules at their own individual paces; it's up to the Nova team to
decide when to move their master branch testing to new versions of
these. Per the example above if they upped their hacking cap to <1.2
they would get hacking==1.1.0 (the latest release) which would
install flake8==2.6.2 and so pycodestyle==2.0.0.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] No meeting on May 23rd

2018-05-17 Thread Ivan Kolodyazhny
Hi all,

Some of the team will be attending the OpenStack summit in Vancouver,
so I am canceling the weekly IRC meeting for the 23rd.


Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] [magnum] Magnum tempest fails with 400 bad request

2018-05-17 Thread Thomas Goirand
On 05/17/2018 09:49 AM, Tobias Urdin wrote:
> Hello,
> 
> I was interested in getting Magnum working in gate by getting @dms patch
> fixed and merged [1].
> 
> The installation goes fine on Ubuntu and CentOS however the tempest
> testing for Magnum fails on CentOS (it not available in Ubuntu).
> 
> 
> It seems to be related to authentication against keystone but I don't
> understand why, please see logs [2] [3]
> 
> 
> [1] https://review.openstack.org/#/c/367012/
> 
> [2]
> http://logs.openstack.org/12/367012/28/check/puppet-openstack-integration-4-scenario003-tempest-centos-7/3f5252b/logs/magnum/magnum-api.txt.gz#_2018-05-16_15_10_36_010
> 
> [3]
> http://logs.openstack.org/12/367012/28/check/puppet-openstack-integration-4-scenario003-tempest-centos-7/3f5252b/

From that log, you're getting a 404 from nova-api.

Response - Headers: {'status': '404', u'content-length': '113',
'content-location': 'https://[::1]:8774/v2.1/os-keypairs/default',
u'x-compute-request-id': 'req-35ae4651-186c-4f20-9143-f68f67b7d401',
u'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version',
u'server': 'Apache/2.4.6 (CentOS)', u'openstack-api-version': 'compute
2.1', u'connection': 'close', u'x-openstack-nova-api-version': '2.1',
u'date': 'Wed, 16 May 2018 15:10:33 GMT', u'content-type':
'application/json; charset=UTF-8', u'x-openstack-request-id':
'req-35ae4651-186c-4f20-9143-f68f67b7d401'}

but that seems fine because the request right after it succeeds. However,
just after that you're getting a 500 error from magnum-api a bit further on:

Response - Headers: {'status': '500', u'content-length': '149',
'content-location': 'https://[::1]:9511/clustertemplates',
u'openstack-api-maximum-version': 'container-infra 1.6', u'vary':
'OpenStack-API-Version', u'openstack-api-minimum-version':
'container-infra 1.1', u'server': 'Werkzeug/0.11.6 Python/2.7.5',
u'openstack-api-version': 'container-infra 1.1', u'date': 'Wed, 16 May
2018 15:10:36 GMT', u'content-type': 'application/json',
u'x-openstack-request-id': 'req-12c635c9-889a-48b4-91d4-ded51220ad64'}

With this body:

Body: {"errors": [{"status": 500, "code": "server", "links": [],
"title": "Bad Request (HTTP 400)", "detail": "Bad Request (HTTP 400)",
"request_id": ""}]}
2018-05-16 15:24:14.434432 | centos-7 | 2018-05-16 15:10:36,016
13619 DEBUG[tempest.lib.common.dynamic_creds] Clearing network:
{u'provider:physical_network': None, u'ipv6_address_scope': None,
u'revision_number': 2, u'port_security_enabled': True, u'mtu': 1400,
u'id': u'c26c237a-0583-4f72-8300-f87051080be7', u'router:external':
False, u'availability_zone_hints': [], u'availability_zones': [],
u'provider:segmentation_id': 35, u'ipv4_address_scope': None, u'shared':
False, u'project_id': u'31c5c1fbc46e4880b7e498e493700a50', u'status':
u'ACTIVE', u'subnets': [], u'description': u'', u'tags': [],
u'updated_at': u'2018-05-16T15:10:26Z', u'is_default': False,
u'qos_policy_id': None, u'name': u'tempest-setUp-2113966350-network',
u'admin_state_up': True, u'tenant_id':
u'31c5c1fbc46e4880b7e498e493700a50', u'created_at':
u'2018-05-16T15:10:26Z', u'provider:network_type': u'vxlan'}, subnet:
{u'service_types': [], u'description': u'', u'enable_dhcp': True,
u'tags': [], u'network_id': u'c26c237a-0583-4f72-8300-f87051080be7',
u'tenant_id': u'31c5c1fbc46e4880b7e498e493700a50', u'created_at':
u'2018-05-16T15:10:26Z', u'dns_nameservers': [], u'updated_at':
u'2018-05-16T15:10:26Z', u'ipv6_ra_mode': None, u'allocation_pools':
[{u'start': u'10.100.0.2', u'end': u'10.100.0.14'}], u'gateway_ip':
u'10.100.0.1', u'revision_number': 0, u'ipv6_address_mode': None,
u'ip_version': 4, u'host_routes': [], u'cidr': u'10.100.0.0/28',
u'project_id': u'31c5c1fbc46e4880b7e498e493700a50', u'id':
u'a7233852-e3f1-4129-b34e-c607aef5172e', u'subnetpool_id': None,
u'name': u'tempest-setUp-2113966350-subnet'}, router: {u'status':
u'ACTIVE', u'external_gateway_info': {u'network_id':
u'c6cf6d80-fcbb-46e6-aefd-17f41b5c57b1', u'enable_snat': True,
u'external_fixed_ips': [{u'subnet_id':
u'34e589e9-86d2-4f72-a0c3-7990406561b1', u'ip_address':
u'172.24.5.13'}]}, u'availability_zone_hints': [],
u'availability_zones': [], u'description': u'', u'tags': [],
u'tenant_id': u'31c5c1fbc46e4880b7e498e493700a50', u'created_at':
u'2018-05-16T15:10:27Z', u'admin_state_up': True, u'distributed': False,
u'updated_at': u'2018-05-16T15:10:29Z', u'ha': False, u'flavor_id':
None, u'revision_number': 2, u'routes': [], u'project_id':
u'31c5c1fbc46e4880b7e498e493700a50', u'id':
u'bdf13d72-c19c-4ad1-b57d-ed6da9c569b3', u'name':
u'tempest-setUp-2113966350-router'}

And right after that, we can only see clean-up calls (removing routers,
DELETE calls, etc.).

Looking at the magnum-api log shows issues in glanceclient just right
before the 500 error.

So, something's probably going on there, with a bad glanceclient
request. Having a look into magnum.conf doesn't show anything suspicious
concerning [glance_client] though, so I went to look into tempest.conf.
And there, it shows no 

[openstack-dev] Canceling QA office hour for next week

2018-05-17 Thread Ghanshyam Mann
Hi All,

As most of us will be at the Vancouver Summit next week, I am canceling
next week's QA office hour (Thursday, 24th May).

We will resume after the Summit, on Thursday, 31st May.

-gmann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] Automating documentation the tripleo way?

2018-05-17 Thread Roger Luethi

On 16.05.18 20:40, Doug Hellmann wrote:

Weren't the folks doing the training-labs or training-guides taking a
similar approach? IIRC, they ended up implementing what amounted to
their own installer for OpenStack, and then ended up with all of the
associated upgrade and testing burden.


training-labs uses its own installer because the project goal is to do 
the full deployment (that is, including the creation of appropriate VMs) 
in an automated fashion on all supported platforms (Linux, macOS, 
Windows). The scripts that are injected into the VMs follow the 
install-guide as closely as possible. We were pretty close to automating 
the translation from install-guide docs to shell scripts, but some 
issues remained (e.g., some scripts need guards waiting for services to 
come up in order to avoid race conditions; this is not documented in the 
install-guide).


Roger



[openstack-dev] [QA][Forum] QA onboarding session in Vancouver

2018-05-17 Thread Ghanshyam Mann
Hi All,

QA team is planning an onboarding session during the Vancouver Summit:

- 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21646/qa-project-onboarding
- Tuesday, May 22, 2018, 9:50am-10:30am
- Vancouver Convention Centre West - Level Two - Room 223

Details of this session are in the etherpad [1].

Apart from what is written in the etherpad, this session will be
open-ended, and anyone can bring up topics related to QA. Attendees can
interact with the QA developers about any help they need, or ways they
want to help QA.

Have a safe flight, and we look forward to meeting you there!

[1] https://etherpad.openstack.org/p/YVR18-forum-qa-onboarding-vancouver

-QA Team



Re: [openstack-dev] [tripleo] [barbican] [tc] key store in base services

2018-05-17 Thread Cédric Jeanneret


On 05/17/2018 10:18 AM, Bogdan Dobrelya wrote:
> On 5/17/18 9:58 AM, Thierry Carrez wrote:
>> Jeremy Stanley wrote:
>>> [...]
>>> As a community, we're likely to continue to make imbalanced
>>> trade-offs against relevant security features if we don't move
>>> forward and declare that some sort of standardized key storage
>>> solution is a fundamental component on which OpenStack services can
>>> rely. Being able to just assume that you can encrypt volumes in
>>> Swift, even as a means to further secure a TripleO undercloud, would
>>> be a step in the right direction for security-minded deployments.
>>>
>>> Unfortunately, I'm unable to find any follow-up summary on the
>>> mailing list from the aforementioned session, but recollection from
>>> those who were present (I had a schedule conflict at that time) was
>>> that a Castellan-compatible key store would at least be a candidate
>>> for inclusion in our base services list:
>>>
>>> https://governance.openstack.org/tc/reference/base-services.html
>>
>> Yes, last time this was discussed, there was lazy consensus that
>> adding "a Castellan-compatible secret store" would be a good addition
>> to the base services list if we wanted to avoid proliferation of
>> half-baked keystore implementations in various components.
>>
>> The two blockers were:
>>
>> 1/ castellan had to be made less Barbican-specific, offer at least one
>> other secrets store (Vault), and move under Oslo (done)
> 
> Back to the subject and tripleo underclouds running Barbican, using
> vault as a backend may be a good option, given that openshift supports
> [0] it as well for storing k8s secrets, and kubespray does [1] for
> vanilla k8s deployments, and that we have openshift/k8s-based control
> plane for openstack on the integration roadmap. So we'll highly likely
> end up running Barbican/Vault on undercloud anyway.
> 
> [0]
> https://blog.openshift.com/managing-secrets-openshift-vault-integration/
> [1]
> https://github.com/kubernetes-incubator/kubespray/blob/master/docs/vault.md
> 

That sounds lovely, especially since it allows the "secure storage" tech
to converge between projects.
On my own, I was considering some secure storage (custodia) in the
context of public TLS certificate storage/update/provisioning.
Having, by default, a native way to store the secrets used by the
overcloud deploy/lifecycle is a really good thing, and will prevent
leaks, hardcoded passwords in files, and so on (although, yeah, you'll
still need something to access barbican ;)).

>>
>> 2/ some projects (was it Designate ? Octavia ?) were relying on
>> advanced functions of Barbican not generally found in other secrets
>> store, like certificate generation, and so would prefer to depend on
>> Barbican itself, which confuses the messaging around the base service
>> addition a bit ("any Castellan-supported secret store as long as it's
>> Barbican")
>>
> 
> 

-- 
Cédric Jeanneret
Software Engineer
DFG:DF





Re: [openstack-dev] [tripleo] [barbican] [tc] key store in base services

2018-05-17 Thread Bogdan Dobrelya

On 5/17/18 9:58 AM, Thierry Carrez wrote:

Jeremy Stanley wrote:

[...]
As a community, we're likely to continue to make imbalanced
trade-offs against relevant security features if we don't move
forward and declare that some sort of standardized key storage
solution is a fundamental component on which OpenStack services can
rely. Being able to just assume that you can encrypt volumes in
Swift, even as a means to further secure a TripleO undercloud, would
be a step in the right direction for security-minded deployments.

Unfortunately, I'm unable to find any follow-up summary on the
mailing list from the aforementioned session, but recollection from
those who were present (I had a schedule conflict at that time) was
that a Castellan-compatible key store would at least be a candidate
for inclusion in our base services list:

https://governance.openstack.org/tc/reference/base-services.html


Yes, last time this was discussed, there was lazy consensus that adding 
"a Castellan-compatible secret store" would be a good addition to the 
base services list if we wanted to avoid proliferation of half-baked 
keystore implementations in various components.


The two blockers were:

1/ castellan had to be made less Barbican-specific, offer at least one 
other secrets store (Vault), and move under Oslo (done)


Back to the subject and tripleo underclouds running Barbican, using 
vault as a backend may be a good option, given that openshift supports 
[0] it as well for storing k8s secrets, and kubespray does [1] for 
vanilla k8s deployments, and that we have openshift/k8s-based control 
plane for openstack on the integration roadmap. So we'll highly likely 
end up running Barbican/Vault on undercloud anyway.


[0] https://blog.openshift.com/managing-secrets-openshift-vault-integration/
[1] 
https://github.com/kubernetes-incubator/kubespray/blob/master/docs/vault.md




2/ some projects (was it Designate ? Octavia ?) were relying on advanced 
functions of Barbican not generally found in other secrets store, like 
certificate generation, and so would prefer to depend on Barbican 
itself, which confuses the messaging around the base service addition a 
bit ("any Castellan-supported secret store as long as it's Barbican")





--
Best regards,
Bogdan Dobrelya,
Irc #bogdando



Re: [openstack-dev] [tripleo] [barbican] [tc] key store in base services

2018-05-17 Thread Thierry Carrez

Jeremy Stanley wrote:

[...]
As a community, we're likely to continue to make imbalanced
trade-offs against relevant security features if we don't move
forward and declare that some sort of standardized key storage
solution is a fundamental component on which OpenStack services can
rely. Being able to just assume that you can encrypt volumes in
Swift, even as a means to further secure a TripleO undercloud, would
be a step in the right direction for security-minded deployments.

Unfortunately, I'm unable to find any follow-up summary on the
mailing list from the aforementioned session, but recollection from
those who were present (I had a schedule conflict at that time) was
that a Castellan-compatible key store would at least be a candidate
for inclusion in our base services list:

https://governance.openstack.org/tc/reference/base-services.html


Yes, last time this was discussed, there was lazy consensus that adding 
"a Castellan-compatible secret store" would be a good addition to the 
base services list if we wanted to avoid proliferation of half-baked 
keystore implementations in various components.


The two blockers were:

1/ castellan had to be made less Barbican-specific, offer at least one 
other secrets store (Vault), and move under Oslo (done)


2/ some projects (was it Designate ? Octavia ?) were relying on advanced 
functions of Barbican not generally found in other secrets store, like 
certificate generation, and so would prefer to depend on Barbican 
itself, which confuses the messaging around the base service addition a 
bit ("any Castellan-supported secret store as long as it's Barbican")


--
Thierry Carrez (ttx)



[openstack-dev] [puppet] [magnum] Magnum tempest fails with 400 bad request

2018-05-17 Thread Tobias Urdin
Hello,

I was interested in getting Magnum working in gate by getting @dms patch
fixed and merged [1].

The installation goes fine on Ubuntu and CentOS; however, the tempest
testing for Magnum fails on CentOS (it is not available on Ubuntu).


It seems to be related to authentication against keystone but I don't
understand why, please see logs [2] [3]


[1] https://review.openstack.org/#/c/367012/

[2]
http://logs.openstack.org/12/367012/28/check/puppet-openstack-integration-4-scenario003-tempest-centos-7/3f5252b/logs/magnum/magnum-api.txt.gz#_2018-05-16_15_10_36_010

[3]
http://logs.openstack.org/12/367012/28/check/puppet-openstack-integration-4-scenario003-tempest-centos-7/3f5252b/




[openstack-dev] [python3] flake8 and pycodestyle W60x warnings

2018-05-17 Thread IWAMOTO Toshihiro
pycodestyle-2.4.0 added new warnings, W605 and W606, which need to be
addressed, or future versions of Python 3 will refuse to run the
offending code.

https://github.com/PyCQA/pycodestyle/pull/676

(OpenStack CI's flake8 is from the pre-pycodestyle age, and the flake8
version isn't managed by g-r, but that's another story. No flake8
release supports pycodestyle 2.4.0 yet. ;)

nova seems to have ~200 of those warnings, while other projects don't
have many, FWIW.
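For anyone hunting these down: W605 flags invalid escape sequences in
ordinary string literals, which newer Python 3 releases already report
as a DeprecationWarning and a future release will turn into a hard
error. A minimal sketch of the warning and the usual fix (the pattern
here is just an illustration):

```python
import re

# W605: "\d" in a plain string literal is an invalid escape sequence.
# Writing the pattern as a raw string silences the warning and is
# future-proof against Python making this a SyntaxError.
pattern = r"\d+"  # instead of "\d+"

assert re.findall(pattern, "nova has ~200 warnings") == ["200"]
```

Doubling the backslash (`"\\d+"`) works too, but raw strings are the
idiomatic fix for regex patterns.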

--
IWAMOTO Toshihiro



[openstack-dev] [neutron] Bug deputy

2018-05-17 Thread Gary Kotton
Hi,
An urgent matter has come up this week. If possible, could someone
please replace me as bug deputy?
Sorry
Gary


Re: [openstack-dev] [cyborg] [nova] Cyborg quotas

2018-05-17 Thread Alex Xu
2018-05-17 9:38 GMT+08:00 Alex Xu :

>
>
> 2018-05-17 1:24 GMT+08:00 Jay Pipes :
>
>> On 05/16/2018 01:01 PM, Nadathur, Sundar wrote:
>>
>>> Hi,
>>> The Cyborg quota spec [1] proposes to implement a quota (maximum
>>> usage) for accelerators on a per-project basis, to prevent one project
>>> (tenant) from over-using some resources and starving other tenants. There
>>> are separate resource classes for different accelerator types (GPUs, FPGAs,
>>> etc.), and so we can do quotas per RC.
>>>
>>> The current proposal [2] is to track the usage in Cyborg agent/driver. I
>>> am not sure that scheme will work, as I have indicated in the comments on
>>> [1]. Here is another possible way.
>>>
>>>   * The operator configures the oslo.limit in keystone per-project
>>> per-resource-class (GPU, FPGA, ...).
>>>   o Until this gets into Keystone, Cyborg may define its own quota
>>> table, as defined in [1].
>>>   * Cyborg implements a table to track per-project usage, as defined in
>>> [1].
>>>
>>
>> Placement already stores usage information for all allocations of
>> resources. There is already even a /usages API endpoint that you can
>> specify a project and/or user:
>>
>> https://developer.openstack.org/api-ref/placement/#list-usages
>>
>> I see no reason not to use it.
>>
>> There is already actually a spec to use placement for quota usage checks
>> in Nova here:
>>
>> https://review.openstack.org/#/c/509042/
>
>
> FYI, I'm working on a spec that appends to that spec. It's about counting
> quota for resource classes (GPU, custom RC, etc.) other than Nova's
> built-in resources (cores, RAM). It should be able to count the resource
> classes used by Cyborg. But yes, we should probably answer Matt's
> question first: whether Nova should count the quota instead of Cyborg.
>

Here is the link: https://review.openstack.org/#/c/569011/
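To make the placement-based approach concrete: GET /usages (filtered by
project) returns per-resource-class totals that a quota check can
compare against operator-configured limits. A minimal sketch — the
resource classes, limits, and function name here are hypothetical, not
the actual Cyborg or Nova implementation:

```python
def exceeds_quota(usages, limits, resource_class, requested):
    """Would granting `requested` units push the project over its limit?"""
    used = usages.get(resource_class, 0)
    limit = limits.get(resource_class)
    if limit is None:          # no limit configured -> treat as unlimited
        return False
    return used + requested > limit

# Shapes modeled on placement's /usages response: {resource_class: amount}.
usages = {"CUSTOM_FPGA": 3, "VGPU": 1}   # current per-project usage
limits = {"CUSTOM_FPGA": 4, "VGPU": 2}   # per-project limits (e.g. oslo.limit)

assert exceeds_quota(usages, limits, "CUSTOM_FPGA", 2)   # 3 + 2 > 4
assert not exceeds_quota(usages, limits, "VGPU", 1)      # 1 + 1 <= 2
```

The point of the thread is precisely *where* this check runs (Nova
counting for Cyborg, or Cyborg itself), not the arithmetic, which is
this simple either way.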


>
>
>>
>>
>> Probably best to have a look at that and see if it will end up meeting
>> your needs.
>>
>>   * Cyborg provides a filter for the Nova scheduler, which checks
>>> whether the project making the request has exceeded its own quota.
>>>
>>
>> Quota checks happen before Nova's scheduler gets involved, so having a
>> scheduler filter handle quota usage checking is pretty much a non-starter.
>>
>> I'll have a look at the patches you've proposed and comment there.
>>
>> Best,
>> -jay
>>
>
>