Re: [openstack-dev] [PTG] [Infra] [all] zuulv3 Job Template vs irrelevant files/branch var

2018-03-07 Thread Andreas Jaeger
On 2018-03-08 02:44, Ghanshyam Mann wrote:
> Hi All,
> 
> Before the PTG, we were discussing the Job Template and irrelevant-files
> issues on multiple mailing threads [1].
> 
> Neither works as expected, which leads to jobs being run on changes that
> touch only irrelevant files, and on excluded branches.
> 
> At the Dublin PTG, during the infra help hours on Tuesday, we talked
> about this topic to find the best approach.
> 
> First of all, thanks to Jim for explaining the zuulv3 workflow for
> selecting and combining the matched jobs: how jobs are matched, and how
> variables like branch and irrelevant-files are handled across the job
> definition, the job template, and the project's pipeline list.
> 
> The current issue (explained on the ML [1]) is with the integrated-gate
> job template [2], where integrated jobs like tempest-full are run. Other
> job templates, like 'system-required' and 'openstack-python-jobs', are
> affected as well.
> 
> After discussion, it turns out these issues are more complicated to
> solve than expected, and it might take time for Jim and the infra team
> to come up with a better way to handle job templates and the
> irrelevant-files/branch variables.
> 
> We talked about a few possibilities. One is to let the project's
> pipeline list supersede the variables defined by the job template: for
> example, if irrelevant-files is defined by both the job template and the
> project's pipelines, ignore/skip the job template's value of that
> variable (or of all variables). But this is just an idea, and it is not
> yet clear how feasible and good it would be.
> 
> But until the best approach/solution is ready, we need some workaround,
> as the current issue causes many jobs to run on unnecessary patches and
> consumes a lot of infra resources.
> 
> We discussed a few workarounds, listed below, and we can go with
> whichever one the majority of people or the infra team prefer:
> 1. Do not use the integrated-gate template and let each project list the
> jobs in its own pipelines.
> 2. Define all the irrelevant files for each project in the job template.
> 3. Leave it as it is.
> 
> ..1 
> http://lists.openstack.org/pipermail/openstack-dev/2018-February/127349.html
>  
> http://lists.openstack.org/pipermail/openstack-dev/2018-February/127347.html
> 
> ..2 
> https://github.com/openstack-infra/openstack-zuul-jobs/blob/49cd964470c081005f671d6829a14dace2c9ccc2/zuul.d/zuul-legacy-project-templates.yaml#L82

I'm fine with option 2 for those projects that want to do some changes
for now.

Breaking up the integrated-gate will cause more maintenance problems.

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] Release countdown for week R-24 and R-23, March 12-23

2018-03-07 Thread Sean McGinnis
Welcome back to our regular release countdown email. Now that the PTG is over
(hopefully no one is still waiting for their flight in DUB), we will send
regular weekly countdown emails.

Development Focus
-----------------

Teams should be focusing on taking back discussions from the PTG and planning
what can be done for Rocky.

General Information
-------------------

All teams should review their release liaison information and make sure it is
up to date [1].

[1] https://wiki.openstack.org/wiki/CrossProjectLiaisons

While reviewing liaisons, this would also be a good time to make sure your
declared release model matches the project's plans for Rocky (e.g. [2]). This
should be done prior to the first milestone and can be done by proposing a
change to the Rocky deliverable file for the project(s) affected [3].

[2] 
https://github.com/openstack/releases/blob/e0a63f7e896abdf4d66fb3ebeaacf4e17f688c38/deliverables/queens/glance.yaml#L5
[3] http://git.openstack.org/cgit/openstack/releases/tree/deliverables/rocky

Teams should start brainstorming Forum topics. For more information on the
Forum selection process, see the information posted to the mailing list [4].

[4] http://lists.openstack.org/pipermail/openstack-dev/2018-March/127944.html

Upcoming Deadlines & Dates
--------------------------

Rocky-1 milestone: April 19 (R-19 week)
Forum at OpenStack Summit in Vancouver: May 21-24

-- 
Sean McGinnis (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] new unified limit library

2018-03-07 Thread Joshua Harlow

So the following was a prior effort:

https://github.com/openstack/delimiter

Maybe just continue down the path of that and/or take that whole repo 
over and iterate (or adjust the prior code, or ...)? Or if not, that's 
ok too; y'all get to decide.


https://www.slideshare.net/vilobh/delimiter-openstack-cross-project-quota-library-proposal

Lance Bragstad wrote:

Hi all,

Per the identity-integration track at the PTG [0], I proposed a new oslo
library for services to use for hierarchical quota enforcement [1]. Let
me know if you have any questions or concerns about the library. If the
oslo team would like, I can add an agenda item for next week's oslo
meeting to discuss.

Thanks,

Lance

[0] https://etherpad.openstack.org/p/unified-limits-rocky-ptg
[1] https://review.openstack.org/#/c/550491/



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cyborg][glance][nova]cyborg FPGA management flow discussion.

2018-03-07 Thread Zhipeng Huang
Thanks Shaohe,

Let's schedule a video conf session next week.

On Thu, Mar 8, 2018 at 11:41 AM, Feng, Shaohe  wrote:

> Hi All:
>
> The POC is here:
> *https://github.com/shaohef/cyborg* 
>
> BR
> Shaohe Feng
>
> _
> *From:* Feng, Shaohe
> *Sent:* 2018-02-12 15:06
> *To:* openstack-dev@lists.openstack.org; openstack-operators@lists.
> openstack.org
> *Cc:* Du, Dolpher ; Zhipeng Huang <
> zhipengh...@gmail.com>; Ding, Jian-feng ; Sun,
> Yih Leong ; Nadathur, Sundar <
> sundar.nadat...@intel.com>; Dutch ; Rushil Chugh <
> rushil.ch...@gmail.com>; Nguyen Hung Phuong ;
> Justin Kilpatrick ; Ranganathan, Shobha <
> shobha.ranganat...@intel.com>; zhuli ;
> bao.yum...@zte.com.cn; xiaodong...@tencent.com; kong.w...@zte.com.cn;
> li.xia...@zte.com.cn; Feng, Shaohe 
> *Subject:* [openstack-dev][cyborg][glance][nova]cyborg FPGA management
> flow discussion.
>
>
> Now I am working on an FPGA management POC with Dolpher.
> We have finished some code, and have discussion with Li Liu and some
> cyborg developer guys.
>
> Here are some discussions:
>
> image management
> 1. Users should upload the FPGA image to glance and set the tags as
> follows. There are two suggestions for uploading an FPGA image.
> A. use the raw glance API, like:
>    $ openstack image create --file mypath/FPGA.img fpga.img
>    $ openstack image set --tag FPGA --property vendor=intel --property
> type=crypto 58b813db-1fb7-43ec-b85c-3b771c685d22
>    The image must have the "FPGA" tag and an accelerator type (such as
> type=crypto).
> B. cyborg supports a new API to upload an image.
>    This API will wrap the glance API, include the above steps, and also
> record the image in its local DB.
>
> 2. The Cyborg agent/conductor gets the FPGA image info from glance.
> There are also two suggestions for getting the FPGA image info.
> A. use the raw glance API.
> Cyborg will fetch the images by FPGA tag and timestamp periodically and
> store them in its local cache.
> It will use the image tags and properties to form the placement traits
> and resource_class names.
> B. store the information when calling cyborg's new upload API.
>
> 3. Image download.
> Call the glance image download API to a local file, and make a
> corresponding md5 file for the checksum.
>
> GAP in image management:
> the related glance image client is missing in cyborg.
>
> resource report management for the scheduler.
> 1. The Cyborg agent/conductor needs to synthesize all useful information
> from the FPGA driver and the image information.
> The traits will be like:
> CUSTOM_FPGA, CUSTOM_ACCELERATOR_CRYPTO,
> The resource_class will be like:
> CUSTOM_FPGA_INTEL_PF, CUSTOM_FPGA_INTEL_VF
> {"inventories": {
>     "CUSTOM_FPGA_INTEL_PF": {
>         "allocation_ratio": 1.0,
>         "max_unit": 4,
>         "min_unit": 1,
>         "reserved": 0,
>         "step_size": 1,
>         "total": 4
>     }
> }}
>
>
> Accelerator claim and release:
> 1. Cyborg will support the related APIs for accelerator claim and
> release. They can take the following parameters:
>   nodename: the host the accelerator is located on; required.
>   type: the accelerator type; cyborg can get the image uuid from it;
> optional.
>   image uuid: the uuid of the FPGA bitstream image; optional.
>   traits: the traits info that cyborg reports to placement.
>   resource_class: the resource_class name that is reported to placement.
> The APIs return the address of the accelerator; at present, it is the
> PCIE_ADDRESS.
> 2. When claiming an accelerator with type and image set to None, cyborg
> will not program the FPGA for the user.
>
> FPGA accelerator program API:
> We still need to support an independent program API for some specific
> scenarios.
> For example, as an FPGA developer, I change my Verilog logic frequently
> and need to do verification on my guest.
> I upload my new bitstream image to glance, and call cyborg to program my
> FPGA accelerator.
>
> End user operations flow:
> 1. upload a bitstream image to glance if necessary and set its tags (at
> least FPGA is required) and properties,
>    such as: --tag FPGA --property vendor=intel --property type=crypto
> 2. list the FPGA-related traits and resource_class names via the
> placement API,
>    such as getting the "CUSTOM_FPGA_INTEL_PF" resource_class name and
> the "CUSTOM_HW_INTEL,CUSTOM_HW_CRYPTO" traits.
> 3. create a new flavor with the expected traits and resource_class as
> extra specs,
>    such as:
>    "resources<n>:CUSTOM_FPGA_INTEL_PF=2", where <n> is an integer or an
> empty string.
>    "required:CUSTOM_HW_INTEL,CUSTOM_HW_CRYPTO".
> 4. create the VM with this flavor.
>
>
> BR
> Shaohe Feng
>
>
>



-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd.
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, 

Re: [openstack-dev] [cyborg][glance][nova]cyborg FPGA management flow discussion.

2018-03-07 Thread Feng, Shaohe
Hi All:

The POC is here:
https://github.com/shaohef/cyborg

BR
Shaohe Feng

_
From: Feng, Shaohe
Sent: 2018-02-12 15:06
To: openstack-dev@lists.openstack.org; openstack-operat...@lists.openstack.org
Cc: Du, Dolpher ; Zhipeng Huang ; 
Ding, Jian-feng ; Sun, Yih Leong 
; Nadathur, Sundar ; Dutch 
; Rushil Chugh ; Nguyen Hung 
Phuong ; Justin Kilpatrick ; 
Ranganathan, Shobha ; zhuli ; 
bao.yum...@zte.com.cn; xiaodong...@tencent.com; kong.w...@zte.com.cn; 
li.xia...@zte.com.cn; Feng, Shaohe 
Subject: [openstack-dev][cyborg][glance][nova]cyborg FPGA management flow 
discussion.


Now I am working on an FPGA management POC with Dolpher.
We have finished some code, and have discussion with Li Liu and some cyborg 
developer guys.

Here are some discussions:

image management
1. Users should upload the FPGA image to glance and set the tags as follows.
There are two suggestions for uploading an FPGA image.
A. use the raw glance API, like:
   $ openstack image create --file mypath/FPGA.img fpga.img
   $ openstack image set --tag FPGA --property vendor=intel --property 
type=crypto 58b813db-1fb7-43ec-b85c-3b771c685d22
   The image must have the "FPGA" tag and an accelerator type (such as 
type=crypto).
B. cyborg supports a new API to upload an image.
   This API will wrap the glance API, include the above steps, and also 
record the image in its local DB.

2. The Cyborg agent/conductor gets the FPGA image info from glance.
There are also two suggestions for getting the FPGA image info.
A. use the raw glance API.
Cyborg will fetch the images by FPGA tag and timestamp periodically and 
store them in its local cache.
It will use the image tags and properties to form the placement traits 
and resource_class names.
B. store the information when calling cyborg's new upload API.

3. Image download.
Call the glance image download API to a local file, and make a 
corresponding md5 file for the checksum.

GAP in image management:
the related glance image client is missing in cyborg.

resource report management for the scheduler.
1. The Cyborg agent/conductor needs to synthesize all useful information 
from the FPGA driver and the image information.
The traits will be like:
CUSTOM_FPGA, CUSTOM_ACCELERATOR_CRYPTO,
The resource_class will be like:
CUSTOM_FPGA_INTEL_PF, CUSTOM_FPGA_INTEL_VF
{"inventories": {
    "CUSTOM_FPGA_INTEL_PF": {
        "allocation_ratio": 1.0,
        "max_unit": 4,
        "min_unit": 1,
        "reserved": 0,
        "step_size": 1,
        "total": 4
    }
}}


Accelerator claim and release:
1. Cyborg will support the related APIs for accelerator claim and release.
They can take the following parameters:
  nodename: the host the accelerator is located on; required.
  type: the accelerator type; cyborg can get the image uuid from it; optional.
  image uuid: the uuid of the FPGA bitstream image; optional.
  traits: the traits info that cyborg reports to placement.
  resource_class: the resource_class name that is reported to placement.
The APIs return the address of the accelerator; at present, it is the 
PCIE_ADDRESS.
2. When claiming an accelerator with type and image set to None, cyborg will 
not program the FPGA for the user.
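
For illustration, a claim request/response could look roughly like this (a 
sketch only; the endpoint path and field names are not settled, and the 
values are made up):

    POST /v1/accelerators/claim        (hypothetical endpoint)
    {
        "nodename": "compute-node-1",
        "type": "crypto",
        "image_uuid": "58b813db-1fb7-43ec-b85c-3b771c685d22",
        "traits": ["CUSTOM_FPGA", "CUSTOM_ACCELERATOR_CRYPTO"],
        "resource_class": "CUSTOM_FPGA_INTEL_PF"
    }

    Response:
    {"accelerator_address": "0000:81:00.0"}    (the PCIE_ADDRESS)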

FPGA accelerator program API:
We still need to support an independent program API for some specific scenarios.
For example, as an FPGA developer, I change my Verilog logic frequently and 
need to do verification on my guest.
I upload my new bitstream image to glance, and call cyborg to program my FPGA 
accelerator.

End user operations flow:
1. upload a bitstream image to glance if necessary and set its tags (at least 
FPGA is required) and properties,
   such as: --tag FPGA --property vendor=intel --property type=crypto
2. list the FPGA-related traits and resource_class names via the placement API,
   such as getting the "CUSTOM_FPGA_INTEL_PF" resource_class name and the 
"CUSTOM_HW_INTEL,CUSTOM_HW_CRYPTO" traits.
3. create a new flavor with the expected traits and resource_class as extra 
specs (see the CLI sketch below),
   such as:
   "resources<n>:CUSTOM_FPGA_INTEL_PF=2", where <n> is an integer or an empty 
string.
   "required:CUSTOM_HW_INTEL,CUSTOM_HW_CRYPTO".
4. create the VM with this flavor.
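
For example, steps 3 and 4 could look like this on the CLI (the flavor and 
image names are made up; the required-traits extra spec is omitted since its 
exact key syntax is still only proposed above):

    $ openstack flavor create --vcpus 4 --ram 4096 --disk 40 fpga-crypto
    $ openstack flavor set fpga-crypto \
        --property "resources1:CUSTOM_FPGA_INTEL_PF=2"
    $ openstack server create --flavor fpga-crypto --image my-guest-image my-vm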


BR
Shaohe Feng


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] new unified limit library

2018-03-07 Thread ChangBo Guo
Yeah, we need a unified limit library. From the oslo side, we need a spec
according to the new-library process [0]. The spec will be useful to track
the background, and we should update the oslo wiki [1].


[0]
http://specs.openstack.org/openstack/oslo-specs/specs/policy/new-libraries.html
[1] https://wiki.openstack.org/wiki/Oslo

2018-03-07 22:58 GMT+08:00 Lance Bragstad :

> Hi all,
>
> Per the identity-integration track at the PTG [0], I proposed a new oslo
> library for services to use for hierarchical quota enforcement [1]. Let
> me know if you have any questions or concerns about the library. If the
> oslo team would like, I can add an agenda item for next week's oslo
> meeting to discuss.
>
> Thanks,
>
> Lance
>
> [0] https://etherpad.openstack.org/p/unified-limits-rocky-ptg
> [1] https://review.openstack.org/#/c/550491/
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
ChangBo Guo(gcb)
Community Director @EasyStack
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api-wg][api][neutron] How to handle invalid query parameters

2018-03-07 Thread Ghanshyam Mann
On Thu, Mar 8, 2018 at 6:12 AM, Chris Dent  wrote:
> On Wed, 7 Mar 2018, Hongbin Lu wrote:
>
>> As a brief recap, we were discussing how Neutron API server should behave
>> if invalid query parameters were inputted. Per my understanding, the general
>> consensus is to make Neutron API server behave consistently with other
>> OpenStack projects. The question for API-WG is if there is any guideline to
>> clarify how OpenStack projects should handle invalid query parameters. Query
>> parameters are various across different projects but it seems most projects
>> support these four categories of query parameters: sorting, pagination,
>> filtering, and fields selection. I saw API-WG provided a guideline to define
>> how to handle valid parameters of these categories [2], but it doesn’t seem
>> to define how to handle invalid parameters.
>>
>> I wonder if API-WG could clarify it. For example, if users provide an
>> invalid filter on listing the resources, should the API server ignore the
>> invalid filter and return a successful response? Or it should return an
>> error response? Below is a list of specific scenarios and examples to
>> consider:
>
>
> It's hard to find, but there's existing guidance that touches on
> this. From
> http://specs.openstack.org/openstack/api-wg/guidelines/http.html#failure-code-clarifications
> :
>
> [I]f the API supports query parameters and a request contains an
> unknown or unsupported parameter, the server should return a 400
> Bad Request response. Invalid values in the request URL should
> never be silently ignored, as the response may not match the
> client’s expectation. For example, consider the case where an
> API allows filtering on name by specifying ‘?name=foo’ in the
> query string, and in one such request there is a typo, such as
> ‘?nmae=foo’. If this error were silently ignored, the user would
> get back all resources instead of just the ones named ‘foo’,
> which would not be correct.  The error message that is returned
> should clearly indicate the problem so that the user could
> correct it and re-submit.
>
> This same logic can be applied to invalid fields used in parameters
> which can only accept a limited number of inputs (such as sort_key)
> so in the examples you give a 400 would be the way to ensure that
> the user agent is actually made aware that their request had issues.

+1. Nova also implemented query parameter validation using JSON
Schema [1], returning 400 for a few sorting params (mainly those on
joined tables) while ignoring the others. We had to leave the
unsupported parameters ignored for now due to backward compatibility.
But for newly introduced APIs, we follow the above guidelines and
return 400 on any additional or wrong parameter. Example: [2].

>
> I hope this helps. Please let the api-sig know if you think we
> should adjust the guidelines to make this more explicit somehow.
>

..1 
https://github.com/openstack/nova/blob/c7b54a80ac25f6a01d0a150c546532f5ae2592ce/nova/api/openstack/compute/schemas/servers.py#L334
..2 
https://github.com/openstack/nova/blob/c7b54a80ac25f6a01d0a150c546532f5ae2592ce/nova/api/openstack/compute/schemas/migrations.py#L43
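
For reference, a minimal standalone sketch of this JSON-Schema approach
(not nova's actual code; raise_http_400 is a hypothetical helper that
turns the validation error into a 400 response):

    import jsonschema

    # Only known query parameters are accepted; anything else fails
    # validation instead of being silently ignored.
    QUERY_PARAMS_SCHEMA = {
        'type': 'object',
        'properties': {
            'sort_key': {'type': 'string', 'enum': ['name', 'created_at']},
            'sort_dir': {'type': 'string', 'enum': ['asc', 'desc']},
            'limit': {'type': 'string', 'pattern': '^[0-9]+$'},
        },
        'additionalProperties': False,
    }

    def validate_query_params(params):
        try:
            jsonschema.validate(params, QUERY_PARAMS_SCHEMA)
        except jsonschema.exceptions.ValidationError as e:
            # e.g. '?nmae=foo' ends up here and becomes a 400 Bad Request
            # instead of silently returning the unfiltered listing.
            raise_http_400(str(e))  # hypothetical framework helper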

> --
> Chris Dent  (⊙_⊙') https://anticdent.org/
> freenode: cdent tw: @anticdent
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Meeting Thursday Mar 8th at 8:00 UTC

2018-03-07 Thread Ghanshyam Mann
Hello everyone,

Hope everyone is back home after the Dublin PTG.

This is a reminder for the QA team meeting on Thursday, Mar 8th at 8:00 UTC
in the #openstack-meeting channel.

The agenda for the meeting can be found here:
https://wiki.openstack.org/wiki/Meetings/QATeamMeeting#Agenda_for_Mar_8th_2018_.280800_UTC.29

At the PTG we discussed new meeting/office hour times; we will settle on
them in this meeting and then publish them to the ML, wiki, etc.

Anyone is welcome to add an item to the agenda.

-gmann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [PTG] [Infra] [all] zuulv3 Job Template vs irrelevant files/branch var

2018-03-07 Thread Ghanshyam Mann
Hi All,

Before the PTG, we were discussing the Job Template and irrelevant-files
issues on multiple mailing threads [1].

Neither works as expected, which leads to jobs being run on changes that
touch only irrelevant files, and on excluded branches.

At the Dublin PTG, during the infra help hours on Tuesday, we talked
about this topic to find the best approach.

First of all, thanks to Jim for explaining the zuulv3 workflow for
selecting and combining the matched jobs: how jobs are matched, and how
variables like branch and irrelevant-files are handled across the job
definition, the job template, and the project's pipeline list.

The current issue (explained on the ML [1]) is with the integrated-gate
job template [2], where integrated jobs like tempest-full are run. Other
job templates, like 'system-required' and 'openstack-python-jobs', are
affected as well.

After discussion, it turns out these issues are more complicated to
solve than expected, and it might take time for Jim and the infra team
to come up with a better way to handle job templates and the
irrelevant-files/branch variables.

We talked about a few possibilities. One is to let the project's
pipeline list supersede the variables defined by the job template: for
example, if irrelevant-files is defined by both the job template and the
project's pipelines, ignore/skip the job template's value of that
variable (or of all variables). But this is just an idea, and it is not
yet clear how feasible and good it would be.

But until the best approach/solution is ready, we need some workaround,
as the current issue causes many jobs to run on unnecessary patches and
consumes a lot of infra resources.

We discussed a few workarounds, listed below, and we can go with
whichever one the majority of people or the infra team prefer:
1. Do not use the integrated-gate template and let each project list the
jobs in its own pipelines.
2. Define all the irrelevant files for each project in the job template
(see the sketch after the links below).
3. Leave it as it is.

..1 http://lists.openstack.org/pipermail/openstack-dev/2018-February/127349.html
 
http://lists.openstack.org/pipermail/openstack-dev/2018-February/127347.html

..2 
https://github.com/openstack-infra/openstack-zuul-jobs/blob/49cd964470c081005f671d6829a14dace2c9ccc2/zuul.d/zuul-legacy-project-templates.yaml#L82
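
For illustration, a minimal sketch of workaround 2 in Zuul v3 syntax,
following the pattern of the template linked in ..2 (the file patterns
below are only examples, not an agreed list):

    - project-template:
        name: integrated-gate
        check:
          jobs:
            - tempest-full:
                irrelevant-files:
                  - ^.*\.rst$
                  - ^doc/.*$
                  - ^releasenotes/.*$
        gate:
          jobs:
            - tempest-full:
                irrelevant-files:
                  - ^.*\.rst$
                  - ^doc/.*$
                  - ^releasenotes/.*$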

-gmann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tacker] tacker project team meeting is changed to GMT 0800 on Tuesdays

2018-03-07 Thread 龚永生
FYI
https://review.openstack.org/#/c/550326/







yong sheng gong
99CLOUD Co. Ltd.
Email:gong.yongsh...@99cloud.net
Addr : Room 806, Tower B, Jiahua Building, No. 9 Shangdi 3rd Street, Haidian 
District, Beijing, China
Mobile:+86-18618199879
http://99cloud.net
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] batch processing with unified limits

2018-03-07 Thread Lance Bragstad
The keystone team is parsing the unified limits discussions from last
week. One of the things we went over as a group was the usability of the
current API [0].

Currently, the create and update APIs support batch processing. So
specifying a list of limits is valid for both. This was a part of the
original proposal as a way to make it easier for operators to set all
their registered limits with a single API call. The API also has unique
IDs for each limit reference. The consensus was that this felt a bit
weird for a resource whose unique set of attributes already makes up a
constraint (service, resource type, and optionally a region).
We're discussing ways to make this API more consistent with how the rest
of keystone works while maintaining usability for operators. Does anyone
see issues with supporting batch creation for limits and individual
updates? In other words, removing the ability to update a set of limits
in a single API call, but keeping the ability to create them in batches?
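
For illustration, the direction being discussed would look roughly like
this against the experimental API in [0] (request bodies abbreviated,
field values made up):

    # Batch creation stays supported:
    POST /v3/limits
    {"limits": [
        {"service_id": "...", "resource_name": "volumes", "resource_limit": 10},
        {"service_id": "...", "resource_name": "snapshots", "resource_limit": 5}
    ]}

    # Updates become per-limit:
    PATCH /v3/limits/{limit_id}
    {"limit": {"resource_limit": 20}}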

We were talking about this in the keystone channel [1], but we're opening
this up on the ML to get more feedback from other people who were present
in those discussions last week.

[0]
https://developer.openstack.org/api-ref/identity/v3/index.html#unified-limits
[1]
http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-03-07.log.html#t2018-03-07T22:49:46




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-sigs] [keystone] [oslo] new unified limit library

2018-03-07 Thread Chris Friesen

On 03/07/2018 10:44 AM, Tim Bell wrote:

I think nested quotas would give the same thing, i.e. you have a parent project
for the group and child projects for the users. This would not need user/group
quotas but continue with the ‘project owns resources’ approach.


Agreed, I think that if we support nested quotas with a suitable depth of 
nesting it could be used to handle the existing nova user/project quotas.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-10

2018-03-07 Thread Matt Riedemann

On 3/7/2018 2:24 PM, Lance Bragstad wrote:

I tried bringing this up during the PTG feedback session last Thursday


Unless you wanted to talk about snow, there was no feedback to be had at 
the feedback session.


Being able to actually give feedback on the PTG during the PTG feedback 
session is some unsolicited feedback that I'm going to give now.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api-wg][api][neutron] How to handle invalid query parameters

2018-03-07 Thread Hongbin Lu
Hi all,

Please disregard the email below since I used the wrong template. Sorry about 
that. The email with the same content was re-sent in a new thread 
http://lists.openstack.org/pipermail/openstack-dev/2018-March/128022.html .

Best regards,
Hongbin

From: Hongbin Lu
Sent: March-07-18 4:02 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [api-wg][api][neutron] How to handle invalid query parameters

Hi all,

This is a follow-up to the discussion at the Dublin PTG about how the Neutron 
API server should handle invalid query parameters [1]. Per that feedback, I am 
sending this ML post to seek advice from the API-WG in this regard.

As a brief recap, we were discussing how Neutron API server should behave if 
invalid query parameters were inputted. Per my understanding, the general 
consensus is to make Neutron API server behave consistently with other 
OpenStack projects. The question for API-WG is if there is any guideline to 
clarify how OpenStack projects should handle invalid query parameters. Query 
parameters are various across different projects but it seems most projects 
support these four categories of query parameters: sorting, pagination, 
filtering, and fields selection. I saw API-WG provided a guideline to define 
how to handle valid parameters of these categories [2], but it doesn’t seem to 
define how to handle invalid parameters.

I wonder if API-WG could clarify it. For example, if users provide an invalid 
filter on listing the resources, should the API server ignore the invalid 
filter and return a successful response? Or it should return an error response? 
Below is a list of specific scenarios and examples to consider:

1. Invalid sorting. For example:

  GET "/v2.0/networks?sort_dir=desc&sort_key=xxx"
  GET "/v2.0/networks?sort_dir=xxx&sort_key=name"

2. Invalid pagination. For example:

  GET "/v2.0/networks?limit=xxx&marker=xxx"
  GET "/v2.0/networks?limit=1&marker=xxx"

3. Invalid filter. For example:

GET "/v2.0/networks?xxx=xxx"
GET "/v2.0/networks?xxx="

4. Invalid field. For example:

  GET "/v2.0/networks?fields="

Best regards,
Hongbin

[1] https://bugs.launchpad.net/neutron/+bug/1749820
[2] 
https://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html



Huawei Technologies Co., Ltd.

This e-mail and its attachments contain confidential information from HUAWEI, 
which
is intended only for the person or entity whose address is listed above. Any 
use of the
information contained herein in any way (including, but not limited to, total 
or partial
disclosure, reproduction, or dissemination) by persons other than the intended
recipient(s) is prohibited. If you receive this e-mail in error, please notify 
the sender by
phone or email immediately and delete it!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [api-wg][api][neutron] How to handle invalid query parameters

2018-03-07 Thread Chris Dent

On Wed, 7 Mar 2018, Hongbin Lu wrote:


As a brief recap, we were discussing how Neutron API server should behave if 
invalid query parameters were inputted. Per my understanding, the general 
consensus is to make Neutron API server behave consistently with other 
OpenStack projects. The question for API-WG is if there is any guideline to 
clarify how OpenStack projects should handle invalid query parameters. Query 
parameters are various across different projects but it seems most projects 
support these four categories of query parameters: sorting, pagination, 
filtering, and fields selection. I saw API-WG provided a guideline to define 
how to handle valid parameters of these categories [2], but it doesn’t seem to 
define how to handle invalid parameters.

I wonder if API-WG could clarify it. For example, if users provide an invalid 
filter on listing the resources, should the API server ignore the invalid 
filter and return a successful response? Or it should return an error response? 
Below is a list of specific scenarios and examples to consider:


It's hard to find, but there's existing guidance that touches on
this. From
http://specs.openstack.org/openstack/api-wg/guidelines/http.html#failure-code-clarifications
 :

[I]f the API supports query parameters and a request contains an
unknown or unsupported parameter, the server should return a 400
Bad Request response. Invalid values in the request URL should
never be silently ignored, as the response may not match the
client’s expectation. For example, consider the case where an
API allows filtering on name by specifying ‘?name=foo’ in the
query string, and in one such request there is a typo, such as
‘?nmae=foo’. If this error were silently ignored, the user would
get back all resources instead of just the ones named ‘foo’,
which would not be correct.  The error message that is returned
should clearly indicate the problem so that the user could
correct it and re-submit.

This same logic can be applied to invalid fields used in parameters
which can only accept a limited number of inputs (such as sort_key)
so in the examples you give a 400 would be the way to ensure that
the user agent is actually made aware that their request had issues.

I hope this helps. Please let the api-sig know if you think we
should adjust the guidelines to make this more explicit somehow.

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api-wg][api][neutron] How to handle invalid query parameters

2018-03-07 Thread Hongbin Lu
Hi all,

This is a follow-up to the discussion at the Dublin PTG about how the Neutron 
API server should handle invalid query parameters [1]. Per that feedback, I am 
sending this ML post to seek advice from the API-WG in this regard.

As a brief recap, we were discussing how Neutron API server should behave if 
invalid query parameters were inputted. Per my understanding, the general 
consensus is to make Neutron API server behave consistently with other 
OpenStack projects. The question for API-WG is if there is any guideline to 
clarify how OpenStack projects should handle invalid query parameters. Query 
parameters are various across different projects but it seems most projects 
support these four categories of query parameters: sorting, pagination, 
filtering, and fields selection. I saw API-WG provided a guideline to define 
how to handle valid parameters of these categories [2], but it doesn't seem to 
define how to handle invalid parameters.

I wonder if API-WG could clarify it. For example, if users provide an invalid 
filter on listing the resources, should the API server ignore the invalid 
filter and return a successful response? Or it should return an error response? 
Below is a list of specific scenarios and examples to consider:

1. Invalid sorting. For example:

  GET "/v2.0/networks?sort_dir=desc&sort_key=xxx"
  GET "/v2.0/networks?sort_dir=xxx&sort_key=name"

2. Invalid pagination. For example:

  GET "/v2.0/networks?limit=xxx&marker=xxx"
  GET "/v2.0/networks?limit=1&marker=xxx"

3. Invalid filter. For example:

GET "/v2.0/networks?xxx=xxx"
GET "/v2.0/networks?xxx="

4. Invalid field. For example:

  GET "/v2.0/networks?fields="

Best regards,
Hongbin

[1] https://bugs.launchpad.net/neutron/+bug/1749820
[2] 
https://specs.openstack.org/openstack/api-wg/guidelines/pagination_filter_sort.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI][QA][HA][Eris][LCOO] Validating HA on upstream

2018-03-07 Thread Georg Kunz
Hi Adam,

> Raoul Scarazzini  wrote:
> >On 06/03/2018 13:27, Adam Spiers wrote:
> >> Hi Raoul and all,
> >> Sorry for joining this discussion late!
> >[...]
> >> I do not work on TripleO, but I'm part of the wider OpenStack
> >> sub-communities which focus on HA[0] and more recently,
> >> self-healing[1].  With that hat on, I'd like to suggest that maybe
> >> it's possible to collaborate on this in a manner which is agnostic to
> >> the deployment mechanism.  There is an open spec on this:
> >> https://review.openstack.org/#/c/443504/
> >> which was mentioned in the Denver PTG session on destructive testing
> >> which you referenced[2].
> >[...]
> >>    https://www.opnfv.org/community/projects/yardstick
> >[...]
> >> Currently each sub-community and vendor seems to be reinventing HA
> >> testing by itself to some extent, which is easier to accomplish in
> >> the short-term, but obviously less efficient in the long-term.  It
> >> would be awesome if we could break these silos down and join efforts!
> >> :-)
> >
> >Hi Adam,
> >First of all thanks for your detailed answer. Then let me be honest
> >while saying that I didn't know yardstick.
> 
> Neither did I until Sydney, despite being involved with OpenStack HA for
> many years ;-)  I think this shows that either a) there is room for improved
> communication between the OpenStack and OPNFV communities, or b) I
> need to take my head out of the sand more often ;-)
> 
> >I need to start from scratch
> >here to understand what this project is. In any case, the exact meaning
> >of this thread is to involve people and have a more comprehensive look
> >at what's around.
> >The point here is that, as you can see from the tripleo-ha-utils spec
> >[1] I've created, the project is meant for TripleO specifically. On one
> >side this is a significant limitation, but on the other one, due to the
> >pluggable nature of the project, I think that integrations with other
> >software like you are proposing is not impossible.
> 
> Yep.  I totally sympathise with the tension between the need to get
> something working quickly, vs. the need to collaborate with the community
> in the most efficient way.
> 
> >Feel free to add your comments to the review.
> 
> The spec looks great to me; I don't really have anything to add, and I don't
> feel comfortable voting in a project which I know very little about.
> 
> >In the meantime, I'll check yardstick to see which kind of bridge we
> >can build to avoid reinventing the wheel.
> 
> Great, thanks!  I wish I could immediately help with this, but I haven't had 
> the
> chance to learn yardstick myself yet.  We should probably try to recruit
> someone from OPNFV to provide advice.  I've cc'd Georg who IIRC was the
> person who originally told me about yardstick :-)  He is an NFV expert and is
> also very interested in automated testing efforts:
> 
> http://lists.openstack.org/pipermail/openstack-dev/2017-
> November/124942.html
> 
> so he may be able to help with this architectural challenge.

Thank you for bringing this up here. Better collaboration and sharing of 
knowledge, methodologies and tools across the communities is really what I'd 
like to see and facilitate. Hence, I am happy to help.

I have already started to advertise the newly proposed QA SIG in the OPNFV test 
WG and I'll happily do the same for the self-healing SIG and any HA testing 
efforts in general. There is certainly some overlapping interest in these 
testing aspects between the QA SIG and the self-healing SIG and hence 
collaboration between both SIGs is crucial.

One remark regarding tools and frameworks: I consider the true value of a SIG 
to be a place for talking about methodologies and best practices: What do we 
need to test? What are the challenges? How can we approach this across 
communities? The tools and frameworks are important and we should investigate 
which tools are available, how good they are, how much they fit a given 
purpose, but at the end of the day they are tools meant to enable well designed 
testing methodologies.

> Also you should be aware that work has already started on Eris, the extreme
> testing framework proposed in this user story:
> 
> http://specs.openstack.org/openstack/openstack-user-stories/user-
> stories/proposed/openstack_extreme_testing.html
> 
> and in the spec you already saw:
> 
> https://review.openstack.org/#/c/443504/
> 
> You can see ongoing work here:
> 
> https://github.com/LCOO/eris
> https://openstack-
> lcoo.atlassian.net/wiki/spaces/LCOO/pages/13393034/Eris+-
> +Extreme+Testing+Framework+for+OpenStack
> 
> It looks like there is a plan to propose a new SIG for this, although 
> personally I
> would be very happy to see it adopted by the self-healing SIG, since this
> framework is exactly what is needed for testing any self-healing mechanism.
> 
> I'm hoping that Sampath and/or Gautum will chip in here, since I think they're
> currently the main drivers for Eris.
> 
> 

Re: [openstack-dev] [tc] [all] TC Report 18-10

2018-03-07 Thread Graham Hayes


On 07/03/18 20:24, Lance Bragstad wrote:
> 
> 
> On 03/07/2018 06:12 AM, Chris Dent wrote:
>>
>> HTML: https://anticdent.org/tc-report-18-10.html
>>
>> This is a TC Report, but since everything that happened in its window
>> of observation is preparing for the
>> [PTG](https://www.openstack.org/ptg), being at the PTG, trying to get
>> home from the PTG, and recovering from the PTG, perhaps think of this
>> as "What the TC talked about [at] the PTG". As it is impossible to be
>> everywhere at once (especially when the board meeting overlaps with
>> other responsibilities) this will miss a lot of important stuff.  I
>> hope there are other summaries.
>>
>> As you may be aware, it [snowed in
>> Dublin](https://twitter.com/search?q=%23snowpenstack) causing plenty
>> of disruption to the
>> [PTG](https://twitter.com/search?q=%23openstackptg) but everyone
>> (foundation staff, venue staff, hotel staff, attendees, uisce beatha)
>> worked together to make a good week.
>>
>> # Talking about the PTG at the PTG
>>
>> At the [board
>> meeting](http://lists.openstack.org/pipermail/foundation/2018-March/002570.html),
>>
>> the future of the PTG was a big topic. As currently constituted it
>> presents some challenges:
>>
>> * It is difficult for some people to attend because of visa and other
>>   travel related issues.
>> * It is expensive to run and not everyone is convinced of the return
>>   on investment.
>> * Some people don't like it (they either miss the old way of doing the
>>   design summit, or midcycles, or $OTHER).
>> * Plenty of other reasons that I'm probably not aware of.
>>
>> This same topic was reviewed at [yesterday's office
>> hours](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T09:19:32).
>>
>>
>> For now, the next 2018 PTG is going to happen (destination unknown) but
>> plans for 2019 are still being discussed.
>>
>> If you have opinions about the PTG, there will be an opportunity to
>> express them in a forthcoming survey. Beyond that, however, it is
>> important [that management at contributing
>> companies](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T22:29:24)
>>
>> hear from more people (notably their employees) than the foundation
>> about the value of the PTG.
>>
>> My own position is that of the three different styles of in-person
>> events for technical contributors to OpenStack that I've experienced
>> (design summit, mid-cycles, PTG), the PTG is the best yet. It minimizes
>> distractions from other obligations (customer meetings, presentations,
>> marketing requirements) while maximizing cross-project interaction.
>>
>> One idea, discussed
>> [yesterday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T22:02:24)
>>
>> and [earlier
>> today](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-07.log.html#t2018-03-07T05:07:20)
>>
>> was to have the PTG be open to technical participants of any sort, not
>> just so-called "OpenStack developers". Make it more of a place for
>> people who hack on and with OpenStack to hack and talk. Leave the
>> summit (without a forum) for presentations, marketing, pre-sales, etc.
>>
>> An issue raised with conflating the PTG and the Forum is that it would
>> remove the
>> [inward/outward
>> focus](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-07.log.html#t2018-03-07T08:20:17)
>> concept that is supposed to distinguish the two events.
>>
>> I guess it depends on how we define "we" but I've always assumed that
>> both events were for outward focus and that for any inward focussing
>> effort we ought to be able use asynchronous tools more.
> I tried bringing this up during the PTG feedback session last Thursday,
> but figured I would highlight it here (it also kinda resonates with
> Matt's note, too).
> 
> Several projects have suffered from aggressive attrition, where there
> are only a few developers from a few companies. I fear going back to
> midcycles will be extremely tough with less corporate sponsorship. The
> PTGs are really where smaller teams can sit down with developers from
> other projects and work on cross-project issues.

This ^. If we go back to the Design Summits, where these small projects
would get 3 or 4 40-minute slots and have very little chance of a
mid-cycle, it will cause teams issues.

>>
>> # Foundation and OCI
>>
>> Thierry mentioned
>> [yesterday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T09:08:04)
>>
>> that it is likely that the OpenStack Foundation will join the [Open
>> Container Initiative](https://www.opencontainers.org/) because of
>> [Kata](https://katacontainers.io/) and
>> [LOCI](https://governance.openstack.org/tc/reference/projects/loci.html).
>>
>> This segued into some brief concerns about the [attentions and
>> intentions of the
>> 

Re: [openstack-dev] [tc] [all] TC Report 18-10

2018-03-07 Thread Lance Bragstad


On 03/07/2018 06:12 AM, Chris Dent wrote:
>
> HTML: https://anticdent.org/tc-report-18-10.html
>
> This is a TC Report, but since everything that happened in its window
> of observation is preparing for the
> [PTG](https://www.openstack.org/ptg), being at the PTG, trying to get
> home from the PTG, and recovering from the PTG, perhaps think of this
> as "What the TC talked about [at] the PTG". As it is impossible to be
> everywhere at once (especially when the board meeting overlaps with
> other responsibilities) this will miss a lot of important stuff.  I
> hope there are other summaries.
>
> As you may be aware, it [snowed in
> Dublin](https://twitter.com/search?q=%23snowpenstack) causing plenty
> of disruption to the
> [PTG](https://twitter.com/search?q=%23openstackptg) but everyone
> (foundation staff, venue staff, hotel staff, attendees, uisce beatha)
> worked together to make a good week.
>
> # Talking about the PTG at the PTG
>
> At the [board
> meeting](http://lists.openstack.org/pipermail/foundation/2018-March/002570.html),
>
> the future of the PTG was a big topic. As currently constituted it
> presents some challenges:
>
> * It is difficult for some people to attend because of visa and other
>   travel related issues.
> * It is expensive to run and not everyone is convinced of the return
>   on investment.
> * Some people don't like it (they either miss the old way of doing the
>   design summit, or midcycles, or $OTHER).
> * Plenty of other reasons that I'm probably not aware of.
>
> This same topic was reviewed at [yesterday's office
> hours](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T09:19:32).
>
>
> For now, the next 2018 PTG is going to happen (destination unknown) but
> plans for 2019 are still being discussed.
>
> If you have opinions about the PTG, there will be an opportunity to
> express them in a forthcoming survey. Beyond that, however, it is
> important [that management at contributing
> companies](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T22:29:24)
>
> hear from more people (notably their employees) than the foundation
> about the value of the PTG.
>
> My own position is that of the three different styles of in-person
> events for technical contributors to OpenStack that I've experienced
> (design summit, mid-cycles, PTG), the PTG is the best yet. It minimizes
> distractions from other obligations (customer meetings, presentations,
> marketing requirements) while maximizing cross-project interaction.
>
> One idea, discussed
> [yesterday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T22:02:24)
>
> and [earlier
> today](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-07.log.html#t2018-03-07T05:07:20)
>
> was to have the PTG be open to technical participants of any sort, not
> just so-called "OpenStack developers". Make it more of a place for
> people who hack on and with OpenStack to hack and talk. Leave the
> summit (without a forum) for presentations, marketing, pre-sales, etc.
>
> An issue raised with conflating the PTG and the Forum is that it would
> remove the
> [inward/outward
> focus](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-07.log.html#t2018-03-07T08:20:17)
> concept that is supposed to distinguish the two events.
>
> I guess it depends on how we define "we" but I've always assumed that
> both events were for outward focus and that for any inward focussing
> effort we ought to be able use asynchronous tools more.
I tried bringing this up during the PTG feedback session last Thursday,
but figured I would highlight it here (it also kinda resonates with
Matt's note, too).

Several projects have suffered from aggressive attrition, where there
are only a few developers from a few companies. I fear going back to
midcycles will be extremely tough with less corporate sponsorship. The
PTGs are really where smaller teams can sit down with developers from
other projects and work on cross-project issues.
>
> # Foundation and OCI
>
> Thierry mentioned
> [yesterday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T09:08:04)
>
> that it is likely that the OpenStack Foundation will join the [Open
> Container Initiative](https://www.opencontainers.org/) because of
> [Kata](https://katacontainers.io/) and
> [LOCI](https://governance.openstack.org/tc/reference/projects/loci.html).
>
> This segued into some brief concerns about the [attentions and
> intentions of the
> Foundation](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T09:13:34),
>
> aggravated by the board meeting schedule conflict (there's agreement
> that will never ever happen again), and the rumor milling about the
> PTG.
>
> # Friday at the PTG with the TC
>
> The TC had 

Re: [openstack-dev] [keystone] [oslo] new unified limit library

2018-03-07 Thread Chris Friesen

On 03/07/2018 09:49 AM, Lance Bragstad wrote:



On 03/07/2018 09:31 AM, Chris Friesen wrote:

On 03/07/2018 08:58 AM, Lance Bragstad wrote:

Hi all,

Per the identity-integration track at the PTG [0], I proposed a new oslo
library for services to use for hierarchical quota enforcement [1]. Let
me know if you have any questions or concerns about the library. If the
oslo team would like, I can add an agenda item for next week's oslo
meeting to discuss.

Thanks,

Lance

[0] https://etherpad.openstack.org/p/unified-limits-rocky-ptg


Looks interesting.

Some complications related to quotas:

1) Nova currently supports quotas for a user/group tuple that can be
stricter than the overall quotas for that group.  As far as I know no
other project supports this.

By group, do you mean keystone group? Or are you talking about the quota
associated to a project?


Sorry, typo.  I meant  quotas for a user/project tuple, which can be stricter 
than the overall quotas for that project.



2) Nova and cinder also support the ability to set the "default" quota
class (which applies to any group that hasn't overridden their
quota).  Currently once it's set there is no way to revert back to the
original defaults.

This sounds like a registered limit [0], but again, I'm not exactly sure
what "group" means in this context. It sounds like group is supposed to
be a limit for a specific project?

[0]
https://docs.openstack.org/keystone/latest/admin/identity-unified-limits.html#registered-limits


Again, should be project instead of group.  And registered limits seem 
essentially analogous.




3) Neutron allows you to list quotas for projects with non-default
quota values.  This is useful, and I'd like to see it extended to
optionally just display the non-default quota values rather than all
quota values for that project.  If we were to support user/group
quotas this would be the only way to efficiently query which
user/group tuples have non-default quotas.

This might be something we can work into the keystone implementation
since it's still marked as experimental [1]. We have two APIs, one
returns the default limits, also known as a registered limit, for a
resource and one that returns the project-specific overrides. It sounds
like you're interested in the second one?

[1]
https://developer.openstack.org/api-ref/identity/v3/index.html#unified-limits


Again, should be user/project tuples.  Yes, in this case I'm talking about the 
project-specific ones.  (It's actually worse if you support user/project limits 
since with the current nova API you can potentially get combinatorial explosion 
if many users are part of many projects.)


I think it would be useful to be able to constrain this query to report limits 
for a specific project, (and a specific user if that will be supported.)


I also think it would be useful to be able to constrain it to report only the 
limits that have been explicitly set (rather than inheriting the default from 
the project or the registered limit).  Maybe it's already intended to work this 
way--if so that should be explicitly documented.
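
For example, a constrained query might look something like this (a sketch 
only; which filter parameters are actually supported is up to the 
implementation):

    GET /v3/limits?project_id={project_id}&resource_name=volumes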



4) In nova, keypairs belong to the user rather than the project.
(This is a bit messed up, but is the current behaviour.)  The quota
for these should really be outside of any group, or else we should
modify nova to make them belong to the project.

I think the initial implementation of a unified limit pattern is
targeting limits and quotas for things associated to projects. In the
future, we can probably expand on the limit information in keystone to
include user-specific limits, which would be great if nova wants to move
away from handling that kind of stuff.


The quota handling for keypairs is a bit messed up in nova right now, but it's 
legacy behaviour at this point.  It'd be nice to be able to get it right if 
we're switching to new quota management mechanisms.


Chris


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] new unified limit library

2018-03-07 Thread Chris Friesen

On 03/07/2018 10:33 AM, Tim Bell wrote:

Sorry, I remember more detail now... it was using the 'owner' of the VM as part 
of the policy rather than quota.

Is there a per-user/per-group quota in Nova?


Nova supports setting quotas for individual users within a project (as long as 
they are smaller than the project quota for that resource).  I'm not sure how 
much it's actually used, or if they want to get rid of it.  (Maybe melwitt can 
chime in.)  But it's there now.


As you can see at
"https://developer.openstack.org/api-ref/compute/#update-quotas", there's an
optional "user_id" field in the request.  Same thing for the "delete" and
"detailed get" operations.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-10

2018-03-07 Thread Matt Riedemann

On 3/7/2018 6:12 AM, Chris Dent wrote:

# Talking about the PTG at the PTG

At the [board
meeting](http://lists.openstack.org/pipermail/foundation/2018-March/002570.html), 


the future of the PTG was a big topic. As currently constituted it
presents some challenges:

* It is difficult for some people to attend because of visa and other
   travel related issues.
* It is expensive to run and not everyone is convinced of the return
   on investment.
* Some people don't like it (they either miss the old way of doing the
   design summit, or midcycles, or $OTHER).
* Plenty of other reasons that I'm probably not aware of.


All of this is true of the summit too, isn't it?

When talking about the PTG, I always hear someone say essentially
something like, "you know, things would be better if we did <describe
exactly what the old design summit format was>". It's funny how
we seem to only remember the last 6 months of anything.




This same topic was reviewed at [yesterday's office
hours](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T09:19:32). 



For now, the next 2018 PTG is going to happen (destination unknown) but
plans for 2019 are still being discussed.

If you have opinions about the PTG, there will be an opportunity to
express them in a forthcoming survey. Beyond that, however, it is
important [that management at contributing
companies](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T22:29:24) 


hear from more people (notably their employees) than the foundation
about the value of the PTG.

My own position is that of the three different styles of in-person
events for technical contributors to OpenStack that I've experienced
(design summit, mid-cycles, PTG), the PTG is the best yet. It minimizes
distractions from other obligations (customer meetings, presentations,
marketing requirements) while maximizing cross-project interaction.


Agree.



One idea, discussed
[yesterday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T22:02:24) 


and [earlier
today](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-07.log.html#t2018-03-07T05:07:20) 


was to have the PTG be open to technical participants of any sort, not
just so-called "OpenStack developers". Make it more of a place for
people who hack on and with OpenStack to hack and talk. Leave the
summit (without a forum) for presentations, marketing, pre-sales, etc.


I don't understand why some people/organizations/groups think that they
shouldn't attend the PTG - maybe it's something in the 'who should
attend' docs on the website? I hear time and again that operators think
they shouldn't attend, but we know a few do, and they are extremely
valuable in the developer discussions for their perspective on how they,
and other operators, run their clouds and what they want/need to see
happen on the dev side. The silo effect between the dev and ops
communities is very weird and counter-productive IMO. And the Forum
doesn't really solve that problem, because not everyone can get funding
to travel to the summit (Sydney, hello).


Case in point: the public cloud WG session held at the PTG on Monday
morning, where we went through the spreadsheet of missing features. I
think I was the only full-time core project developer in a room which
was otherwise operators (CERN, OVH, City Network and Vexxhost were
there), and it was much more productive actually sitting together going
through the list and checking things off which had either been completed
already, or were bugs instead of features, or where I could just say,
"this depends on that and Jane Doe is working on it, so follow up with
her" or "this is a known thing, it's been discussed, but it needs a
driver (project manager) - so that's your next step". That wouldn't have
been possible if the public cloud WG operators weren't attending the PTG.




An issue raised with conflating the PTG and the Forum is that it would
remove the
[inward/outward 
focus](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-07.log.html#t2018-03-07T08:20:17) 


concept that is supposed to distinguish the two events.

I guess it depends on how we define "we", but I've always assumed that
both events were for outward focus and that for any inward-focusing
effort we ought to be able to use asynchronous tools more.



I don't get the inward/outward thing. The first two days of the old design
summit (ops summit?) format were all cross-project stuff (docs, upgrades,
testing, ops feedback, etc). That's the same as what happens at the PTG
now too. The last three days of the old design summit (and now PTG) are
vertical project discussions for the most part, but Thursday has also
become a de-facto cross-project day for a lot of teams (nova/cinder,
nova/neutron, nova/ironic all happened on Thursday).

[openstack-dev] [vitrage] alarm and resource equivalence

2018-03-07 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi,

Since we need to design both the alarm equivalence/merge [1] and the
resource equivalence/merge [2] features these days, I thought it might be a
good idea to start with a use cases document. Let's agree on the
requirements, and then see if we can come up with a design that matches both
cases. I pushed the first draft of the use cases document [2], and I'll be
happy to get your comments.

[1] https://review.openstack.org/#/c/547931 
[2] https://review.openstack.org/#/c/550534 

Thanks, 
Ifat.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-sigs] [keystone] [oslo] new unified limit library

2018-03-07 Thread Tim Bell
I think nested quotas would give the same thing, i.e. you have a parent project 
for the group and child projects for the users. This would not need user/group 
quotas but continue with the ‘project owns resources’ approach.

It can be generalised to other use cases like the value add partner or the 
research experiment working groups 
(http://openstack-in-production.blogspot.fr/2017/07/nested-quota-models.html)

Tim

From: Zhipeng Huang 
Reply-To: "openstack-s...@lists.openstack.org" 

Date: Wednesday, 7 March 2018 at 17:37
To: "OpenStack Development Mailing List (not for usage questions)" 
, openstack-operators 
, "openstack-s...@lists.openstack.org" 

Subject: Re: [Openstack-sigs] [openstack-dev] [keystone] [oslo] new unified 
limit library

This is certainly a feature that will make Public Cloud providers very happy :)

On Thu, Mar 8, 2018 at 12:33 AM, Tim Bell 
> wrote:
Sorry, I remember more detail now... it was using the 'owner' of the VM as part 
of the policy rather than quota.

Is there a per-user/per-group quota in Nova?

Tim

-Original Message-
From: Tim Bell >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Wednesday, 7 March 2018 at 17:29
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library


There was discussion that Nova would deprecate the user quota feature since 
it really didn't fit well with the 'projects own resources' approach and was 
little used. At one point, some of the functionality stopped working and was 
repaired. The use case we had identified goes away if you have 2 level deep 
nested quotas (and we have now worked around it).

Tim
-Original Message-
From: Lance Bragstad >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Wednesday, 7 March 2018 at 16:51
To: 
"openstack-dev@lists.openstack.org" 
>
Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library



On 03/07/2018 09:31 AM, Chris Friesen wrote:
> On 03/07/2018 08:58 AM, Lance Bragstad wrote:
>> Hi all,
>>
[...]
>
> 1) Nova currently supports quotas for a user/group tuple that can be
> stricter than the overall quotas for that group.  As far as I know no
> other project supports this.
...
I think the initial implementation of a unified limit pattern is
targeting limits and quotas for things associated to projects. In the
future, we can probably expand on the limit information in keystone to
include user-specific limits, which would be great if nova wants to move
away from handling that kind of stuff.
>
> Chris
>
> 
__
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

Re: [openstack-dev] [keystone] [oslo] new unified limit library

2018-03-07 Thread Zhipeng Huang
This is certainly a feature that will make Public Cloud providers very happy :)

On Thu, Mar 8, 2018 at 12:33 AM, Tim Bell  wrote:

> Sorry, I remember more detail now... it was using the 'owner' of the VM as
> part of the policy rather than quota.
>
> Is there a per-user/per-group quota in Nova?
>
> Tim
>
> -Original Message-
> From: Tim Bell 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Wednesday, 7 March 2018 at 17:29
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library
>
>
> There was discussion that Nova would deprecate the user quota feature
> since it really didn't fit well with the 'projects own resources' approach
> and was little used. At one point, some of the functionality stopped
> working and was repaired. The use case we had identified goes away if you
> have 2 level deep nested quotas (and we have now worked around it).
>
> Tim
> -Original Message-
> From: Lance Bragstad 
> Reply-To: "OpenStack Development Mailing List (not for usage
> questions)" 
> Date: Wednesday, 7 March 2018 at 16:51
> To: "openstack-dev@lists.openstack.org"  openstack.org>
> Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit
> library
>
>
>
> On 03/07/2018 09:31 AM, Chris Friesen wrote:
> > On 03/07/2018 08:58 AM, Lance Bragstad wrote:
> >> Hi all,
> >>
> [...]
> >
> > 1) Nova currently supports quotas for a user/group tuple that
> can be
> > stricter than the overall quotas for that group.  As far as I
> know no
> > other project supports this.
> ...
> I think the initial implementation of a unified limit pattern is
> targeting limits and quotas for things associated to projects. In
> the
> future, we can probably expand on the limit information in
> keystone to
> include user-specific limits, which would be great if nova wants
> to move
> away from handling that kind of stuff.
> >
> > Chris
> >
> > 
> __
> >
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/
> openstack-dev
>
>
>
>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] new unified limit library

2018-03-07 Thread Tim Bell
Sorry, I remember more detail now... it was using the 'owner' of the VM as part 
of the policy rather than quota.

Is there a per-user/per-group quota in Nova?

Tim

-Original Message-
From: Tim Bell 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 7 March 2018 at 17:29
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library


There was discussion that Nova would deprecate the user quota feature since 
it really didn't fit well with the 'projects own resources' approach and was 
little used. At one point, some of the functionality stopped working and was 
repaired. The use case we had identified goes away if you have 2 level deep 
nested quotas (and we have now worked around it). 

Tim
-Original Message-
From: Lance Bragstad 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 7 March 2018 at 16:51
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library



On 03/07/2018 09:31 AM, Chris Friesen wrote:
> On 03/07/2018 08:58 AM, Lance Bragstad wrote:
>> Hi all,
>>
[...]
>
> 1) Nova currently supports quotas for a user/group tuple that can be
> stricter than the overall quotas for that group.  As far as I know no
> other project supports this.
...
I think the initial implementation of a unified limit pattern is
targeting limits and quotas for things associated to projects. In the
future, we can probably expand on the limit information in keystone to
include user-specific limits, which would be great if nova wants to move
away from handling that kind of stuff.
>
> Chris
>
> 
__
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] new unified limit library

2018-03-07 Thread Tim Bell

There was discussion that Nova would deprecate the user quota feature since it 
really didn't fit well with the 'projects own resources' approach and was 
little used. At one point, some of the functionality stopped working and was 
repaired. The use case we had identified goes away if you have 2 level deep 
nested quotas (and we have now worked around it). 

Tim
-Original Message-
From: Lance Bragstad 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 7 March 2018 at 16:51
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library



On 03/07/2018 09:31 AM, Chris Friesen wrote:
> On 03/07/2018 08:58 AM, Lance Bragstad wrote:
>> Hi all,
>>
[...]
>
> 1) Nova currently supports quotas for a user/group tuple that can be
> stricter than the overall quotas for that group.  As far as I know no
> other project supports this.
...
I think the initial implementation of a unified limit pattern is
targeting limits and quotas for things associated to projects. In the
future, we can probably expand on the limit information in keystone to
include user-specific limits, which would be great if nova wants to move
away from handling that kind of stuff.
>
> Chris
>
> __
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo.db] oslo_db "max_retries" option

2018-03-07 Thread Ben Nemec



On 02/27/2018 11:55 PM, Vitalii Solodilov wrote:

Hi folks!

I have a question about the oslo_db "max_retries" option.
https://github.com/openstack/oslo.db/blob/master/oslo_db/sqlalchemy/engines.py#L381
Why is only DBConnectionError considered a reason for reconnecting here?
Wouldn't it be a good idea to check for the more general DBError?
For example, the DB host may be down at the time of engine creation but
come back up some time later.


That sounds like it would result in a DBConnectionError since we would 
be unable to connect.  Is that not the case, and if so what exception is 
raised instead?
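
For readers following along, a small sketch of the distinction being
discussed; the retry loop below is illustrative, it mimics (not reuses)
the behaviour in engines.py that the question is about:

    # Illustrative only: oslo.db's DBConnectionError is a subclass of
    # DBError, so the ordering of the except clauses matters here.
    import time

    from oslo_db import exception as db_exc

    def create_engine_with_retries(create_engine_fn, max_retries=10,
                                   retry_interval=2):
        """Retry engine creation while the DB host is unreachable."""
        attempt = 0
        while True:
            try:
                return create_engine_fn()
            except db_exc.DBConnectionError:
                # The case engines.py retries on today.
                attempt += 1
                if 0 <= max_retries < attempt:
                    raise
                time.sleep(retry_interval)
            except db_exc.DBError:
                # The question above: should this broader class be
                # retried too, e.g. when the host is down at engine
                # creation time?
                raise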


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] new unified limit library

2018-03-07 Thread Lance Bragstad


On 03/07/2018 09:31 AM, Chris Friesen wrote:
> On 03/07/2018 08:58 AM, Lance Bragstad wrote:
>> Hi all,
>>
>> Per the identity-integration track at the PTG [0], I proposed a new oslo
>> library for services to use for hierarchical quota enforcement [1]. Let
>> me know if you have any questions or concerns about the library. If the
>> oslo team would like, I can add an agenda item for next week's oslo
>> meeting to discuss.
>>
>> Thanks,
>>
>> Lance
>>
>> [0] https://etherpad.openstack.org/p/unified-limits-rocky-ptg
>
> Looks interesting.
>
> Some complications related to quotas:
>
> 1) Nova currently supports quotas for a user/group tuple that can be
> stricter than the overall quotas for that group.  As far as I know no
> other project supports this.
By group, do you mean keystone group? Or are you talking about the quota
associated to a project?
>
> 2) Nova and cinder also support the ability to set the "default" quota
> class (which applies to any group that hasn't overridden their
> quota).  Currently once it's set there is no way to revert back to the
> original defaults.
This sounds like a registered limit [0], but again, I'm not exactly sure
what "group" means in this context. It sounds like group is supposed to
be a limit for a specific project?

[0]
https://docs.openstack.org/keystone/latest/admin/identity-unified-limits.html#registered-limits
>
> 3) Neutron allows you to list quotas for projects with non-default
> quota values.  This is useful, and I'd like to see it extended to
> optionally just display the non-default quota values rather than all
> quota values for that project.  If we were to support user/group
> quotas this would be the only way to efficiently query which
> user/group tuples have non-default quotas.
This might be something we can work into the keystone implementation
since it's still marked as experimental [1]. We have two APIs, one
returns the default limits, also known as a registered limit, for a
resource and one that returns the project-specific overrides. It sounds
like you're interested in the second one?

[1]
https://developer.openstack.org/api-ref/identity/v3/index.html#unified-limits
>
> 4) In nova, keypairs belong to the user rather than the project. 
> (This is a bit messed up, but is the current behaviour.)  The quota
> for these should really be outside of any group, or else we should
> modify nova to make them belong to the project.
I think the initial implementation of a unified limit pattern is
targeting limits and quotas for things associated to projects. In the
future, we can probably expand on the limit information in keystone to
include user-specific limits, which would be great if nova wants to move
away from handling that kind of stuff.
>
> Chris
>
> __
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] new unified limit library

2018-03-07 Thread Chris Friesen

On 03/07/2018 08:58 AM, Lance Bragstad wrote:

Hi all,

Per the identity-integration track at the PTG [0], I proposed a new oslo
library for services to use for hierarchical quota enforcement [1]. Let
me know if you have any questions or concerns about the library. If the
oslo team would like, I can add an agenda item for next week's oslo
meeting to discuss.

Thanks,

Lance

[0] https://etherpad.openstack.org/p/unified-limits-rocky-ptg


Looks interesting.

Some complications related to quotas:

1) Nova currently supports quotas for a user/group tuple that can be stricter 
than the overall quotas for that group.  As far as I know no other project 
supports this.


2) Nova and cinder also support the ability to set the "default" quota class 
(which applies to any group that hasn't overridden their quota).  Currently once 
it's set there is no way to revert back to the original defaults.


3) Neutron allows you to list quotas for projects with non-default quota values. 
 This is useful, and I'd like to see it extended to optionally just display the 
non-default quota values rather than all quota values for that project.  If we 
were to support user/group quotas this would be the only way to efficiently 
query which user/group tuples have non-default quotas.


4) In nova, keypairs belong to the user rather than the project.  (This is a bit 
messed up, but is the current behaviour.)  The quota for these should really be 
outside of any group, or else we should modify nova to make them belong to the 
project.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] [oslo] new unified limit library

2018-03-07 Thread Lance Bragstad
Hi all,

Per the identity-integration track at the PTG [0], I proposed a new oslo
library for services to use for hierarchical quota enforcement [1]. Let
me know if you have any questions or concerns about the library. If the
oslo team would like, I can add an agenda item for next week's oslo
meeting to discuss.

Thanks,

Lance

[0] https://etherpad.openstack.org/p/unified-limits-rocky-ptg
[1] https://review.openstack.org/#/c/550491/
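
Since the library's API is still just a proposal, here is a purely
hypothetical sketch of what hierarchical enforcement over a project tree
could look like; every name below is illustrative, not oslo code:

    # Hypothetical sketch only: the proposed library had no settled API
    # at this point, so all names here are made up.
    class Project(object):
        def __init__(self, name, limit, usage, children=()):
            self.name = name
            self.limit = limit    # limit applied to this project's subtree
            self.usage = usage    # resources used by this project alone
            self.children = list(children)

        def subtree_usage(self):
            return self.usage + sum(c.subtree_usage() for c in self.children)

    def can_claim(project, requested):
        # A claim must fit under the project's own limit; the caller
        # walks every ancestor so the whole hierarchy is enforced.
        return project.subtree_usage() + requested <= project.limit

    # Two-level nesting: a parent project with per-user child projects.
    child_a = Project("user-a", limit=10, usage=4)
    child_b = Project("user-b", limit=10, usage=7)
    parent = Project("team", limit=15, usage=0,
                     children=[child_a, child_b])

    # Claiming 3 more in user-a fits its own limit (4 + 3 <= 10) and the
    # parent's (11 + 3 <= 15).
    print(can_claim(child_a, 3) and can_claim(parent, 3))  # True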




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ptg] Release cycles vs. downstream consuming models discussion summary

2018-03-07 Thread Thierry Carrez
Hi everyone,

On Tuesday afternoon of the PTG week we had a track of discussions to
brainstorm how to better align our release cycle and stable branch
maintenance with the OpenStack downstream consumption models.

You can find the notes at:
https://etherpad.openstack.org/p/release-cycles-ptg-rocky

TL;DR summary:
* No consensus on longer / shorter release cycles
* Focus on FFU to make upgrades less painful
* Longer stable branch maintenance time (18 months for Ocata)
* Bootstrap the "extended maintenance" concept with common policy
* Group most impacted by release cadence are packagers/distros/vendors
* Need for finer user survey questions on upgrade models
* Need more data and more discussion, next discussion at Vancouver forum
* Release Management team tracks it between events

Details:

We started the discussion by establishing a taxonomy of consumption
models and upgrade patterns. This exercise showed that we are lacking
good data on how many people follow which. The user survey asks what
people are using to deploy and what they are running, but the questions
are a bit too simple (some deploys have a mix of versions, what should
they answer) or incomplete (some deployment mechanisms combine
high-level and low-level packaging). It also misses the question of the
upgrade pattern completely. I took the action of circling back with the
user survey folks to see if we could enrich (or add to) those questions
in the future.

Another point of data, Swift seems to be the only component with an
established pattern of upgrading at every intermediary release (as
opposed to random points on master, or every final release). It is
probably due to it being consumed standalone more than others.

We continued by discussing upgrade motivations and upgrade issues. A lot
of participants reported keeping current so that they don't paint
themselves into a corner with an impossible upgrade in the future.
Otherwise, not much surprise there.

The bulk of the discussion was around the impact of the release cadence.
The most obvious user impact (pressure to upgrade) would be mostly
covered by the work being done on fast-forward upgrades. Once that is a
proven model of upgrading OpenStack, the release cadence stops being
a problem and becomes an asset (more choice as to where you fast-forward
to). The other big user impact (support ending too early) would be
mostly covered by the work being done to extend maintenance on stable
branches. Again, the cadence of release is actually not the real cause
of the pain felt there, and there is already work in progress to
directly address the issue.

That said, the release cadence definitely has cost for people working
downstream from the OpenStack software release. Release marketing, and
the community-generated roadmap are both examples of per-release work.
We need to work on ways to make releases more business as usual and less
of an exceptional event there.

At this point the groups the most impacted by the release cadence are
those working on packaging OpenStack releases, either as part of the
open source project (OpenStackAnsible, tripleO...) or as part of a
vendor product. It can be a lot of work to do the
packaging/integration/test/certification work and releasing more often
means that this work needs to be done more often. It is difficult for
those to "skip" a release since users are generally asking for the
latest features to be made available.

We have also traditionally tied a number of other things to the release
cadence: COA, events, elections. That said nothing forces us to really
tie those one-for-one, although to keep our sanity we'd likely want to
keep one a multiple of the other.

Overall, the discussion on cadence concluded that ongoing work on
fast-forward upgrades and longer stable branch maintenance would
alleviate 80% of the release cadence pain with none of the drawbacks of
releasing less often, and therefore we should focus our efforts on that
for the moment.

The topic of discussion then switched to discussing stable branch
maintenance and LTS in more detail. The work done on tracking
upper-constraints finally paid off, with stable branches now breaking
less often. The stable team is therefore comfortable extending the life
of Ocata for 6 more months (for a total of 18 months).

This should make Ocata the first candidate for "extended maintenance" (a
new name for "LTS" that does not imply anyone providing "support").
Extended maintenance, as discussed by the group, would be about leaving
branches open for as long as there is someone caring for them (and closing
them once they are broken or abandoned). This inverts the current
chicken and egg resource issue on stable maintenance: we should
establish the concept and once it exists hopefully interested parties
will come. We discussed the need for a common policy around those
branches (like "no feature backport") so that there is still some
consistency. mriedem volunteered to work on a TC resolution to define
what we 

[openstack-dev] [nova] [placement] Notes on eventually extracting placement

2018-03-07 Thread Chris Dent


At the PTG we decided that while it was unlikely we could manage
extracting Placement to its own project during Rocky, it would be
useful to make incremental progress in that direction so the ground is
prepared for when we do get around to it. This means making sure there
are clear boundaries between what is "placement code" and what is
"nova code", limiting imports of "nova code" into "placement code",
and keeping (or moving) "placement code" under a single directory so
that an eventual lift and shift to a new repo can maintain history[1].

The placement etherpad for rocky has some additional info:
https://etherpad.openstack.org/p/nova-ptg-rocky-placement

I've already done a fair amount of experimentation around these ideas,
resulting in some blog posts [2] and some active reviews [3]. There's
a mix in those reviews of work to consolidate placement, and work to
make sure that while placement still exists in the nova hierarchy it
doesn't import nova code it doesn't want to use.

This leaves plenty of other things that need to happen:

* Migration strategies need to be determined, mostly for data, but
  also in general. It may be best to simply document the options and
  let people do what they like. One option is to simply carry on using
  the nova_api db, but this presents eventual problems for schema
  adjustments [4].

* There are functional tests which currently live at functional/db
  which tests the persistence layer handling the placement-related
  objects. These should probably move under
  functional/api/openstack/placement

* There are functional tests at api/openstack/placement that tests the
  scheduler report client (put there initially because they run the
  placement service using wsgi-intercept). These should be moved
  elsewhere.

* Resource class fields are used by both nova and placement (and
  eventually other things) so should probably get the same treatment
  as os-traits [5], so we need an os-resource-classes and adjustments
  in both placement and nova to use it. In the meantime, a pending
  patch [6] puts those fields at the top of nova. Switching to
  os-resource-classes will also allow us to remove the resource class
  cache, which is confusing to manage during this transition.

* We should experiment with strategies for how nova will do testing
  when placement is no longer in-repo. It should (dangerous word) be
  possible for placement to provide (or for nova to create) a fixture
  which is a wsgi-intercepted placement service with a real datastore
  (which is what is done now, but in-tree) but this is not something
  we traditionally do in functional tests, so it may be important to
  start migrating some functional tests (e.g., the stuff in
  test_servers) to integration (which could still be in nova's tree). A
  sketch of such a fixture follows this list.

* Eventually the work of creating a new repo, establishing status as
  an official project, setting up translation handling, and creating a
  core team will need to happen, but that can be put off until a time
  when we are actually doing the extraction.

* All the things I'm forgetting. There's plenty.
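
On the fixture point above, a rough sketch of what a wsgi-intercepted
placement fixture might look like; the class name, host/port and the app
loader are assumptions, and a real fixture would also set up a datastore:

    # A sketch only, under the assumptions stated above.
    import fixtures
    import wsgi_intercept
    from wsgi_intercept import requests_intercept

    class PlacementFixture(fixtures.Fixture):
        """Run a wsgi-intercepted placement service at a fake host:port."""

        def __init__(self, app_loader, host='placement.test', port=80):
            super(PlacementFixture, self).__init__()
            self.app_loader = app_loader  # callable returning the WSGI app
            self.host = host
            self.port = port

        def _setUp(self):
            # Route requests-library traffic for host:port into the app
            # in-process instead of over a real socket.
            requests_intercept.install()
            self.addCleanup(requests_intercept.uninstall)
            wsgi_intercept.add_wsgi_intercept(self.host, self.port,
                                              self.app_loader)
            self.addCleanup(wsgi_intercept.remove_wsgi_intercept,
                            self.host, self.port)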

As stated at the PTG these are not things I can complete by myself
(especially the things I'm forgetting).  Volunteers are welcome and
encouraged for the stuff above. Good first steps are reading the blog
posts linked below, and reviewing the patches linked below. This will
establish some of the issues and reveal things I'm forgetting.

Thanks to everyone who has provided feedback on this stuff, either at
the PTG, on the reviews and blogs posts, or elsewhere. Even though we
can't magically do the extraction _right now_ the process of
experimentation and refactoring is improving placement in place and
setting the foundation for doing it later.

The footnotes:

[1] Some incantations with 'git filter-branch' ought to allow this.

[2] Placement extraction related blog posts:
* A series on placement in a container (which helps identify
  boundaries):
  * https://anticdent.org/placement-container-playground.html
  * https://anticdent.org/placement-container-playground-2.html
  * https://anticdent.org/placement-container-playground-3.html
* Notes on extraction: https://anticdent.org/placement-extraction.html
* Notes on scale (which also helps to identify boundaries):
  https://anticdent.org/placement-scale-fun.html

[3] * Using a simplified, non-nova, FaultWrapper wsgi middleware:
  https://review.openstack.org/533752

* Moving object, database and exception handing into the placement
  hierarchy:
  https://review.openstack.org/#/c/540049/

* Refactoring wsgi-related modules to limit nova's presence in
  placement during the transition:
  https://review.openstack.org/#/c/533797/

* Cleaning up db imports so that import database models doesn't
  import a big pile of nova:
  https://review.openstack.org/#/c/533797/

[4] We keep coming up with reasons to change the schema. The latest is
adding generations to consumers.

[5] 

Re: [openstack-dev] [Interop-wg] [QA] [PTG] [Interop] [Designate] [Heat] [TC]: QA PTG Summary- Interop test for adds-on project

2018-03-07 Thread Ghanshyam Mann
On Wed, Mar 7, 2018 at 10:15 PM, Andrea Frittoli 
wrote:

>
>
> On Wed, Mar 7, 2018 at 12:42 PM Ghanshyam Mann 
> wrote:
>
>>  Hi All,
>>
>> QA had a discussion at the Dublin PTG about the location of interop
>> add-on tests. First of all, thanks to everyone (especially markvoelker,
>> dhellmann, mugsie) for joining the sessions, and I am glad we concluded
>> things and agreed on a solution.
>>
>> The discussion carried forward from the ML discussion [1], with the goal
>> of reaching agreement on where interop add-on program tests should live.
>>
>> So far only 2 projects (heat and designate) are on the interop add-on
>> program list. After discussion and input from all stakeholders, the QA
>> team agreed to host these 2 projects' interop tests. Neither project
>> has many tests as of now, so the QA team can accommodate hosting them.
>>
>> Along with that agreement, we had a few more technical points to
>> consider while moving the designate and heat interop tests into the
>> Tempest repo. All interop tests added to Tempest must be Tempest-like
>> tests, meaning tests written using Tempest interfaces and guidelines.
>> For example, heat has its tests in heat-tempest-plugin, based on gabbi;
>> to move the heat interop tests to Tempest, they have to be rewritten as
>> Tempest-like tests. If we accepted non-Tempest-like tests into Tempest,
>> they would be too difficult for the Tempest team to maintain.
>>
>> The project teams (designate and heat) and the QA team will work
>> closely to move the interop tests to the Tempest repo, which might need
>> some extra work to standardize their tests and the interfaces they use,
>> like service clients.
>>
>> In the future, if there are more interop add-on program proposals, we
>> will need to analyse the situation again regarding QA team bandwidth.
>> The TC, QA, or interop team needs to raise the resource requirement
>> with the Board of Directors before any more add-on programs are
>> proposed. If the QA team has too little resource and review bandwidth,
>> we cannot accept more interop programs until QA gets the resources to
>> maintain the new interop tests.
>>
>> Overall Summary:
>> - The QA team agreed to host the interop tests for heat and designate
>> in the Tempest repo.
>> - The existing TC resolution needs to be adjusted regarding the QA
>> team's resource/bandwidth requirement. If more add-on programs are
>> proposed, the QA team will not accept the new interop tests if the
>> bandwidth issue still exists at that time.
>> - Tempest will document a clear process for adding interop tests and
>> other items that need care.
>> - Project teams are to make their tests and interfaces Tempest-like
>> and conform to stable interface standards. The Tempest team will work
>> closely with Designate and Heat and help them with this.
>
> Thanks for the summary, Ghanshyam!
>
> We had some follow up discussion on Friday about this, after the Heat team
> expressed their concern about proceeding with the plan we discussed during
> the session on Wednesday.
> A group of representatives of the Heat, Designate and Interop teams met
> with the TC and agreed on reviving the resolution started by mugsie in
> https://review.openstack.org/#/c/521602 to add an alternative to hosting
> tests in the Tempest repo. Unfortunately I was only there for the last few
> minutes of the meeting, but I understand that the proposal drafted there
> was to allow teams to have interop-specific Tempest plugins
> co-owned by the QA/Interop/add-on project teams. mugsie has updated the
> resolution accordingly and I think the discussion on that can continue in
> gerrit directly.
>

Thanks for pointing that out. I feel co-ownership does not solve any issue
here, and I am a little worried that it makes controlling the tests more
difficult. If the tests are not Tempest-like tests, then it is difficult
for the QA team to control them or provide input. And if they are also
owned by the project team, how do we make sure test modifications by
non-project teams are controlled? I am all for a separate plugin, which is
easier for the QA team, but giving ownership to QA heads in the same
direction (the QA team maintaining interop add-on tests) in a more
difficult way.
I will check and add my points on gerrit.

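To make "Tempest-like" concrete, here is a rough sketch of a stable
service client written against tempest.lib interfaces; the 'zones'
endpoint and client name are illustrative, not designate's actual plugin
code:

    # Illustrative sketch of a Tempest-like stable service client.
    from oslo_serialization import jsonutils as json

    from tempest.lib.common import rest_client

    class ZonesClient(rest_client.RestClient):
        """A service client following tempest.lib guidelines."""

        def list_zones(self):
            # RestClient.get returns (response, body); checking the
            # status code is part of the service client guidelines.
            resp, body = self.get('zones')
            self.expected_success(200, resp.status)
            return rest_client.ResponseBody(resp, json.loads(body))
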
>
> Just to clarify, nothing has been decided yet, but at least the new
> proposal was received positively by all parties involved in the discussion
> on Friday.
>
> Action Items:
>> - mugsie to abandon https://review.openstack.org/#/c/521602 with quick
>> summary of discussion here at PTG
>>
> This is not valid anymore, we should discuss this further and hopefully
> reach an agreement.
>
>
>> - markvoelker to write up clarification to InteropWG process stating that
>> tests should be moved into Tempest before being proposed to the BoD
>> - markvoelker to work with gmann before next InteropWG+BoD discussion to
>> frame up a note about resourcing testing for add-on/vertical programs
>> - 

[openstack-dev] [PTG][QA] QA PTG Rocky Summary

2018-03-07 Thread Ghanshyam Mann
Hi All,

First of all, thanks for joining the Rocky PTG and making it really productive
and successful. I am writing the QA PTG summary. We started assigning an
'owner' for each working item so that we have a single point of contact to
track them. That will help each priority item get completed on time.

1. Queens Retrospective
-
We discussed the Queens Retrospective at the start of the PTG. We went
through 1. what went well and 2. what needs to improve, and gathered some
concrete action items.

Action Items:
- chandankumar: Use newly tempest plugin jobs for stable branches for other
projects.
- felipemonteiro: stable branches jobs needs to be done for Patrole for
in-repo zuul gate.
- masayukig: Mail to ML to abandon no active patches, then put a comment
and record the list, then abandon them
- gmann: will start the some etherpad and ML to notify the projects using
plugins for best practice and improve the interfaces used by them.
- gmann: will check with SamP on progress on destructive HA testing.
- mguiney: will start to put some unit tests for CLIs

We will be tracking the above action items in our QA meeting so that we
really work on them and do not forget them now that the PTG is finished.
Owner: gmann
Etherpad link: https://etherpad.openstack.org/p/qa-queens-retrospective


2. Zuul v3 native jobs
---
andreaf explained the devstack and tempest base jobs and the migration of
jobs. That was a really helpful and good learning session. The basic idea
is to finish the devstack and tempest base jobs to make them available for
project-specific jobs.

We decided to have only 2 devstack base jobs: 1. a base abstract job, and
2. a base job for single-node and multinode jobs. Inherited jobs can adjust
the single-node and multinode setup with the nodeset var.

Action Items:
- andreaf to merge current hierarchy of devstack jobs to make a single
job for single node and multinode jobs.
Owner: andreaf
Etherpad link:
https://etherpad.openstack.org/p/qa-rocky-ptg-zuul-v3-native-jobs


3. Cold upgrades capabilities (Rocky community goal)
---
This is not a Rocky goal now, but we did talk about it a little. We
discussed preparation for grenade plugin development. Masayuki will check
whether we have enough documentation and process written for implementing
the plugins.

Action Items:
- masayukig to check the current documentation about grenade plugins
whether it is enough for projects to implement plugins.
Owner: masayukig
Etherpad link:
https://etherpad.openstack.org/p/qa-rocky-ptg-cold-upgrades-capabilities


4. Interop test for adds-on project
-
I sent a separate detailed mail on this with the outcomes and action items;
please refer to -
http://lists.openstack.org/pipermail/openstack-dev/2018-March/127994.html


5. Remove Deprecated APIs tests from Tempest
-
We talked about the testing of deprecated APIs in tempest and on stable
branches. We concluded to test all deprecated APIs on master as well as on
all stable branches. Until they are actually removed, we should test the
APIs whatever state they are in. A few APIs, like glance v1 and keystone
admin v2, are being skipped now, so we are going to enable tests for these
APIs on each corresponding stable branch and on master.

Volume API testing will work a little differently. We will test v3 as the
default in all jobs with all existing tests. The v2 APIs will be tested
with a new job running all current tests against v2 endpoints on the
tempest and cinder CI.

Action Items:
- gmann to make all glance v2 tests run on all jobs
- gmann to make keystone v2 admin tests run in all jobs
- gmann to make volume tests test v3 as default and set up a new v2
job on tempest and cinder
Owner: gmann
Etherpad link:
https://etherpad.openstack.org/p/qa-rocky-ptg-remove-deprecated-apis-tests


6. Backlogs from Queens Cycle
-
We went through the backlog items from the Queens release which we discussed
at the Denver PTG but did not complete.
We picked up the items we still want to do, but we need volunteers to take
them. I will publish those items to the ML and find some volunteers if I
can.

Action Items:
- gmann to send the backlog items to the ML to find volunteers.
Owner: Need Volunteer to pickup the items
Etherpad link: https://etherpad.openstack.org/p/qa-rocky-ptg-queens-backlogs



7. Consuming Kolla tempest container source image in CI
-
The kolla tempest image packages tempest inside a container. We can use it
to test the image and the process of creating the image.
For that, we can add a job on the tempest and kolla CI that uses the kolla
Tempest image and runs a few or more tempest tests.


Re: [openstack-dev] [Nova] [Cyborg] Tracking multiple functions

2018-03-07 Thread Jay Pipes

On 03/06/2018 09:36 PM, Alex Xu wrote:
2018-03-07 10:21 GMT+08:00 Alex Xu >:




2018-03-06 22:45 GMT+08:00 Mooney, Sean K >:


From: Matthew Booth [mailto:mbo...@redhat.com]
Sent: Saturday, March 3, 2018 4:15 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Nova] [Cyborg] Tracking multiple
functions

On 2 March 2018 at 14:31, Jay Pipes > wrote:

On 03/02/2018 02:00 PM, Nadathur, Sundar wrote:

Hello Nova team,

  During the Cyborg discussion at Rocky PTG, we
proposed a flow for FPGAs wherein the request spec asks
for a device type as a resource class, and optionally a
function (such as encryption) in the extra specs. This
does not seem to work well for the usage model that I’ll
describe below.

An FPGA device may implement more than one function. For
example, it may implement both compression and
encryption. Say a cluster has 10 devices of device type
X, and each of them is programmed to offer 2 instances
of function A and 4 instances of function B. More
specifically, the device may implement 6 PCI functions,
with 2 of them tied to function A, and the other 4 tied
to function B. So, we could have 6 separate instances
accessing functions on the same device.


Does this imply that Cyborg can't reprogram the FPGA at all?

[Mooney, Sean K] Cyborg is intended to support fixed-function
accelerators also, so it will not always be able to program the
accelerator. In the case where an FPGA is preprogrammed with a
multi-function bitstream that is statically provisioned, Cyborg
will not be able to reprogram the slot if any of the functions
from that slot are already allocated to an instance. In that
case it will have to treat it like a fixed-function device and
simply allocate an unused VF of the correct type, if available.





In the current flow, the device type X is modeled as a
resource class, so Placement will count how many of them
are in use. A flavor for ‘RC device-type-X + function A’
will consume one instance of the RC device-type-X.  But
this is not right because this precludes other functions
on the same device instance from getting used.

One way to solve this is to declare functions A and B as
resource classes themselves and have the flavor request
the function RC. Placement will then correctly count the
function instances. However, there is still a problem:
if the requested function A is not available, Placement
will return an empty list of RPs, but we need some way
to reprogram some device to create an instance of
function A.


Clearly, nova is not going to be reprogramming devices with
an instance of a particular function.

Cyborg might need to have a separate agent that listens to
the nova notifications queue and upon seeing an event that
indicates a failed build due to lack of resources, then
Cyborg can try and reprogram a device and then try
rebuilding the original request.


It was my understanding from that discussion that we intend to
insert Cyborg into the spawn workflow for device configuration
in the same way that we currently insert resources provided by
Cinder and Neutron. So while Nova won't be reprogramming a
device, it will be calling out to Cyborg to reprogram a device,
and waiting while that happens.

My understanding is (and I concede some areas are a little
hazy):

* The flavors says device type X with function Y

* Placement tells us everywhere with device type X

* A weigher orders these by devices which already have an
available function Y (where is this metadata stored?)

* Nova schedules to host Z

* Nova host Z asks cyborg for a local function Y and blocks

   * Cyborg hopefully returns function Y which is already
available

   * If not, Cyborg reprograms 
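
To illustrate the weighing step in that flow, a pure-Python sketch; this
is not nova's weigher plugin interface, and where the per-host function
metadata lives is exactly the open question above:

    # Illustrative only: mimics "order hosts by already-available
    # function Y" with made-up host data.
    hosts = [
        {"name": "host-1", "functions": {"encrypt": 0, "compress": 2}},
        {"name": "host-2", "functions": {"encrypt": 1, "compress": 0}},
        {"name": "host-3", "functions": {}},  # would need reprogramming
    ]

    def weigh(host, wanted):
        # Prefer hosts that already expose an instance of the wanted
        # function, so Cyborg does not have to reprogram a device.
        return host["functions"].get(wanted, 0)

    ordered = sorted(hosts, key=lambda h: weigh(h, "encrypt"),
                     reverse=True)
    print([h["name"] for h in ordered])  # ['host-2', 'host-1', 'host-3']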

Re: [openstack-dev] Fwd: [Release-job-failures] Release of openstack/paunch failed

2018-03-07 Thread Jeremy Stanley
On 2018-03-07 00:55:15 -0600 (-0600), Sean McGinnis wrote:
[...]
> When someone from infra gets a chance, would you be able to
> reenqueue this job?

I'm just now catching up on E-mail, but I reenqueued this tag at
11:55 UTC after Thierry brought it to my attention in the
#openstack-release channel.
-- 
Jeremy Stanley


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Interop-wg] [QA] [PTG] [Interop] [Designate] [Heat] [TC]: QA PTG Summary- Interop test for adds-on project

2018-03-07 Thread Andrea Frittoli
On Wed, Mar 7, 2018 at 12:42 PM Ghanshyam Mann 
wrote:

>  Hi All,
>
> QA had a discussion at the Dublin PTG about the location of interop
> add-on tests. First of all, thanks to everyone (especially markvoelker,
> dhellmann, mugsie) for joining the sessions, and I am glad we concluded
> things and agreed on a solution.
>
> The discussion carried forward from the ML discussion [1], with the goal
> of reaching agreement on where interop add-on program tests should live.
>
> So far only 2 projects (heat and designate) are on the interop add-on
> program list. After discussion and input from all stakeholders, the QA
> team agreed to host these 2 projects' interop tests. Neither project
> has many tests as of now, so the QA team can accommodate hosting them.
>
> Along with that agreement, we had a few more technical points to
> consider while moving the designate and heat interop tests into the
> Tempest repo. All interop tests added to Tempest must be Tempest-like
> tests, meaning tests written using Tempest interfaces and guidelines.
> For example, heat has its tests in heat-tempest-plugin, based on gabbi;
> to move the heat interop tests to Tempest, they have to be rewritten as
> Tempest-like tests. If we accepted non-Tempest-like tests into Tempest,
> they would be too difficult for the Tempest team to maintain.
>
> The project teams (designate and heat) and the QA team will work
> closely to move the interop tests to the Tempest repo, which might need
> some extra work to standardize their tests and the interfaces they use,
> like service clients.
>
> In the future, if there are more interop add-on program proposals, we
> will need to analyse the situation again regarding QA team bandwidth.
> The TC, QA, or interop team needs to raise the resource requirement
> with the Board of Directors before any more add-on programs are
> proposed. If the QA team has too little resource and review bandwidth,
> we cannot accept more interop programs until QA gets the resources to
> maintain the new interop tests.
>
> Overall Summary:
> - The QA team agreed to host the interop tests for heat and designate
> in the Tempest repo.
> - The existing TC resolution needs to be adjusted regarding the QA
> team's resource/bandwidth requirement. If more add-on programs are
> proposed, the QA team will not accept the new interop tests if the
> bandwidth issue still exists at that time.
> - Tempest will document a clear process for adding interop tests and
> other items that need care.
> - Project teams are to make their tests and interfaces Tempest-like
> and conform to stable interface standards. The Tempest team will work
> closely with Designate and Heat and help them with this.

Thanks for the summary, Ghanshyam!
We had some follow up discussion on Friday about this, after the Heat team
expressed their concern about proceeding with the plan we discussed during
the session on Wednesday.
A group of representatives of the Heat, Designate and Interop teams met
with the TC and agreed on reviving the resolution started by mugsie in
https://review.openstack.org/#/c/521602 to add an alternative to hosting
tests in the Tempest repo. Unfortunately I was only there for the last few
minutes of the meeting, but I understand that the proposal drafted there
was to allow teams to have interop-specific Tempest plugins co-owned by
the QA/Interop/add-on project teams. mugsie has updated the resolution
accordingly and I think the discussion on that can continue in gerrit
directly.

Just to clarify, nothing has been decided yet, but at least the new
proposal was received positively by all parties involved in the discussion
on Friday.

Action Items:
> - mugsie to abandon https://review.openstack.org/#/c/521602 with quick
> summary of discussion here at PTG
>
This is not valid anymore, we should discuss this further and hopefully
reach an agreement.


> - markvoelker to write up clarification to InteropWG process stating that
> tests should be moved into Tempest before being proposed to the BoD
> - markvoelker to work with gmann before next InteropWG+BoD discussion to
> frame up a note about resourcing testing for add-on/vertical programs
> - dhellmann to adjust the TC resolution for resource requirement in QA
> when new adds-on program is being proposed
> - project teams to convert  interop test and  framework as per tempest
> like tests and propose to add to tempest repo.
>
If the new resolution is agreed on, this will become one of the options.


> - gmann to define process in QA about interop tests addition and
> maintainance
>
This is still an option so you may still want to do it.

Andrea Frittoli (andreaf)

>
> We have added this as one of the monitoring/helping item for QA to make
> sure it is done without delay.  Let's work together to finish this
> activity.
>
> Discussion Details:
> https://etherpad.openstack.org/p/qa-rocky-ptg-Interop-test-for-adds-on-project
>
> ..1
> 

Re: [openstack-dev] [horizon][ptg] Horizon PTG Highlights

2018-03-07 Thread Jeremy Stanley
On 2018-03-07 00:08:39 +0200 (+0200), Ivan Kolodyazhny wrote:
[...]
> - we agreed to go forward with Eventlet by default and make it
>configurable to allow native Python threads which are used now
>- let's ask the community about their experience with Eventlet
>- Eventlet is not the best option for Python 3 at the moment
[...]

There was a discussion[*] during TC office hours three weeks ago
wherein we rehashed a general desire to see eventlet usage decline
within OpenStack services (we recognize that the volunteer workforce
needed to rearchitect existing eventlet-using services simply
doesn't exist, though it was suggested in jest as a potential
community goal). At a minimum, there seemed to be some consensus
that we should strongly discourage new uses of eventlet because its
stdlib monkey-patching has created all manner of incompatibilities
with other libraries in the past. Most recently it seems to be
hampering etcd adoption, which we had as a community previously
agreed on using to provide a consistent DLM implementation across
projects.

[*] 
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-02-15.log.html#t2018-02-15T15:12:44
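
For anyone who hasn't been bitten by it, the monkey-patching in question
is process-wide; a minimal illustration:

    # monkey_patch() swaps standard library modules for eventlet's
    # green versions across the whole process, which is what surprises
    # other libraries.
    import eventlet
    eventlet.monkey_patch()

    from eventlet import patcher
    import socket  # now eventlet's green socket, not the stdlib one

    print(patcher.is_monkey_patched('socket'))  # True
    print(socket.socket)  # a green socket class, not the stdlib one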

-- 
Jeremy Stanley


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] [PTG] [Interop] [Designate] [Heat] [TC]: QA PTG Summary- Interop test for adds-on project

2018-03-07 Thread Chris Dent

On Wed, 7 Mar 2018, Rabi Mishra wrote:


The projects (designate and heat) and the QA team will work closely to move
the interop tests to the Tempest repo, which might need some extra work to
standardize their tests and the interfaces they use, such as service
clients.



Though I've not been part of any of these discussions, this seems to be
exactly the opposite of what I've been given to understand by the team, i.e.
Heat is not rewriting the gabbi API tests used by the Trademark program, but
would create a new Tempest plugin (new repo
'orchestration-trademark-tempest-plugin') to host the heat-related tests
that are currently candidates for the Trademark program?


There was additional discussion on Friday with people from the TC,
trademark program, heat and QA that resulted in the plan you
describe, which is being codified at:
https://review.openstack.org/#/c/521602/

--
Chris Dent  (⊙_⊙') https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [docs] Documentation meeting canceled

2018-03-07 Thread Petr Kovar
Hi all,

Canceling today's docs meeting as there is not much to share beyond what
was in the PTG summary I sent. 

As always, we're in #openstack-doc if you want to talk to us!

Thanks,
pk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] [PTG] [Interop] [Designate] [Heat] [TC]: QA PTG Summary- Interop test for adds-on project

2018-03-07 Thread Rabi Mishra
On Wed, Mar 7, 2018 at 6:10 PM, Ghanshyam Mann 
wrote:

>  Hi All,
>
> QA had a discussion at the Dublin PTG about the location of interop add-on
> tests. First of all, thanks to everyone (especially markvoelker, dhellmann,
> and mugsie) for joining the sessions; I am glad we concluded things and
> agreed on a solution.
>
> The discussion carried forward from the ML thread [1], with the goal of
> reaching agreement on where interop add-on program tests should live.
>
> So far only two projects (heat and designate) are on the interop add-on
> program list. After discussion and input from all stakeholders, the QA
> team agreed to host these two projects' interop tests. Neither project has
> many tests at the moment, so the QA team can accommodate hosting them.
>
> Along with that agreement, we had a few more technical points to consider
> while moving the designate and heat interop tests into the Tempest repo.
> All interop tests added to Tempest must be Tempest-like tests, meaning
> tests written using Tempest interfaces and guidelines. For example, heat
> has its tests in heat-tempest-plugin, based on gabbi; to move the heat
> interop tests to Tempest, they have to be rewritten as Tempest-like tests.
> Accepting non-Tempest-like tests into Tempest would make them too
> difficult for the Tempest team to maintain.
>
> The projects (designate and heat) and the QA team will work closely to
> move the interop tests to the Tempest repo, which might need some extra
> work to standardize their tests and the interfaces they use, such as
> service clients.
>

Though I've not been part of any of these discussions, this seems to be
exactly the opposite of what I've been given to understand by the team, i.e.
Heat is not rewriting the gabbi API tests used by the Trademark program, but
would create a new Tempest plugin (new repo
'orchestration-trademark-tempest-plugin') to host the heat-related tests
that are currently candidates for the Trademark program?

>
> In the future, if there are more interop add-on program proposals, we
> need to analyse the situation again regarding QA team bandwidth. The TC,
> QA, or interop team needs to raise the resource requirement with the
> Board of Directors before any new add-on program is proposed. If the QA
> team lacks resources and review bandwidth, we cannot accept more interop
> programs until QA gets more resources to maintain the new interop tests.
>
> Overall Summary:
> - QA team agreed to host the interop tests for heat and designate in the
> Tempest repo.
> - The existing TC resolution needs to be adjusted to cover the QA team's
> resource/bandwidth requirement. If more add-on programs are proposed, the
> QA team will not accept the new interop tests if the bandwidth issue
> still exists at that time.
> - Tempest will document a clear process for adding interop tests and
> other items needing care.
> - Project teams to bring their tests and interfaces up to Tempest-like
> test and stable-interface standards. The Tempest team will work closely
> with Designate and Heat to help with this.
>
> Action Items:
> - mugsie to abandon https://review.openstack.org/#/c/521602 with a quick
> summary of the discussion here at the PTG
> - markvoelker to write up a clarification to the InteropWG process stating
> that tests should be moved into Tempest before being proposed to the BoD
> - markvoelker to work with gmann before the next InteropWG+BoD discussion
> to frame up a note about resourcing testing for add-on/vertical programs
> - dhellmann to adjust the TC resolution for the QA resource requirement
> when a new add-on program is proposed
> - project teams to convert their interop tests and framework to
> Tempest-like tests and propose adding them to the Tempest repo.
> - gmann to define the QA process for interop test addition and
> maintenance
>
> We have added this as one of the monitoring/helping items for QA to make
> sure it is done without delay.  Let's work together to finish this
> activity.
>
> Discussion Details: https://etherpad.openstack.org/p/qa-rocky-ptg-Interop-
> test-for-adds-on-project
>
> ..1 http://lists.openstack.org/pipermail/openstack-dev/2018-
> January/126146.html
>
> -gmann
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Rabi Mishra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] [PTG] [Interop] [Designate] [Heat] [TC]: QA PTG Summary- Interop test for adds-on project

2018-03-07 Thread Ghanshyam Mann
 Hi All,

QA had a discussion at the Dublin PTG about the location of interop add-on
tests. First of all, thanks to everyone (especially markvoelker, dhellmann,
and mugsie) for joining the sessions; I am glad we concluded things and
agreed on a solution.

The discussion carried forward from the ML thread [1], with the goal of
reaching agreement on where interop add-on program tests should live.

So far only two projects (heat and designate) are on the interop add-on
program list. After discussion and input from all stakeholders, the QA team
agreed to host these two projects' interop tests. Neither project has many
tests at the moment, so the QA team can accommodate hosting them.

Along with that agreement, we had a few more technical points to consider
while moving the designate and heat interop tests into the Tempest repo. All
interop tests added to Tempest must be Tempest-like tests, meaning tests
written using Tempest interfaces and guidelines. For example, heat has its
tests in heat-tempest-plugin, based on gabbi; to move the heat interop tests
to Tempest, they have to be rewritten as Tempest-like tests. Accepting
non-Tempest-like tests into Tempest would make them too difficult for the
Tempest team to maintain.
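
For readers unfamiliar with the distinction, here is a rough sketch of what
a Tempest-like test looks like, built on tempest's stable tempest.lib
interfaces; the class name, UUID and client call are invented for
illustration and are not actual heat or designate tests:

    # Hypothetical Tempest-like test using tempest's stable interfaces.
    from tempest.lib import base
    from tempest.lib import decorators


    class ExampleZonesTest(base.BaseTestCase):
        """Invented interop-style test case, for illustration only."""

        @decorators.idempotent_id('00000000-0000-0000-0000-000000000000')
        def test_list_zones(self):
            # A real test would call a stable service client, e.g.
            # self.zones_client.list_zones(), and assert on the response.
            pass

A gabbi test, by contrast, is a declarative YAML document executed by the
gabbi runner rather than a Python test class, which is why moving such
tests into Tempest means rewriting them.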

The projects (designate and heat) and the QA team will work closely to move
the interop tests to the Tempest repo, which might need some extra work to
standardize their tests and the interfaces they use, such as service
clients.

In the future, if there are more interop add-on program proposals, we need
to analyse the situation again regarding QA team bandwidth. The TC, QA, or
interop team needs to raise the resource requirement with the Board of
Directors before any new add-on program is proposed. If the QA team lacks
resources and review bandwidth, we cannot accept more interop programs
until QA gets more resources to maintain the new interop tests.

Overall Summary:
- QA team agreed to host the interop tests for heat and designate in the
Tempest repo.
- The existing TC resolution needs to be adjusted to cover the QA team's
resource/bandwidth requirement. If more add-on programs are proposed, the
QA team will not accept the new interop tests if the bandwidth issue still
exists at that time.
- Tempest will document a clear process for adding interop tests and other
items needing care.
- Project teams to bring their tests and interfaces up to Tempest-like test
and stable-interface standards. The Tempest team will work closely with
Designate and Heat to help with this.

Action Items:
- mugsie to abandon https://review.openstack.org/#/c/521602 with a quick
summary of the discussion here at the PTG
- markvoelker to write up a clarification to the InteropWG process stating
that tests should be moved into Tempest before being proposed to the BoD
- markvoelker to work with gmann before the next InteropWG+BoD discussion to
frame up a note about resourcing testing for add-on/vertical programs
- dhellmann to adjust the TC resolution for the QA resource requirement when
a new add-on program is proposed
- project teams to convert their interop tests and framework to Tempest-like
tests and propose adding them to the Tempest repo.
- gmann to define the QA process for interop test addition and
maintenance

We have added this as one of the monitoring/helping items for QA to make
sure it is done without delay.  Let's work together to finish this
activity.

Discussion Details:
https://etherpad.openstack.org/p/qa-rocky-ptg-Interop-test-for-adds-on-project

..1
http://lists.openstack.org/pipermail/openstack-dev/2018-January/126146.html


-gmann
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] PTG Summary

2018-03-07 Thread Dougal Matthews
On 7 March 2018 at 09:28, Dougal Matthews  wrote:

> Hey Mistralites (maybe?),
>
> I have been through the etherpad from the PTG and attempted to expand on
> the topics with details that I remember. If I have missed anything or you
> have any questions, please get in touch. I want to update it while the
> memory is as fresh as possible.
>
> For each main topic I have added a "champion" and a "goal". These are not
> all complete yet and can be adjusted. I did add names next to champion for
> people that discussed that topic at the PTG. The goal should summarise what
> we need to do.
>
> Note: "Champion" does not mean you need to do all the work - just you are
> leading that effort and helping rally people around the issue. Essentially
> it is a collaboration role, but you can still lead the implementation if
> that makes sense. For example, I put myself as the Documentation champion.
> I do not plan on writing all the documentation; rather I want to set up
> better foundations and a better process for writing documentation. This
> will likely be a team effort I need to coordinate.
>
> Etherpad:
> https://etherpad.openstack.org/p/mistral-ptg-rocky
>

I forgot to add: if you were unable to attend the PTG or have anything else
you want to add/discuss, please let us know.


>
>
> Thanks everyone for coming, I think it was a useful week. It was
> unfortunate that the "Beast from the East" (the weather, not Renat!)
> stopped things a bit early on Thursday. I hope all your homeward travels
> worked out in the end.
>
> Cheers,
> Dougal
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 18-10

2018-03-07 Thread Chris Dent


HTML: https://anticdent.org/tc-report-18-10.html

This is a TC Report, but since everything that happened in its window
of observation is preparing for the
[PTG](https://www.openstack.org/ptg), being at the PTG, trying to get
home from the PTG, and recovering from the PTG, perhaps think of this
as "What the TC talked about [at] the PTG". As it is impossible to be
everywhere at once (especially when the board meeting overlaps with
other responsibilities) this will miss a lot of important stuff.  I
hope there are other summaries.

As you may be aware, it [snowed in
Dublin](https://twitter.com/search?q=%23snowpenstack) causing plenty
of disruption to the
[PTG](https://twitter.com/search?q=%23openstackptg) but everyone
(foundation staff, venue staff, hotel staff, attendees, uisce beatha)
worked together to make a good week.

# Talking about the PTG at the PTG

At the [board
meeting](http://lists.openstack.org/pipermail/foundation/2018-March/002570.html),
the future of the PTG was a big topic. As currently constituted it
presents some challenges:

* It is difficult for some people to attend because of visa and other
  travel related issues.
* It is expensive to run and not everyone is convinced of the return
  on investment.
* Some people don't like it (they either miss the old way of doing the
  design summit, or midcycles, or $OTHER).
* Plenty of other reasons that I'm probably not aware of.

This same topic was reviewed at [yesterday's office
hours](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T09:19:32).

For now, the next 2018 PTG is going to happen (destination unknown) but
plans for 2019 are still being discussed.

If you have opinions about the PTG, there will be an opportunity to
express them in a forthcoming survey. Beyond that, however, it is
important [that management at contributing
companies](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T22:29:24)
hear from more people (notably their employees) than the foundation
about the value of the PTG.

My own position is that of the three different styles of in-person
events for technical contributors to OpenStack that I've experienced
(design summit, mid-cycles, PTG), the PTG is the best yet. It minimizes
distractions from other obligations (customer meetings, presentations,
marketing requirements) while maximizing cross-project interaction.

One idea, discussed
[yesterday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T22:02:24)
and [earlier
today](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-07.log.html#t2018-03-07T05:07:20)
was to have the PTG be open to technical participants of any sort, not
just so-called "OpenStack developers". Make it more of a place for
people who hack on and with OpenStack to hack and talk. Leave the
summit (without a forum) for presentations, marketing, pre-sales, etc.

An issue raised with conflating the PTG and the Forum is that it would
remove the
[inward/outward 
focus](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-07.log.html#t2018-03-07T08:20:17)
concept that is supposed to distinguish the two events.

I guess it depends on how we define "we" but I've always assumed that
both events were for outward focus and that for any inward focussing
effort we ought to be able to use asynchronous tools more.

# Foundation and OCI

Thierry mentioned
[yesterday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T09:08:04)
that it is likely that the OpenStack Foundation will join the [Open
Container Initiative](https://www.opencontainers.org/) because of
[Kata](https://katacontainers.io/) and
[LOCI](https://governance.openstack.org/tc/reference/projects/loci.html).

This segued into some brief concerns about the [attentions and
intentions of the
Foundation](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-06.log.html#t2018-03-06T09:13:34),
aggravated by the board meeting schedule conflict (there's agreement
that will never ever happen again), and the rumor milling about the
PTG.

# Friday at the PTG with the TC

The TC had scheduled a half day of discussion for Friday at the PTG. A
big [agenda](https://etherpad.openstack.org/p/PTG-Dublin-TC-topics), a
fun filled week, and the snow meant we went nearly all day (and since
there's no place to go, let's talk, let's talk, let's talk) with some
reasonable progress. Some highlights:

* There was some discussion on trying to move forward with the
  constellations concept, but I don't recall specific outcomes from
  that discussion.

* The team diversity tags need to be updated to reflect adjustments in
  the very high bars we set earlier in the history of OpenStack. We
  agreed to not remove projects from the tc-approved tag, as that
  could be taken the wrong way. Instead we'll create 

[openstack-dev] [nova] Notification update week 10 (PTG)

2018-03-07 Thread Balázs Gibizer

Hi,

Here is the status update / focus settings mail for w10. We discussed 
a couple of new notification-related changes during the PTG. I tried to 
mention all of them below, but if I missed something then please extend 
my list.


Bugs


[High] https://bugs.launchpad.net/nova/+bug/1737201 TypeError when
sending notification during attach_interface
Fix merged. The backport for ocata is still open: 
https://review.openstack.org/#/c/531746/


[High] https://bugs.launchpad.net/nova/+bug/1739325 Server operations
fail to complete with versioned notifications if payload contains unset
non-nullable fields
No progress. We still need to understand how this problem happens in 
order to find the proper solution.
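
For context, here is a minimal sketch, using oslo.versionedobjects
directly, of how a field can be non-nullable yet still unset when a payload
is serialized; ExamplePayload is invented for illustration (nova's real
payload objects live under nova.notifications.objects):

    # Sketch of the "unset non-nullable field" situation.
    from oslo_versionedobjects import base
    from oslo_versionedobjects import fields


    @base.VersionedObjectRegistry.register
    class ExamplePayload(base.VersionedObject):
        VERSION = '1.0'
        fields = {
            # nullable=False forbids None, but nothing forces the field
            # to be *set* before the payload goes out on the bus.
            'state': fields.StringField(nullable=False),
        }


    payload = ExamplePayload()
    print(payload.obj_attr_is_set('state'))  # False: non-nullable, unset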


[Low] https://bugs.launchpad.net/nova/+bug/1487038
nova.exception._cleanse_dict should use
oslo_utils.strutils._SANITIZE_KEYS
Old abandoned patches exist but need somebody to pick them up:
* https://review.openstack.org/#/c/215308/
* https://review.openstack.org/#/c/388345/

[Wishlist] https://bugs.launchpad.net/nova/+bug/1639152 Send out 
notification about server group changes when delete instances
It was discussed at the Rocky PTG and agreed to do this. A new specless 
bp has been created to track the effort: 
https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications 
The bp is assigned to Takashi.



Versioned notification transformation
-
We already have some patches proposed to the rocky bp. I will go and 
review them this week.

https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-rocky+status:open


Introduce instance.lock and instance.unlock notifications
-
The bp
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
is approved. Waiting for the implementation to be proposed.


Add the user id and project id of the user who initiated the instance
action to the notification
-
The bp
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications
is approved. Implementation patch exists but still needs work 
https://review.openstack.org/#/c/526251/



Add request_id to the InstanceAction versioned notifications

The bp 
https://blueprints.launchpad.net/nova/+spec/add-request-id-to-instance-action-notifications 
is approved and assigned to Kevin_Zheng.



Sending full traceback in versioned notifications
-
At the PTG we discussed the need to send full tracebacks in error 
notifications. I will go and dig out why we decided not to send the 
full traceback when we created the versioned notifications.



Add versioned notifications for removing a member from a server group
-
The specless bp 
https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications 
is proposed and it looks good to me.



Factor out duplicated notification sample
-
https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open
No open patches, but I would like to progress with this through the 
Rocky cycle.



Weekly meeting
--
The next meeting will be held on the 13th of March on #openstack-meeting-4
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180313T17

Cheers,
gibi




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][CI][QA][HA][Eris][LCOO] Validating HA on upstream

2018-03-07 Thread Adam Spiers

Raoul Scarazzini  wrote:

On 06/03/2018 13:27, Adam Spiers wrote:

Hi Raoul and all,
Sorry for joining this discussion late!

[...]

I do not work on TripleO, but I'm part of the wider OpenStack
sub-communities which focus on HA[0] and more recently,
self-healing[1].  With that hat on, I'd like to suggest that maybe
it's possible to collaborate on this in a manner which is agnostic to
the deployment mechanism.  There is an open spec on this:
https://review.openstack.org/#/c/443504/
which was mentioned in the Denver PTG session on destructive testing
which you referenced[2].

[...]

   https://www.opnfv.org/community/projects/yardstick

[...]

Currently each sub-community and vendor seems to be reinventing HA
testing by itself to some extent, which is easier to accomplish in the
short-term, but obviously less efficient in the long-term.  It would
be awesome if we could break these silos down and join efforts! :-)


Hi Adam,
First of all, thanks for your detailed answer. Then let me be honest
and say that I didn't know yardstick.


Neither did I until Sydney, despite being involved with OpenStack HA
for many years ;-)  I think this shows that either a) there is room
for improved communication between the OpenStack and OPNFV
communities, or b) I need to take my head out of the sand more often ;-)


I need to start from scratch
here to understand what this project is. In any case, the whole point
of this thread is to involve people and take a more comprehensive look
at what's around.
The point here is that, as you can see from the tripleo-ha-utils spec
[1] I've created, the project is meant for TripleO specifically. On one
side this is a significant limitation, but on the other, due to the
pluggable nature of the project, I think that integration with other
software like you are proposing is not impossible.


Yep.  I totally sympathise with the tension between the need to get
something working quickly, vs. the need to collaborate with the
community in the most efficient way.


Feel free to add your comments to the review.


The spec looks great to me; I don't really have anything to add, and I
don't feel comfortable voting in a project which I know very little
about.


In the meantime, I'll check yardstick to see which kind of bridge we
can build to avoid reinventing the wheel.


Great, thanks!  I wish I could immediately help with this, but I
haven't had the chance to learn yardstick myself yet.  We should
probably try to recruit someone from OPNFV to provide advice.  I've
cc'd Georg who IIRC was the person who originally told me about
yardstick :-)  He is an NFV expert and is also very interested in
automated testing efforts:

   http://lists.openstack.org/pipermail/openstack-dev/2017-November/124942.html

so he may be able to help with this architectural challenge.

Also you should be aware that work has already started on Eris, the
extreme testing framework proposed in this user story:

   
http://specs.openstack.org/openstack/openstack-user-stories/user-stories/proposed/openstack_extreme_testing.html

and in the spec you already saw:

   https://review.openstack.org/#/c/443504/

You can see ongoing work here:

   https://github.com/LCOO/eris
   
https://openstack-lcoo.atlassian.net/wiki/spaces/LCOO/pages/13393034/Eris+-+Extreme+Testing+Framework+for+OpenStack

It looks like there is a plan to propose a new SIG for this, although
personally I would be very happy to see it adopted by the self-healing
SIG, since this framework is exactly what is needed for testing any
self-healing mechanism.

I'm hoping that Sampath and/or Gautum will chip in here, since I think
they're currently the main drivers for Eris.

I'm beginning to think that maybe we should organise a video
conference call to coordinate efforts between the various interested
parties.  If there is appetite for that, the first question is: who
wants to be involved?  To answer that, I have created an etherpad
where interested people can sign up:

   https://etherpad.openstack.org/p/extreme-testing-contacts

and I've cc'd people who I think would probably be interested.  Does
this sound like a good approach?

Cheers,
Adam

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat] weekly meeting is cancelled

2018-03-07 Thread Rico Lin
Hi team,

As we just got back from #SnowpenStack (PTG), let's skip the meeting
this week.

Here are the sessions we discussed at the PTG; if you would like to add
some input, now is the time (please leave your name, so we know who it
is). https://etherpad.openstack.org/p/heat-rocky-ptg

-- 
May The Force of OpenStack Be With You,

*Rico Lin*irc: ricolin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] PTG Summary

2018-03-07 Thread Dougal Matthews
Hey Mistralites (maybe?),

I have been through the etherpad from the PTG and attempted to expand on
the topics with details that I remember. If I have missed anything or you
have any questions, please get in touch. I want to update it while the
memory is as fresh as possible.

For each main topic I have added a "champion" and a "goal". These are not
all complete yet and can be adjusted. I did add names next to champion for
people that discussed that topic at the PTG. The goal should summarise what
we need to do.

Note: "Champion" does not mean you need to do all the work - just you are
leading that effort and helping rally people around the issue. Essentially
it is a collaboration role, but you can still lead the implementation if
that makes sense. For example, I put myself as the Documentation champion.
I do not plan on writing all the documentation; rather I want to set up
better foundations and a better process for writing documentation. This
will likely be a team effort I need to coordinate.

Etherpad:
https://etherpad.openstack.org/p/mistral-ptg-rocky

Thanks everyone for coming, I think it was a useful week. It was
unfortunate that the "Beast from the East" (the weather, not Renat!)
stopped things a bit early on Thursday. I hope all your homeward travels
worked out in the end.

Cheers,
Dougal
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [watcher] weekly meeting is cancelled

2018-03-07 Thread Чадин Александр
We will not be holding a weekly meeting this time since people are
experiencing some jet lag. Let’s meet on March 14 at 08:00 UTC as usual.

Best Regards,

Alex
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev