Re: [openstack-dev] [charms] Propose Felipe Reyes for OpenStack Charmers team

2018-09-11 Thread Chris MacNaughton
+1 Felipe has been a solid contributor to the OpenStack Charms for some time now.


Chris


On 11-09-18 23:07, Ryan Beisner wrote:

+1  I'm always happy to see Felipe's contributions and fixes come through.

Cheers!

Ryan




On Tue, Sep 11, 2018 at 1:10 PM James Page wrote:


+1

On Wed, 5 Sep 2018 at 15:48 Billy Olsen <billy.ol...@gmail.com> wrote:

Hi,

I'd like to propose Felipe Reyes to join the OpenStack Charmers team as
a core member. Over the past couple of years Felipe has contributed
numerous patches and reviews to the OpenStack charms [0]. His experience
and knowledge of the charms used in OpenStack and the usage of Juju make
him a great candidate.

[0] -

https://review.openstack.org/#/q/owner:%22Felipe+Reyes+%253Cfelipe.reyes%2540canonical.com%253E%22

Thanks,

Billy Olsen


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [senlin] Nominations to Senlin Core Team

2018-09-11 Thread Qiming Teng
+2 to both changes.

- Qiming




Re: [openstack-dev] [QA][PTG] QA Dinner Night

2018-09-11 Thread Ghanshyam Mann
Hi All,

We have finalized the place and time for QA dinner which is tomorrow night. 

Here are the details:

Restaurant: Famous Dave's - https://goo.gl/maps/G7gjpsJUEV72
Wednesday night, 6:30 PM
Meeting time at lobby: 6:15 PM

-gmann


  On Mon, 10 Sep 2018 20:13:15 +0900 Ghanshyam Mann 
 wrote  
 >  
 >  
 >  
 >   On Mon, 10 Sep 2018 19:35:58 +0900 Andreas Jaeger  
 > wrote   
 >  > On 10/09/2018 12.00, Ghanshyam Mann wrote:  
 >  > > Hi All,  
 >  > >   
 >  > > I'd like to propose a QA Dinner night for the QA team at the DENVER 
 > PTG. I initiated a doodle vote [1] to choose Tuesday or Wednesday night.  
 >  >   
 >  > Dublin or Denver? Hope you're not time traveling or went to wrong   
 >  > location ;)  
 >  >   
 >  
 > heh, thanks for correction. Yes it is Denver :).  
 >  
 >  
 >  > Andreas  
 >  >   
 >  > > NOTE: Anyone engaged in QA activities (not necessary to be QA core)  
 > are welcome to join.  
 >  > >   
 >  > >   
 >  > > [1] https://doodle.com/poll/68fudz937v22ghnv  
 >  > >   
 >  > > -gmann  
 >  > >   
 >  > >   
 >  > >   
 >  > >   
 >  > >   
 >  > > 
 > __  
 >  > > OpenStack Development Mailing List (not for usage questions)  
 >  > > Unsubscribe: 
 > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe  
 >  > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev  
 >  > >   
 >  >   
 >  >   
 >  > --   
 >  >   Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi  
 >  >SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany  
 >  > GF: Felix Imendörffer, Jane Smithard, Graham Norton,  
 >  > HRB 21284 (AG Nürnberg)  
 >  >  GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126 
 >  
 >  >   
 >  >  
 >  
 > 





Re: [openstack-dev] [tripleo] Posibilities to aggregate/merge configs across templates

2018-09-11 Thread Jiri Tomasek
Hi,

The problems you're describing are close to the discussion we had with
Mathieu Bultel here [1]. Currently, to set some parameter values as the
ultimate source of truth, you need to put them in plan-environment.yaml.
Ignoring the fact that the CLI now merges environments itself (fixed by [2]
and not affecting this behaviour), the Mistral workflows pass the
environments to heat in the order in which they are provided with the -e
option, and then as the last environment it applies parameter_defaults from
plan-environment.yaml.
The result of the [1] effort is going to be that deployment configuration
(roles setting, networks selection, environments selection and explicit
parameter setting) will be done the same way by both CLI and GUI, through
Mistral workflows which already exist but are currently used only by the
GUI. When you look at plan-environment.yaml in Swift, you can see the list
of environment files in the order in which they're merged, as well as the
parameters which will override the values in environments in case of
collision.

Merging strategy for parameters is an interesting problem; configuring this
in t-h-t looks like a good solution to me. Note that the GUI always
displays the parameter values it gets from the GetParameters Mistral
action. This action gets the parameter values from Heat by running a heat
validate, which means it always displays the real parameter values that are
actually going to be applied by Heat as a result of all the merging. If the
user updates a value with the GUI, it will end up being set in
plan-environment.yaml.

-- Jirka




[1]
http://lists.openstack.org/pipermail/openstack-dev/2018-September/134511.html
[2] https://review.openstack.org/#/c/448209/
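
For readers less familiar with the ordering Jirka describes, here is a minimal Python sketch of how parameter_defaults resolve when environments are applied in -e order with plan-environment.yaml last. It is purely illustrative — not the actual Mistral workflow code — and the parameter names are made up.

```python
# Illustrative model of TripleO's parameter resolution order (an
# assumption-based sketch, not the real Mistral workflow code).

def resolve_parameters(environments, plan_environment):
    """Merge parameter_defaults; later sources override earlier ones."""
    merged = {}
    for env in environments:  # in the order given with -e
        merged.update(env.get("parameter_defaults", {}))
    # plan-environment.yaml is applied last, so it wins on collisions.
    merged.update(plan_environment.get("parameter_defaults", {}))
    return merged

# Hypothetical parameter names, for illustration only.
envs = [
    {"parameter_defaults": {"NeutronGlobalPhysnetMtu": 1500, "Debug": False}},
    {"parameter_defaults": {"Debug": True}},
]
plan_env = {"parameter_defaults": {"NeutronGlobalPhysnetMtu": 9000}}

print(resolve_parameters(envs, plan_env))
# {'NeutronGlobalPhysnetMtu': 9000, 'Debug': True}
```

Because plan-environment.yaml is merged last, a value set there (for example via the GUI) wins over any environment file, which matches the behaviour described above.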


On Tue, Sep 4, 2018 at 9:54 AM Kamil Sambor  wrote:

> Hi all,
>
> I want to start discussion on: how to solve issue with merging environment
> values in TripleO.
>
> Description:
> In TripleO we experience some issues related to setting parameters in heat
> templates. First, it isn't possible to set some params as the ultimate
> source of truth (i.e. disallow overwriting a param in other heat
> templates). Second, it isn't possible to merge values from different
> templates [0][1].
> Both features are implemented in heat and can be easily used in
> templates [2][3].
> This doesn't work in TripleO because we overwrite all values in templates
> in the python client instead of aggregating them, or simply letting heat
> do the job [4][5].
>
> Solution:
> Example solutions are: we can fix how the python tripleo client works with
> envs and templates and enable the heat features, or we can write some
> puppet code that works similarly to the firewall code [6] and supports
> aggregating and merging the values we point out. Both solutions have pros
> and cons, but IMHO the solution which lets heat do the job is preferable.
> The merging solution, however, gives us the possibility of full control
> over the merging of environments.
>
> Problems:
> Only a few as a start: with both solutions we will have the same problem
> of porting new patches which use these functionalities to older versions
> of RHEL. Upgrades to the new version can also be really problematic.
> Changes which enable the heat feature will also totally change how
> templates work: we will need to change all templates, change the default
> behavior (which is merging params) to overriding, and also add the
> possibility of temporarily running the old behavior.
>
> In the end, I prepared two patchsets with two PoCs in progress. The first
> merges envs in the tripleo client but uses heat's merging functionality:
> https://review.openstack.org/#/c/599322/ . In the second we ignore the
> merged env and move all files and add them into the deployment plan
> environments: https://review.openstack.org/#/c/599559/
>
> What do you think about each solution? Which solution should be used
> in TripleO?
>
> Best,
> Kamil Sambor
>
> [0] https://bugs.launchpad.net/tripleo/+bug/1716391
> [1] https://bugs.launchpad.net/heat/+bug/1635409
> [2]
> https://docs.openstack.org/heat/pike/template_guide/environment.html#restrict-update-or-replace-of-a-given-resource
> [3]
> https://docs.openstack.org/heat/pike/template_guide/environment.html#environment-merging
> [4]
> https://github.com/openstack/python-tripleoclient/blob/master/tripleoclient/utils.py#L1019
> [5]
> https://github.com/openstack/python-heatclient/blob/f73c2a4177377b710a02577feea38560b00a24bf/heatclient/common/template_utils.py#L191
> [6]
> https://github.com/openstack/puppet-tripleo/tree/master/manifests/firewall
>
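
As a minimal illustration of the difference discussed in this thread, here is a hedged sketch contrasting an override strategy, where the last environment wins, with a merge strategy, where list values are concatenated. This is an assumption-laden toy, not the actual heat or python-tripleoclient implementation, and the value names are invented.

```python
# Contrast of the two strategies for a list-valued parameter coming
# from several environment files (illustrative only; not the actual
# heat or python-tripleoclient code).

def override_strategy(values):
    """Client-side overwrite behaviour: the last environment simply wins."""
    return values[-1]

def merge_strategy(values):
    """Heat-style merging: concatenate values from every environment."""
    merged = []
    for v in values:
        merged.extend(v)
    return merged

# Hypothetical per-environment values for a single list parameter.
envs = [["ControllerExtraConfigA"], ["ControllerExtraConfigB"]]

print(override_strategy(envs))  # ['ControllerExtraConfigB']
print(merge_strategy(envs))     # ['ControllerExtraConfigA', 'ControllerExtraConfigB']
```

The thread's open question is essentially which of these two behaviours TripleO should present as the default, and how to migrate templates that rely on the current overwrite behaviour.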

Re: [openstack-dev] [upgrade] request for pre-upgrade check for db purge

2018-09-11 Thread Matt Riedemann

On 9/11/2018 9:01 AM, Dan Smith wrote:

I dunno, adding something to nova.conf that is only used for nova-status
like that seems kinda weird to me. It's just a warning/informational
sort of thing so it just doesn't seem worth the complication to me.


It doesn't seem complicated to me, and I'm not sure why the config is weird,
but maybe that's just because it's config-driven CLI behavior?




Moving it to an age thing set at one year seems okay, and better than
making the absolute limit more configurable.

Any reason why this wouldn't just be a command line flag to status if
people want it to behave in a specific way from a specific tool?


I always think of the pre-upgrade checks as release-specific, and that we
could drop the old ones at some point, which is why I wasn't thinking about
adding check-specific options to the command. But since we also say it's OK
to run "nova-status upgrade check" to verify a green install, it's probably
good to leave the old checks in place; i.e. you're likely always going to
want those cells v2 and placement checks we added in Ocata even long after
Ocata EOL.
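
A hypothetical sketch of what an age-based check like the one proposed could look like. The function name, threshold, and messages are invented for illustration and are not nova's actual nova-status code.

```python
# Hypothetical age-based pre-upgrade check; names, threshold, and
# messages are invented for illustration, not nova's actual code.
import datetime

WARN_AGE = datetime.timedelta(days=365)  # assumed one-year threshold

def check_db_purge(oldest_deleted_at, now=None):
    """Return (ok, message); warn when soft-deleted rows exceed WARN_AGE."""
    now = now or datetime.datetime.utcnow()
    if oldest_deleted_at is None:
        return True, "No soft-deleted rows found."
    age = now - oldest_deleted_at
    if age > WARN_AGE:
        return False, ("Soft-deleted rows are %d days old; consider running "
                       "a db purge." % age.days)
    return True, "Soft-deleted row age looks OK."

ok, msg = check_db_purge(datetime.datetime(2017, 1, 1),
                         now=datetime.datetime(2018, 9, 11))
print(ok)   # False
print(msg)
```

An age-based warning like this stays meaningful across releases, which fits the point above about leaving old checks in place rather than making them release-specific.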


--

Thanks,

Matt



Re: [openstack-dev] [heat][glance] Heat image resource support issue

2018-09-11 Thread Rico Lin
Thanks Abhishek

I already added that to the Glance PTG etherpad. Since we have a schedule
conflict, just let me know if we should be there as well; otherwise I hope
you guys can help to resolve that issue. Thx!

btw, if you do require us to be there, it might be better to schedule it in
the afternoon on Wed. or Thu.

On Thu, Sep 6, 2018 at 4:45 AM Abhishek Kekane  wrote:

> Hi Rico,
>
> Session times are not decided yet; could you please add your topic on [1]
> so that it will be on the discussion list?
> Also, glance sessions are scheduled from Wednesday to Friday between 9 AM
> and 5 PM, so you can drop by at your convenience.
>
> [1] https://etherpad.openstack.org/p/stein-ptg-glance-planning
>
>
> Thanks & Best Regards,
>
> Abhishek Kekane
>
> On Thu, Sep 6, 2018 at 3:48 PM, Rico Lin 
> wrote:
>
>>
>> On Thu, Sep 6, 2018 at 12:52 PM Abhishek Kekane 
>> wrote:
>>
>>> Hi Rico,
>>>
>>> We will discuss this during PTG, however meantime you can add
>>> WSGI_MODE=mod_wsgi in local.conf for testing purpose.
>>>
>>
>> Cool. If you can let me know which session it is, I will try to be there
>> if there's no conflict.
>>


-- 
May The Force of OpenStack Be With You,

*Rico Lin* irc: ricolin


Re: [openstack-dev] Reply: [senlin] Nominations to Senlin Core Team

2018-09-11 Thread Jude Cross
+1 for Erik

On Tue, Sep 11, 2018 at 4:29 AM  wrote:

>
> +1 for both
>
>
> Original Mail
> *From:* DucTruong
> *To:* openstack-dev@lists.openstack.org
> *Date:* 2018-09-11 01:00
> *Subject:* *[openstack-dev] [senlin] Nominations to Senlin Core Team*
> Hi Senlin Core Team,
>
> I would like to nominate 2 new core reviewers for Senlin:
>
> [1] Jude Cross (jucr...@blizzard.com)
> [2] Erik Olof Gunnar Andersson (eanders...@blizzard.com)
>
> Jude has been doing a number of reviews and contributed some important
> patches to Senlin during the Rocky cycle that resolved locking
> problems.
>
> Erik has the most number of reviews in Rocky and has contributed high
> quality code reviews for some time.
>
> [1]
> http://stackalytics.com/?module=senlin-group&metric=marks&release=rocky&user_id=jucr...@blizzard.com
> [2]
> http://stackalytics.com/?module=senlin-group&metric=marks&user_id=eandersson&release=rocky
>
> Voting is open for 7 days.  Please reply with your +1 vote in favor or
> -1 as a veto vote.
>
> Regards,
>
> Duc (dtruong)
>


Re: [openstack-dev] [charms] Propose Felipe Reyes for OpenStack Charmers team

2018-09-11 Thread Ryan Beisner
+1  I'm always happy to see Felipe's contributions and fixes come through.

Cheers!

Ryan




On Tue, Sep 11, 2018 at 1:10 PM James Page  wrote:

> +1
>
> On Wed, 5 Sep 2018 at 15:48 Billy Olsen  wrote:
>
>> Hi,
>>
>> I'd like to propose Felipe Reyes to join the OpenStack Charmers team as
>> a core member. Over the past couple of years Felipe has contributed
>> numerous patches and reviews to the OpenStack charms [0]. His experience
>> and knowledge of the charms used in OpenStack and the usage of Juju make
>> him a great candidate.
>>
>> [0] -
>>
>> https://review.openstack.org/#/q/owner:%22Felipe+Reyes+%253Cfelipe.reyes%2540canonical.com%253E%22
>>
>> Thanks,
>>
>> Billy Olsen
>>


[openstack-dev] [goals][python3][charms] starting zuul migration

2018-09-11 Thread Doug Hellmann
Here are the patches for the zuul migration for the OpenStack Charms
project.

+-------------------------------------------------------+----------------------------------+---------------+
| Subject                                               | Repo                             | Branch        |
+-------------------------------------------------------+----------------------------------+---------------+
| remove job settings for OpenStack Charms repositories | openstack-infra/project-config   | master        |
| import zuul job settings from project-config          | openstack/charm-aodh             | stable/18.08  |
| import zuul job settings from project-config          | openstack/charm-aodh             | master        |
| import zuul job settings from project-config          | openstack/charm-barbican         | stable/18.08  |
| import zuul job settings from project-config          | openstack/charm-barbican         | master        |
| import zuul job settings from project-config          | openstack/charm-barbican-softhsm | stable/18.08  |
| import zuul job settings from project-config          | openstack/charm-barbican-softhsm | master        |
| import zuul job settings from project-config          | openstack/charm-ceilometer      | stable/18.08  |
| import zuul job settings from project-config          | openstack/charm-ceilometer      | master        |
| import zuul job settings from project-config          | openstack/charm-ceilometer-agent | stable/18.08  |
| import zuul job settings from project-config          | openstack/charm-ceilometer-agent | master        |
| import zuul job settings from project-config          | openstack/charm-ceph             | master        |
| import zuul job settings from project-config          | openstack/charm-ceph-fs          | stable/18.08  |
| import zuul job settings from project-config          | openstack/charm-ceph-fs          | master        |
| import zuul job settings from project-config          | openstack/charm-ceph-mon         | stable/18.08  |
| import zuul job settings from project-config          | openstack/charm-ceph-mon         | master        |
| import zuul job settings from project-config          | openstack/charm-ceph-osd         | stable/18.08  |
| import zuul job settings from project-config          | openstack/charm-ceph-osd         | master        |
| import zuul job settings from project-config          | openstack/charm-ceph-proxy       | stable/18.08  |
| import zuul job settings from project-config          | openstack/charm-ceph-proxy       | master        |
| import zuul job settings from project-config          | openstack/charm-ceph-radosgw     | stable/18.08  |
| import zuul job settings from project-config          | openstack/charm-ceph-radosgw     | master        |
| import zuul job settings from project-config          | openstack/charm-cinder           | stable/18.08  |
| import zuul job settings from project-config          | openstack/charm-cinder           | master        |
| import zuul job settings from project-config          | openstack/charm-cinder-backup    | stable/18.08  |
| import zuul job settings from project-config          | openstack/charm-cinder-backup    | master        |
| import zuul job settings from project-config          | openstack/charm-cinder-ceph      | stable/18.08  |
| import zuul job settings from project-config          | openstack/charm-cinder-ceph      | master        |
| import zuul job settings from project-config          | openstack/charm-cloudkitty       | master        |
| import zuul job settings from project-config          | openstack/charm-deployment-guide | master        |
| import zuul job settings from project-config          | openstack/charm-deployment-guide | stable/pike   |
| import zuul job settings from project-config          | openstack/charm-deployment-guide | stable/queens |
| import zuul job settings from project-config          | openstack/charm-deployment-guide | stable/rocky  |
| import zuul job settings from project-config          | openstack/charm-designate        | stable/18.08  |
| import zuul job settings from project-config          | openstack/charm-designate        | master        |
| import zuul job settings from project-config          | openstack/charm-designate-bind

[openstack-dev] [nova] 2018 User Survey results

2018-09-11 Thread melanie witt

Hey all,

The foundation sent me a copy of 2018 user survey responses to the 
following question about Nova:


"How important is it to be able to customize Nova in your deployment, 
e.g. classload your own managers/drivers, use hooks, plug in API 
extensions, etc?"


Note: this question populates for any user who indicates they are in 
production or testing with the Nova project. It is not a required 
question, so these responses do not necessarily include every user.


There were a total of 373 responses.

The number of responses per multiple choice answer were:

- "Not important; I use pretty much stock Nova with maybe some small 
patches or bug fixes that aren't upstream.": 173 (46.4%)


- "Somewhat important; I have some custom scheduler filters and other 
small patches but nothing major.": 144 (38.6%)


- "Very important; my Nova deployment is heavily customized and 
hooks/plugins/custom APIs are a major part of my operation.": 56 (15.0%)


And I made a google sheets chart out of the responses which you can view 
here:


https://docs.google.com/spreadsheets/d/e/2PACX-1vSFG4ev8VsMMsYXgQHC7Y24WXfdSp6YdwiGX3MGvCsYZ50qG8Po-2i7vOCppJEq8051skxzvb42GIUV/pubhtml?gid=584107382&single=true
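
The counts and percentages above can be checked with a few lines of Python:

```python
# Verify the totals and percentages quoted in the survey summary above.
responses = {
    "Not important": 173,
    "Somewhat important": 144,
    "Very important": 56,
}
total = sum(responses.values())
print(total)  # 373

for label, count in responses.items():
    print("%s: %.1f%%" % (label, 100.0 * count / total))
# Not important: 46.4%
# Somewhat important: 38.6%
# Very important: 15.0%
```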

Cheers,
-melanie



Re: [openstack-dev] [ptg][cinder][placement] etherpad for this afternoon's meeting

2018-09-11 Thread Gorka Eguileor
On 11/09, Jay Pipes wrote:
> Hi Jay, where is this discussion taking place?
>

Hi,

It was on another email:

  Big Thompson Room on Tuesday from 15:15 to 17:00

Cheers,
Gorka.

> On Tue, Sep 11, 2018, 11:10 AM Jay S Bryant  wrote:
>
> > All,
> >
> > I have created an etherpad to take notes during our meeting this
> > afternoon:
> > https://etherpad.openstack.org/p/cinder-placement-denver-ptg-2018
> >
> > If you have information you want to get in there before the meeting I
> > would appreciate you pre-populating the pad.
> >
> > Jay
> >
> >


Re: [openstack-dev] [ptg][cinder][placement] etherpad for this afternoon's meeting

2018-09-11 Thread Jay Pipes
Hi Jay, where is this discussion taking place?

On Tue, Sep 11, 2018, 11:10 AM Jay S Bryant  wrote:

> All,
>
> I have created an etherpad to take notes during our meeting this
> afternoon:
> https://etherpad.openstack.org/p/cinder-placement-denver-ptg-2018
>
> If you have information you want to get in there before the meeting I
> would appreciate you pre-populating the pad.
>
> Jay
>
>


Re: [openstack-dev] [charms] Propose Felipe Reyes for OpenStack Charmers team

2018-09-11 Thread James Page
+1

On Wed, 5 Sep 2018 at 15:48 Billy Olsen  wrote:

> Hi,
>
> I'd like to propose Felipe Reyes to join the OpenStack Charmers team as
> a core member. Over the past couple of years Felipe has contributed
> numerous patches and reviews to the OpenStack charms [0]. His experience
> and knowledge of the charms used in OpenStack and the usage of Juju make
> him a great candidate.
>
> [0] -
>
> https://review.openstack.org/#/q/owner:%22Felipe+Reyes+%253Cfelipe.reyes%2540canonical.com%253E%22
>
> Thanks,
>
> Billy Olsen
>


Re: [openstack-dev] [cinder][ptg] Topics scheduled for next week ...

2018-09-11 Thread Gorka Eguileor
On 07/09, Jay S Bryant wrote:
> Team,
>
> I have created an etherpad for each of the days of the PTG and split out the
> proposed topics from the planning etherpad into the individual days for
> discussion: [1] [2] [3]
>
> If you want to add an additional topic please add it to Friday or find some
> time on one of the other days.
>
> I look forward to discussing all these topics with you all next week.
>
> Thanks!
>
> Jay

Thanks Jay.

I have added to the Cinder general etherpad the shared_target discussion
topic, as I believe we should be discussing it in the Cinder room first
before Thursday's meeting with Nova.

I saw that on Wednesday the 2:30 to 3:00 privsep topic is a duplicate of
the 12:00 to 12:30 slot, so I have taken the liberty of replacing it
with the shared_targets one.  I hope that's alright.

Cheers,
Gorka.

>
> [1] https://etherpad.openstack.org/p/cinder-ptg-stein-wednesday
>
> [2] https://etherpad.openstack.org/p/cinder-ptg-stein-thursday
>
> [3] https://etherpad.openstack.org/p/cinder-ptg-stein-friday
>
>


Re: [openstack-dev] [Storyboard] PTG Planning & Upcoming Meeting Cancelled

2018-09-11 Thread Kendall Nelson
Update!

We will be in Vail this afternoon. Lunch ends at 1:30 so we hope to be
starting conversations by 1:45.

-Kendall (diablo_rojo)

On Fri, Sep 7, 2018 at 2:07 PM Kendall Nelson  wrote:

> Hello!
>
> With the PTG in just a few days, I wanted to give some info and updates so
> that you are prepared.
>
> 1. This coming week's regular meeting on Wednesday will be cancelled.
>
> 2. I am planning on booking Blanca Peak for the whole afternoon on Tuesday
> for discussions. Just waiting for this patch to merge[0]. If we need more
> time we can schedule something later in the week. See you there!
>
> 3. Here [1] is the etherpad that we've been collecting discussion topics
> into. If there is anything you want to add, feel free.
>
> -Kendall (diablo_rojo)
>
> [0] https://review.openstack.org/#/c/600665/
> [1]https://etherpad.openstack.org/p/sb-stein-ptg-planning
>


Re: [openstack-dev] [goals][python3][nova] starting zuul migration for nova repos

2018-09-11 Thread Stephen Finucane
On Mon, 2018-09-10 at 13:48 -0600, Doug Hellmann wrote:
> Melanie gave me the go-ahead to propose the patches, so here's the list
> of patches for the zuul migration, doc job update, and python 3.6 unit
> tests for the nova repositories.

I've reviewed/+2d all of these on master and think Sylvain will be
following up with the +Ws. I need someone else to handle the
'stable/XXX' patches though.

Here's a query for anyone that wants to jump in here.

https://review.openstack.org/#/q/topic:python3-first+status:open+(project:openstack/nova+OR+project:openstack/nova-specs+OR+project:openstack/os-traits+OR+project:openstack/os-vif+OR+project:openstack/osc-placement+OR+project:openstack/python-novaclient)

Stephen

PS: Thanks, Andreas, for the follow-up cleanup patches. Much
appreciated :)

> +----------------------------------------------+--------------------------------+---------------+
> | Subject                                      | Repo                           | Branch        |
> +----------------------------------------------+--------------------------------+---------------+
> | remove job settings for nova repositories    | openstack-infra/project-config | master        |
> | import zuul job settings from project-config | openstack/nova                 | master        |
> | switch documentation job to new PTI          | openstack/nova                 | master        |
> | add python 3.6 unit test job                 | openstack/nova                 | master        |
> | import zuul job settings from project-config | openstack/nova                 | stable/ocata  |
> | import zuul job settings from project-config | openstack/nova                 | stable/pike   |
> | import zuul job settings from project-config | openstack/nova                 | stable/queens |
> | import zuul job settings from project-config | openstack/nova                 | stable/rocky  |
> | import zuul job settings from project-config | openstack/nova-specs           | master        |
> | import zuul job settings from project-config | openstack/os-traits            | master        |
> | switch documentation job to new PTI          | openstack/os-traits            | master        |
> | add python 3.6 unit test job                 | openstack/os-traits            | master        |
> | import zuul job settings from project-config | openstack/os-traits            | stable/pike   |
> | import zuul job settings from project-config | openstack/os-traits            | stable/queens |
> | import zuul job settings from project-config | openstack/os-traits            | stable/rocky  |
> | import zuul job settings from project-config | openstack/os-vif               | master        |
> | switch documentation job to new PTI          | openstack/os-vif               | master        |
> | add python 3.6 unit test job                 | openstack/os-vif               | master        |
> | import zuul job settings from project-config | openstack/os-vif               | stable/ocata  |
> | import zuul job settings from project-config | openstack/os-vif               | stable/pike   |
> | import zuul job settings from project-config | openstack/os-vif               | stable/queens |
> | import zuul job settings from project-config | openstack/os-vif               | stable/rocky  |
> | import zuul job settings from project-config | openstack/osc-placement        | master        |
> | switch documentation job to new PTI          | openstack/osc-placement        | master        |
> | add python 3.6 unit test job                 | openstack/osc-placement        | master        |
> | import zuul job settings from project-config | openstack/osc-placement        | stable/queens |
> | import zuul job settings from project-config | openstack/osc-placement        | stable/rocky  |
> | import zuul job settings from project-config | openstack/python-novaclient    | master        |
> | switch documentation job to new PTI          | openstack/python-novaclient    | master        |
> | add python 3.6 unit test job                 | openstack/python-novaclient    | master        |
> | add lib-forward-testing-python3 test job     | openstack/python-novaclient    | master        |
> | import zuul job settings from project-config | openstack/python-novaclient    | stable/ocata  |
> | import zuul job settings from project-config | openstack/python-novaclient    | stable/pike   |
> | import zuul job settings from project-config | openstack/python-novaclient    | stable/queens |
> | import zuul job settings from project-config | openstack/python-novaclient    | stable/rocky  |
> +----------------------------------------------+--------------------------------+---------------+
> 

[openstack-dev] [ptg][cinder][placement] etherpad for this afternoon's meeting

2018-09-11 Thread Jay S Bryant

All,

I have created an etherpad to take notes during our meeting this 
afternoon: https://etherpad.openstack.org/p/cinder-placement-denver-ptg-2018


If you have information you want to get in there before the meeting I 
would appreciate you pre-populating the pad.


Jay




Re: [openstack-dev] [openstack-ansible][kolla-ansible][tripleo] ansible roles: where they live and what do they do

2018-09-11 Thread Alex Schultz
Thanks everyone for coming and chatting.  From the meeting we have a
few items on which we can collaborate.

Here are some specific bullet points:

- TripleO folks should feel free to propose some minor structural
changes if they make the integration easier.  TripleO is currently
investigating what it would look like to pull the keystone ansible
parts out of tripleo-heat-templates and put it into
ansible-role-tripleo-keystone.  It would be beneficial to use this
role as an example for how the os_keystone role can be consumed.
- The openstack-ansible-tests has some good examples of ansible-lint
rules that can be used to improve quality
- Tags could be used to limit the scope of OpenStack Ansible roles,
but it sounds like including tasks would be a better pattern.
- Need to establish a pattern for disabling packaging/service
configurations globally in OpenStack Ansible roles.
- Shared roles are open for reuse/replacement if something better is
available (upstream/elsewhere).

If anyone has any others, feel free to comment.

Thanks,
-Alex

On Mon, Sep 10, 2018 at 10:58 AM, Alex Schultz  wrote:
> I just realized I booked the room and put it in the etherpad but
> forgot to email out the time.
>
> Time: Tuesday 09:00-10:45
> Room: Big Thompson
>
> https://etherpad.openstack.org/p/ansible-collaboration-denver-ptg
>
> Thanks,
> -Alex
>
> On Tue, Sep 4, 2018 at 1:03 PM, Alex Schultz  wrote:
>> On Thu, Aug 9, 2018 at 2:43 PM, Mohammed Naser  wrote:
>>> Hi Alex,
>>>
>>> I am very much in favour of what you're bringing up.  We do have
>>> multiple projects that leverage Ansible in different ways and we all
>>> end up doing the same thing at the end.  The duplication of work is
>>> not really beneficial for us as it takes away from our use-cases.
>>>
>>> I believe that there is a certain number of steps that we all share
>>> regardless of how we deploy, some of the things that come up to me
>>> right away are:
>>>
>>> - Configuring infrastructure services (i.e.: create vhosts for service
>>> in rabbitmq, create databases for services, configure users for
>>> rabbitmq, db, etc)
>>> - Configuring inter-OpenStack services (i.e. keystone_authtoken
>>> section, creating endpoints, etc and users for services)
>>> - Configuring actual OpenStack services (i.e.
>>> /etc//.conf file with the ability of extending
>>> options)
>>> - Running CI/integration on a cloud (i.e. common role that literally
>>> gets an admin user, password and auth endpoint and creates all
>>> resources and does CI)
>>>
>>> This would deduplicate a lot of work, and especially the last one, it
>>> might be beneficial for more than Ansible-based projects, I can
>>> imagine Puppet OpenStack leveraging this as well inside Zuul CI
>>> (optionally)... However, I think that this is something which we should
>>> discuss further at the PTG.  I think that there will be a tiny bit of
>>> upfront work as we all standardize, but then it's a win for all involved
>>> communities.
>>>
>>> I would like to propose that deployment tools maybe sit down together
>>> at the PTG, all share how we use Ansible to accomplish these tasks and
>>> then perhaps we can work all together on abstracting some of these
>>> concepts together for us to all leverage.
>>>
>>
>> I'm currently trying to get a spot on Tuesday morning to further
>> discuss some of these items.  In the meantime I've started an
>> etherpad[0] to start collecting ideas for things to discuss.  At the
>> moment I've got the tempest role collaboration and some basic ideas
>> for best practice items that we can discuss.  Feel free to add your
>> own and I'll update the etherpad with a time slot when I get one
>> nailed down.
>>
>> Thanks,
>> -Alex
>>
>> [0] https://etherpad.openstack.org/p/ansible-collaboration-denver-ptg
>>
>>> I'll let others chime in as well.
>>>
>>> Regards,
>>> Mohammed
>>>
>>> On Thu, Aug 9, 2018 at 4:31 PM, Alex Schultz  wrote:
 Ahoy folks,

 I think it's time we come up with some basic rules/patterns on where
 code lands when it comes to OpenStack related Ansible roles and as we
 convert/export things. There was a recent proposal to create an
 ansible-role-tempest[0] that would take what we use in
 tripleo-quickstart-extras[1] and separate it for re-usability by
 others.   So it was asked if we could work with the openstack-ansible
 team and leverage the existing openstack-ansible-os_tempest[2].  It
 turns out we have a few more already existing roles laying around as
 well[3][4].

 What I would like to propose is that we as a community come together
 to agree on specific patterns so that we can leverage the same roles
 for some of the core configuration/deployment functionality while
 still allowing for specific project specific customization.  What I've
 noticed between all the project is that we have a few specific core
 pieces of functionality that needs to be handled (or skipped as it may
 be) for each service being 

Re: [openstack-dev] [all] Ongoing spam in Freenode IRC channels

2018-09-11 Thread Jeremy Stanley
On 2018-08-01 08:40:51 -0700 (-0700), James E. Blair wrote:
> Monty Taylor  writes:
> > On 08/01/2018 12:45 AM, Ian Wienand wrote:
> > > I'd suggest to start, people with an interest in a channel can
> > > request +r from an IRC admin in #openstack-infra and we track
> > > it at [2]
> >
> > To mitigate the pain caused by +r - we have created a channel
> > called #openstack-unregistered and have configured the channels
> > with the +r flag to forward people to it.
[...]
> It turns out this was a very popular option, so we've gone ahead
> and performed this for all channels registered with accessbot.
[...]

We rolled this back 5 days ago for all channels and haven't had any
new reports of in-channel spamming yet. Hopefully this means the
recent flood is behind us now but definitely let us know (replying
on this thread or in #openstack-infra on Freenode) if you see any
signs of resurgence.
-- 
Jeremy Stanley


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [upgrade] request for pre-upgrade check for db purge

2018-09-11 Thread Dan Smith
> How do people feel about this? It seems pretty straight-forward to
> me. If people are generally in favor of this, then the question is
> what would be sane defaults - or should we not assume a default and
> force operators to opt into this?

I dunno, adding something to nova.conf that is only used for nova-status
like that seems kinda weird to me. It's just a warning/informational
sort of thing so it just doesn't seem worth the complication to me.

Moving it to an age thing set at one year seems okay, and better than
making the absolute limit more configurable.

Any reason why this wouldn't just be a command line flag to status if
people want it to behave in a specific way from a specific tool?
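The age-based pre-upgrade check being discussed could be sketched roughly as
follows. This is purely illustrative (the function name, the one-year default,
and the in-memory row representation are assumptions, not nova's actual
implementation, which would query the database):

```python
from datetime import datetime, timedelta, timezone

def purge_check(deleted_rows, max_age_days=365, now=None):
    """Count soft-deleted rows older than the cutoff age.

    deleted_rows: iterable of deleted_at timestamps for soft-deleted rows
    max_age_days: age threshold; one year was the default floated in
                  this thread

    A nova-status-style pre-upgrade check could emit a warning when the
    result is non-zero, suggesting the operator run a db purge first.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return sum(1 for deleted_at in deleted_rows if deleted_at < cutoff)
```

Exposing `max_age_days` as a command-line flag on the status tool, rather than
a nova.conf option, matches the suggestion above.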

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cyborg]Day 2 Arrangement Reminder

2018-09-11 Thread Zhipeng Huang
Hi Team,

Today the Cyborg session will concentrate on two items that were not
covered yesterday: neutron-cyborg interaction and general device mgmt.
Since I will be mostly at the Public Cloud WG session, Sundar will help to
lead the discussion, and our Stein PTL Li Liu will host the online ZOOM
conference. You are also welcome to propose new topics.

Our team photo is scheduled at 11:30, so let's gather at the lobby front around
11:25 :)

All the information can be found at
https://etherpad.openstack.org/p/cyborg-ptg-stein .

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] Upcoming meeting 2018-09-11 Tuesday UTC 2100

2018-09-11 Thread Spyros Trigazis
Hello team,

This is a reminder for the upcoming magnum meeting [0].

For convenience you can import this from here [1] or view it in html here
[2].

Cheers,
Spyros

[0]
https://wiki.openstack.org/wiki/Meetings/Containers#Weekly_Magnum_Team_Meeting
[1]
https://calendar.google.com/calendar/ical/dl8ufmpm2ahi084d038o7rgoek%40group.calendar.google.com/public/basic.ics
[2]
https://calendar.google.com/calendar/embed?src=dl8ufmpm2ahi084d038o7rgoek%40group.calendar.google.com=Europe/Zurich
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Plan management refactoring for Life cycle

2018-09-11 Thread mathieu bultel
Hi,

On 09/11/2018 12:08 PM, Bogdan Dobrelya wrote:
> On 9/11/18 4:43 AM, James Slagle wrote:
>> On Mon, Sep 10, 2018 at 10:12 AM Jiri Tomasek 
>> wrote:
>>>
>>> Hi Mathieu,
>>>
>>> Thanks for bringing up the topic. There are several efforts
>>> currently in progress which should lead to solving the problems
>>> you're describing. We are working on introducing CLI commands which
>>> would perform the deployment configuration operations on deployment
>>> plan in Swift. This is a main step to finally reach CLI and GUI
>>> compatibility/interoperability. CLI will perform actions to
>>> configure deployment (roles, networks, environments selection,
>>> parameters setting etc.) by calling Mistral workflows which store
>>> the information in deployment plan in Swift. The result is that all
>>> the information which define the deployment are stored in central
>>> place - deployment plan in Swift and the deploy command is turned
>>> into simple 'openstack overcloud  deploy'. Deployment plan
>>> then has plan-environment.yaml which has the list of environments
>>> used and customized parameter values, roles-data.yaml which carry
>>> roles definition and network-data.yaml which carry networks
>>> definition. The information stored in these files (and deployment
>>> plan in general) can then be treated as source of information about
>>> deployment. The deployment can then be easily exported and reliably
>>> replicated.
>>>
>>> Here is the document which we put together to identify missing
>>> pieces between GUI,CLI and Mistral TripleO API. We'll use this to
>>> discuss the topic at PTG this week and define work needed to be done
>>> to achieve the complete interoperability. [1]
>>>
>>> Also there is a pending patch from Steven Hardy which aims to remove
>>> CLI specific environments merging which should fix the problem with
>>> tracking of the environments used with CLI deployment. [2]
>>>
Thank you Jirka to point me to this work.
I will be happy to help in those efforts, at least for the life cycle
part (Update/Upgrade/Scale) of this big feature. I can't attend the
PTG this week unfortunately, but if you can point me to the etherpad
with the summary of the session, that would be very nice.

I think the review from Steven aims to solve more or less the same issue
as my current review. I will go through it in detail, and AFAICS the
last changes are old.

>>> [1]
>>> https://gist.github.com/jtomasek/8c2ae6118be0823784cdafebd9c0edac
>>> (Apologies for inconvenient format, I'll try to update this to
>>> better/editable format. Original doc:
>>> https://docs.google.com/spreadsheets/d/1ERfx2rnPq6VjkJ62JlA_E6jFuHt9vVl3j95dg6-mZBM/edit?usp=sharing)
>>> [2] https://review.openstack.org/#/c/448209/
>>
>>
>> Related to this work, I'd like to see us store the plan in git instead
>> of swift. I think this would reduce some of the complexity around plan
>> management, and move us closer to a simpler undercloud architecture.
>> It would be nice to see each change to the plan represented as a new git
>> commit, so we can even see the changes to the plan as roles, networks,
>> services, etc, are selected.
>>
>> I also think git would provide a familiar experience for both
>> developers and operators who are already accustomed to devops type
>> workflows. I think we could make these changes without impacting the
>> API too much or, hopefully, at all.
>
> +42!
> See also the related RFE (drafted only) [0]
>
> [0] https://bugs.launchpad.net/tripleo/+bug/1782139

Thanks James,
Same here +1 (or 42 :))
>
>>
>
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] about unified limits

2018-09-11 Thread Lance Bragstad
Extra eyes on the API would be appreciated. We're also close to the point
where we can start incorporating oslo.limit into services, so preparing
those changes might be useful, too.

One of the outcomes from yesterday's session was that Jay and Mel (from
nova) were going to work out some examples we could use to finish up the
enforcement code in oslo.limit. Helping out with that or picking it up
would certainly help move the ball forward in nova.
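As a rough illustration of the enforcement flow being worked out, the
claim-check described in this thread ("here is a project ID, resource name,
and quantity — is this project over its limit?") might look like the sketch
below. This is not the actual oslo.limit API (which was still under review at
the time); every name here is a placeholder, and the limit/usage callables
stand in for keystone and the consuming service respectively:

```python
class ProjectOverLimit(Exception):
    """Raised when a claim would push a project past its limit."""


def enforce(project_id, claims, get_limit, get_usage):
    """Toy claim check in the spirit of the oslo.limit discussion.

    claims:    mapping of resource name -> requested additional quantity
    get_limit: callable(project_id, resource) -> registered or default
               limit (in the real design this comes from keystone)
    get_usage: callable(project_id, resource) -> current usage, supplied
               by the consuming service (nova, cinder, ...)
    """
    for resource, delta in claims.items():
        limit = get_limit(project_id, resource)
        usage = get_usage(project_id, resource)
        if usage + delta > limit:
            raise ProjectOverLimit(
                "%s: %s usage %d + claim %d exceeds limit %d"
                % (project_id, resource, usage, delta, limit))
```

The key design point is that the library only needs usage callbacks from the
service; limit and hierarchy information stays abstracted behind keystone.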




On Tue, Sep 11, 2018 at 1:15 AM Jaze Lee  wrote:

> I recommend li...@unitedstack.com to join in and help move the work forward.
> Maybe first we should confirm that the keystone unified limits API is
> really OK, or is there something else?
>
> Lance Bragstad  于2018年9月8日周六 上午2:35写道:
> >
> > That would be great! I can break down the work a little bit to help
> describe where we are at with different parts of the initiative. Hopefully
> it will be useful for your colleagues in case they haven't been closely
> following the effort.
> >
> > # keystone
> >
> > Based on the initial note in this thread, I'm sure you're aware of
> keystone's status with respect to unified limits. But to recap, the initial
> implementation landed in Queens and targeted flat enforcement [0]. During
> the Rocky PTG we sat down with other services and a few operators to
> explain the current status in keystone and if either developers or
> operators had feedback on the API specifically. Notes were captured in
> etherpad [1]. We spent the Rocky cycle fixing usability issues with the API
> [2] and implementing support for a hierarchical enforcement model [3].
> >
> > At this point keystone is ready for services to start consuming the
> unified limits work. The unified limits API is still marked as experimental and
> it will likely stay that way until we have at least one project using
> unified limits. We can use that as an opportunity to do a final flush of
> any changes that need to be made to the API before fully supporting it. The
> keystone team expects that to be a quick transition, as we don't want to
> keep the API hanging in an experimental state. It's really just a safe
> guard to make sure we have the opportunity to use it in another service
> before fully committing to the API. Ultimately, we don't want to
> prematurely mark the API as supported when other services aren't even using
> it yet, and then realize it has issues that could have been fixed prior to
> the adoption phase.
> >
> > # oslo.limit
> >
> > In parallel with the keystone work, we created a new library to aid
> services in consuming limits. Currently, the sole purpose of oslo.limit is
> to abstract project and project hierarchy information away from the
> service, so that services don't have to reimplement client code to
> understand project trees, which could arguably become complex and lead to
> inconsistencies in u-x across services.
> >
> > Ideally, a service should be able to pass some relatively basic
> information to oslo.limit and expect an answer on whether or not usage for
> that claim is valid. For example, here is a project ID, resource name, and
> resource quantity, tell me if this project is over it's associated limit or
> default limit.
> >
> > We're currently working on implementing the enforcement bits of
> oslo.limit, which requires making API calls to keystone in order to
> retrieve the deployed enforcement model, limit information, and project
> hierarchies. Then it needs to reason about those things and calculate usage
> from the service in order to determine if the request claim is valid or
> not. There are patches up for this work, and reviews are always welcome [4].
> >
> > Note that we haven't released oslo.limit yet, but once the basic
> enforcement described above is implemented we will. Then services can
> officially pull it into their code as a dependency and we can work out
> remaining bugs in both keystone and oslo.limit. Once we're confident in
> both the API and the library, we'll bump oslo.limit to version 1.0 at the
> same time we graduate the unified limits API from "experimental" to
> "supported". Note that oslo libraries <1.0 are considered experimental,
> which fits nicely with the unified limit API being experimental as we shake
> out usability issues in both pieces of software.
> >
> > # services
> >
> > Finally, we'll be in a position to start integrating oslo.limit into
> services. I imagine this to be a coordinated effort between keystone, oslo,
> and service developers. I do have a patch up that adds a conceptual
> overview for developers consuming oslo.limit [5], which renders into [6].
> >
> > To be honest, this is going to be a very large piece of work and it's
> going to require a lot of communication. In my opinion, I think we can use
> the first couple iterations to generate some well-written usage
> documentation. Any questions coming from developers in this phase should
> probably be answered in documentation if we want to enable folks to pick
> this up and run with it. Otherwise, I could see 

[openstack-dev] [goals][python3][trove] starting zuul migration for trove

2018-09-11 Thread Doug Hellmann
Here are the zuul migration patches for the trove team's repositories.
Please prioritize these reviews.

+----------------------------------------------+--------------------------------+---------------+
| Subject                                      | Repo                           | Branch        |
+----------------------------------------------+--------------------------------+---------------+
| remove job settings for trove repositories   | openstack-infra/project-config | master        |
| import zuul job settings from project-config | openstack/python-troveclient   | master        |
| switch documentation job to new PTI          | openstack/python-troveclient   | master        |
| add python 3.6 unit test job                 | openstack/python-troveclient   | master        |
| import zuul job settings from project-config | openstack/python-troveclient   | stable/ocata  |
| import zuul job settings from project-config | openstack/python-troveclient   | stable/pike   |
| import zuul job settings from project-config | openstack/python-troveclient   | stable/queens |
| import zuul job settings from project-config | openstack/python-troveclient   | stable/rocky  |
| fix tox python3 overrides                    | openstack/trove                | master        |
| update pylint to 1.9.2                       | openstack/trove                | master        |
| make tox -e pylint only run pylint           | openstack/trove                | master        |
| import zuul job settings from project-config | openstack/trove                | master        |
| switch documentation job to new PTI          | openstack/trove                | master        |
| add python 3.6 unit test job                 | openstack/trove                | master        |
| import zuul job settings from project-config | openstack/trove                | stable/ocata  |
| import zuul job settings from project-config | openstack/trove                | stable/pike   |
| import zuul job settings from project-config | openstack/trove                | stable/queens |
| import zuul job settings from project-config | openstack/trove                | stable/rocky  |
| import zuul job settings from project-config | openstack/trove-dashboard      | master        |
| switch documentation job to new PTI          | openstack/trove-dashboard      | master        |
| add python 3.6 unit test job                 | openstack/trove-dashboard      | master        |
| import zuul job settings from project-config | openstack/trove-dashboard      | stable/ocata  |
| import zuul job settings from project-config | openstack/trove-dashboard      | stable/pike   |
| import zuul job settings from project-config | openstack/trove-dashboard      | stable/queens |
| import zuul job settings from project-config | openstack/trove-dashboard      | stable/rocky  |
| import zuul job settings from project-config | openstack/trove-specs          | master        |
| import zuul job settings from project-config | openstack/trove-tempest-plugin | master        |
+----------------------------------------------+--------------------------------+---------------+

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][placement] openstack/placement governance switch plan

2018-09-11 Thread Doug Hellmann
Excerpts from melanie witt's message of 2018-09-10 23:42:06 -0600:
> Howdy everyone,
> 
> Those of us involved in the placement extraction process sat down 
> together today to discuss the plan for openstack/placement governance. 
> We agreed on a set of criteria which we will use to determine when we 
> will switch the openstack/placement governance from the compute project 
> to its own project. I'd like to update everyone with a summary of the 
> plan we agreed upon.
> 
> Attendees: Balázs Gibizer, Chris Dent, Dan Smith, Ed Leafe, Eric Fried, 
> Jay Pipes, Matt Riedemann, Melanie Witt, Mohammed Naser, Sylvain Bauza
> 
> The targets we have set are:
> 
> - Devstack/grenade job that executes an upgrade which deploys the 
> extracted placement code
> - Support in one of the deployment tools to deploy extracted placement 
> code (Tripleo)
> - An upgrade job using any deployment tool (this might have to be a 
> manual test by a deployment tool team member if none of the deployment 
> tools have an upgrade job)
> - Implementation of nested vGPU resource support in the xenapi and 
> libvirt drivers
> - Functional test with vGPU resources that verifies reshaping of flat 
> vGPU resources to nested vGPU resources and successful scheduling to the 
> same compute host after reshaping
> - Lab test with real hardware of the same ^ (xenapi and libvirt)
> 
> Once we have achieved these targets, we will switch openstack/placement 
> governance from the compute project to its own project. The 
> placement-core team will flatten nova-core into individual members of 
> placement-core so it may evolve, the PTL of openstack/placement will be 
> the same as the openstack/nova PTL for the remainder of the release 
> cycle, and the electorate for the openstack/placement PTL election for 
> the next release cycle will be determined by the commit history of the 
> extracted placement code repo, probably by date, to include contributors 
> from the previous two release cycles, as per usual.
> 
> Thank you to Mohammed for facilitating the discussion, we really 
> appreciate it.
> 
> Cheers,
> -melanie
> 

This is good news. Thank you all for taking the time to sit down and put
this plan together.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Re: [senlin] Nominations to Senlin Core Team

2018-09-11 Thread liu.xuefeng1
+1 for both







Original Mail



From: DucTruong 
To: openstack-dev@lists.openstack.org 
Date: 2018-09-11 01:00
Subject: [openstack-dev] [senlin] Nominations to Senlin Core Team


Hi Senlin Core Team,

I would like to nominate 2 new core reviewers for Senlin:

[1] Jude Cross (jucr...@blizzard.com)
[2] Erik Olof Gunnar Andersson (eanders...@blizzard.com)

Jude has been doing a number of reviews and contributed some important
patches to Senlin during the Rocky cycle that resolved locking
problems.

Erik has the most number of reviews in Rocky and has contributed high
quality code reviews for some time.

[1] 
http://stackalytics.com/?module=senlin-group&metric=marks&release=rocky&user_id=jucr...@blizzard.com
[2] 
http://stackalytics.com/?module=senlin-group&metric=marks&user_id=eandersson&release=rocky

Voting is open for 7 days.  Please reply with your +1 vote in favor or
-1 as a veto vote.

Regards,

Duc (dtruong)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] ODL Fluorine SR0 debian package

2018-09-11 Thread Markou Dimitris
Hello community,


The new ODL Fluorine SR0 Debian package has been uploaded to the ODL team PPA:
https://launchpad.net/~odl-team/+archive/ubuntu/fluorine



Regards,



Dimitrios Markou
Software Engineer

SDN/NFV Team
__
Intracom Telecom
19.7 km Markopoulou Ave., Peania, GR 19002
t:   +30 2106677408
f:   +30 2106671887
  mar...@intracom-telecom.com
  www.intracom-telecom.com




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr][fuxi] Retiring fuxi* projects

2018-09-11 Thread Daniel Mellado
Hi all,

After having discussed this with the project maintainers, we will no longer
be supporting fuxi, fuxi-golang, or fuxi-kubernetes, and I'll start the
process of retiring them.

We're driving this as part of the py3 goal and because contributors no
longer have time available to work on these projects.

Thanks for your help so far!

Best!

Daniel


0x13DDF774E05F5B85.asc
Description: application/pgp-keys


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Release-job-failures] Tag of openstack/kuryr-kubernetes failed

2018-09-11 Thread Tony Breeds
On Tue, Sep 11, 2018 at 10:20:52AM +, z...@openstack.org wrote:
> Build failed.
> 
> - publish-openstack-releasenotes 
> http://logs.openstack.org/6c/6ce2f5edd0b3dbb2c7edebca37ccc8219675e189/tag/publish-openstack-releasenotes/85bfc1a/
>  : FAILURE in 4m 45s
> - publish-openstack-releasenotes-python3 
> http://logs.openstack.org/6c/6ce2f5edd0b3dbb2c7edebca37ccc8219675e189/tag/publish-openstack-releasenotes-python3/abd87f9/
>  : SUCCESS in 4m 17s

This looks like the same failure from yesterday which has been fixed in
reno but not yet released.  As Doug points out the py3 job passed so the
content is live ;P

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Plan management refactoring for Life cycle

2018-09-11 Thread Bogdan Dobrelya

On 9/11/18 4:43 AM, James Slagle wrote:

On Mon, Sep 10, 2018 at 10:12 AM Jiri Tomasek  wrote:


Hi Mathieu,

Thanks for bringing up the topic. There are several efforts currently in progress 
which should lead to solving the problems you're describing. We are working on 
introducing CLI commands which would perform the deployment configuration operations 
on deployment plan in Swift. This is a main step to finally reach CLI and GUI 
compatibility/interoperability. CLI will perform actions to configure deployment 
(roles, networks, environments selection, parameters setting etc.) by calling Mistral 
workflows which store the information in deployment plan in Swift. The result is that 
all the information which define the deployment are stored in central place - 
deployment plan in Swift and the deploy command is turned into simple 'openstack 
overcloud  deploy'. Deployment plan then has plan-environment.yaml 
which has the list of environments used and customized parameter values, 
roles-data.yaml which carry roles definition and network-data.yaml which carry 
networks definition. The information stored in these files (and deployment plan in 
general) can then be treated as source of information about deployment. The 
deployment can then be easily exported and reliably replicated.

Here is the document which we put together to identify missing pieces between 
GUI,CLI and Mistral TripleO API. We'll use this to discuss the topic at PTG 
this week and define work needed to be done to achieve the complete 
interoperability. [1]

Also there is a pending patch from Steven Hardy which aims to remove CLI 
specific environments merging which should fix the problem with tracking of the 
environments used with CLI deployment. [2]

[1] https://gist.github.com/jtomasek/8c2ae6118be0823784cdafebd9c0edac 
(Apologies for inconvenient format, I'll try to update this to better/editable 
format. Original doc: 
https://docs.google.com/spreadsheets/d/1ERfx2rnPq6VjkJ62JlA_E6jFuHt9vVl3j95dg6-mZBM/edit?usp=sharing)
[2] https://review.openstack.org/#/c/448209/



Related to this work, I'd like to see us store the plan in git instead
of swift. I think this would reduce some of the complexity around plan
management, and move us closer to a simpler undercloud architecture.
It would be nice to see each change to the plan represented as a new git
commit, so we can even see the changes to the plan as roles, networks,
services, etc, are selected.

I also think git would provide a familiar experience for both
developers and operators who are already accustomed to devops type
workflows. I think we could make these changes without impacting the
API too much or, hopefully, at all.
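The "plan as git history" idea above — each change to roles, networks, or
environments recorded as a commit, with visible diffs between revisions — can
be illustrated with a toy in-memory model. None of this is TripleO code; the
class, file names, and commit format are all hypothetical:

```python
import copy
import hashlib
import json


class PlanRepo:
    """Toy git-like history for a deployment plan (illustrative only)."""

    def __init__(self):
        self.commits = []  # list of (sha, message, snapshot) tuples

    def commit(self, message, plan):
        """Record the current plan state as a new 'commit'."""
        snapshot = copy.deepcopy(plan)
        payload = json.dumps(snapshot, sort_keys=True).encode()
        sha = hashlib.sha1(payload).hexdigest()[:7]
        self.commits.append((sha, message, snapshot))
        return sha

    def diff(self, older=-2, newer=-1):
        """Names of plan files whose contents changed between two commits."""
        a = self.commits[older][2]
        b = self.commits[newer][2]
        return {k for k in set(a) | set(b) if a.get(k) != b.get(k)}
```

With real git, the same effect falls out for free: `git log` gives the audit
trail and `git diff` shows exactly which plan files a role or network
selection touched.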


+42!
See also the related RFE (drafted only) [0]

[0] https://bugs.launchpad.net/tripleo/+bug/1782139






--
Best regards,
Bogdan Dobrelya,
Irc #bogdando
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] [PTL] PTL on Vacation 17th - 28th September

2018-09-11 Thread Dougal Matthews
Hey all,

I'll be on vacation from 17th to the 28th of September. I don't anticipate
anything coming up but Renat Akhmerov is standing in as PTL while I'm out.

Cheers,
Dougal
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] about unified limits

2018-09-11 Thread Jaze Lee
I recommend li...@unitedstack.com to join in and help move the work forward.
Maybe first we should confirm that the keystone unified limits API is really
OK, or is there something else?

Lance Bragstad  于2018年9月8日周六 上午2:35写道:
>
> That would be great! I can break down the work a little bit to help describe 
> where we are at with different parts of the initiative. Hopefully it will be 
> useful for your colleagues in case they haven't been closely following the 
> effort.
>
> # keystone
>
> Based on the initial note in this thread, I'm sure you're aware of keystone's 
> status with respect to unified limits. But to recap, the initial 
> implementation landed in Queens and targeted flat enforcement [0]. During the 
> Rocky PTG we sat down with other services and a few operators to explain the 
> current status in keystone and if either developers or operators had feedback 
> on the API specifically. Notes were captured in etherpad [1]. We spent the 
> Rocky cycle fixing usability issues with the API [2] and implementing support 
> for a hierarchical enforcement model [3].
>
> At this point keystone is ready for services to start consuming the unified 
> limits work. The unified limits API is still marked as stable and it will 
> likely stay that way until we have at least one project using unified limits. 
> We can use that as an opportunity to do a final flush of any changes that 
> need to be made to the API before fully supporting it. The keystone team 
> expects that to be a quick transition, as we don't want to keep the API 
> hanging in an experimental state. It's really just a safeguard to make sure 
> we have the opportunity to use it in another service before fully committing 
> to the API. Ultimately, we don't want to prematurely mark the API as 
> supported when other services aren't even using it yet, and then realize it 
> has issues that could have been fixed prior to the adoption phase.
>
> # oslo.limit
>
> In parallel with the keystone work, we created a new library to aid services 
> in consuming limits. Currently, the sole purpose of oslo.limit is to abstract 
> project and project hierarchy information away from the service, so that 
> services don't have to reimplement client code to understand project trees, 
> which could arguably become complex and lead to inconsistencies in UX across 
> services.
>
> Ideally, a service should be able to pass some relatively basic information 
> to oslo.limit and expect an answer on whether or not usage for that claim is 
> valid. For example: given a project ID, resource name, and resource
> quantity, tell me whether this project is over its associated limit or the
> default limit.
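
The interaction described above can be sketched roughly as follows. This is
a hypothetical illustration of the shape of the exchange, not the real
oslo.limit interface; the usage callback is the key idea, since it lets the
service keep ownership of how usage is actually counted.

```python
# Hypothetical enforcer sketch (NOT the real oslo.limit API): the service
# hands over limit data and a usage callback, then asks whether a claim
# of `requested` more units would exceed the project's limit.

class Enforcer:
    def __init__(self, limits, usage_callback):
        # limits: {(project_id, resource): limit}
        # usage_callback(project_id, resource) -> current usage, supplied
        # by the service so the library never counts usage itself.
        self._limits = limits
        self._usage = usage_callback

    def check(self, project_id, resource, requested):
        """Return True if current usage plus the claim fits the limit."""
        limit = self._limits[(project_id, resource)]
        return self._usage(project_id, resource) + requested <= limit

# The service supplies its own usage accounting:
current = {("proj-a", "instances"): 8}
enforcer = Enforcer({("proj-a", "instances"): 10},
                    lambda p, r: current[(p, r)])

print(enforcer.check("proj-a", "instances", 2))  # True: 8 + 2 <= 10
print(enforcer.check("proj-a", "instances", 3))  # False: 8 + 3 > 10
```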
>
> We're currently working on implementing the enforcement bits of oslo.limit, 
> which requires making API calls to keystone in order to retrieve the deployed 
> enforcement model, limit information, and project hierarchies. Then it needs 
> to reason about those things and calculate usage from the service in order to 
> determine if the request claim is valid or not. There are patches up for this 
> work, and reviews are always welcome [4].
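
As a rough editorial sketch of what hierarchy-aware enforcement has to
reason about (hypothetical data and logic, not keystone or oslo.limit
code): a claim in a child project must fit the child's own limit and still
leave total usage across the tree within the parent's limit.

```python
# Hypothetical two-level hierarchy check, loosely inspired by the strict
# enforcement idea; all names and rules here are illustrative only.

tree = {"parent": ["child-1", "child-2"]}        # parent -> children
limits = {"parent": 9, "child-1": 6, "child-2": 6}
usage = {"parent": 0, "child-1": 4, "child-2": 3}

def hierarchical_check(project, requested):
    # The project's own limit must hold...
    if usage[project] + requested > limits[project]:
        return False
    # ...and, for a child project, summed usage across the parent and all
    # of its children must still fit within the parent's limit.
    for parent, children in tree.items():
        if project in children:
            total = usage[parent] + sum(usage[c] for c in children)
            if total + requested > limits[parent]:
                return False
    return True

print(hierarchical_check("child-1", 2))  # True: 6 <= 6 and 9 <= 9
print(hierarchical_check("child-1", 3))  # False: 7 > child-1's limit of 6
print(hierarchical_check("child-2", 3))  # False: tree total 10 > parent's 9
```

The last case is the interesting one: the child's own limit is satisfied,
but the claim is still rejected because of sibling usage under the parent.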
>
> Note that we haven't released oslo.limit yet, but once the basic enforcement 
> described above is implemented we will. Then services can officially pull it 
> into their code as a dependency and we can work out remaining bugs in both 
> keystone and oslo.limit. Once we're confident in both the API and the 
> library, we'll bump oslo.limit to version 1.0 at the same time we graduate 
> the unified limits API from "experimental" to "supported". Note that oslo 
> libraries <1.0 are considered experimental, which fits nicely with the 
> unified limit API being experimental as we shake out usability issues in both 
> pieces of software.
>
> # services
>
> Finally, we'll be in a position to start integrating oslo.limit into 
> services. I imagine this to be a coordinated effort between keystone, oslo, 
> and service developers. I do have a patch up that adds a conceptual overview 
> for developers consuming oslo.limit [5], which renders into [6].
>
> To be honest, this is going to be a very large piece of work and it's going 
> to require a lot of communication. In my opinion, I think we can use the 
> first couple iterations to generate some well-written usage documentation. 
> Any questions coming from developers in this phase should probably be 
> answered in documentation if we want to enable folks to pick this up and run 
> with it. Otherwise, I could see the handful of people pushing the effort 
> becoming a bottleneck in adoption.
>
> Hopefully this helps paint the landscape of where things are currently with 
> respect to each piece. As always, let me know if you have any additional 
> questions. If people want to discuss online, you can find me, and other 
> contributors familiar with this topic, in #openstack-keystone or 
> #openstack-dev on IRC (nick: lbragstad).
>
> [0] 
>