Re: [openstack-dev] [zaqar] Not run for PTL

2018-02-01 Thread Xiyuan Wang
Thanks for all your hard work on Zaqar over these years.  Glad to know
you'll still be around. ;)

2018-01-23 16:10 GMT+08:00 hao wang :

> Thanks, Feilong. It's been great working together with you!
>
> 2018-01-23 10:56 GMT+08:00 Fei Long Wang :
> > Hi team,
> >
> > I have been working on Zaqar for more than 4 years and serving as the PTL
> > for the past 5 cycles. I don't plan to run for Zaqar PTL again for the
> > Rocky release. I think it's time for somebody else to lead the team for the
> > next milestone. It has been a great experience for me, and thank you for
> > all the support from the team and the whole community. I will still be
> > around for sure. Thank you.
> >
> > --
> > Cheers & Best regards,
> > Feilong Wang (王飞龙)
> > --
> > Senior Cloud Software Engineer
> > Tel: +64-48032246
> > Email: flw...@catalyst.net.nz
> > Catalyst IT Limited
> > Level 6, Catalyst House, 150 Willis Street, Wellington
> > --
> >
> >
> >
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][neutron-fwaas] Request for inclusion of bug fixes in RC

2018-02-01 Thread Sridar Kandaswamy (skandasw)
Thanks, An. The team has been working with An to review and validate these 
changes. We believe we are close to the final version and should be able to 
merge by tomorrow, barring any unforeseen surprises. So please consider adding 
these to the RC, as they address some critical issues as outlined below.

Thanks

Sridar

On 2/1/18, 10:12 PM, "a...@vn.fujitsu.com"  wrote:

Hi, 

I would like to request inclusion of the following patches which address 
bugs found in our testing.

https://review.openstack.org/#/c/539461/
Addressing: https://bugs.launchpad.net/neutron/+bug/1746404

'auto_associate_default_firewall_group' got an error when a new port is 
created
We started with a config option to disable the default firewall group on 
ports. This caused issues with conntrack, so the option is being removed. Also, 
on a related note, we were mistakenly applying the default firewall group to 
other port types, so we tightened the validation to ensure that the port is a 
VM port.

And
https://review.openstack.org/#/c/536234/
Addressing: https://bugs.launchpad.net/neutron/+bug/1746855

FWaaS v2 failures when the ML2 mechanism driver is Linuxbridge or the security 
group driver is iptables_hybrid
We see failures with Linuxbridge because it is not a supported option. If the 
security group driver is iptables_hybrid, we have seen issues that may already 
be addressed by [1], but without enough validation we would like to prevent 
this scenario as well. With more testing, and after addressing any remaining 
issues, we can remove the restriction on the iptables_hybrid security group 
driver in the Rocky release.

[1] https://review.openstack.org/#/c/538154/

Cheers,
An



[openstack-dev] [neutron][neutron-fwaas] Request for inclusion of bug fixes in RC

2018-02-01 Thread a...@vn.fujitsu.com
Hi, 

I would like to request inclusion of the following patches which address bugs 
found in our testing.

https://review.openstack.org/#/c/539461/
Addressing: https://bugs.launchpad.net/neutron/+bug/1746404

'auto_associate_default_firewall_group' got an error when a new port is created
We started with a config option to disable the default firewall group on 
ports. This caused issues with conntrack, so the option is being removed. Also, 
on a related note, we were mistakenly applying the default firewall group to 
other port types, so we tightened the validation to ensure that the port is a 
VM port.
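
As a rough sketch of the kind of validation described above (illustrative
only: the helper names below are invented and this is not neutron-fwaas code,
though the 'compute:' device_owner prefix is the usual Neutron convention for
VM ports):

```python
# Hypothetical sketch: only associate the default firewall group with VM
# ports, identified by the Neutron 'device_owner' convention. Names here
# are invented for illustration.

def is_vm_port(port):
    """Return True if the port looks like a VM (Nova instance) port."""
    # Neutron sets device_owner to e.g. 'compute:nova' for instance ports,
    # versus 'network:router_interface', 'network:dhcp', etc.
    return port.get("device_owner", "").startswith("compute:")

def maybe_associate_default_fwg(port, associate):
    """Apply the default firewall group association only for VM ports."""
    if is_vm_port(port):
        associate(port)
        return True
    return False

vm_port = {"id": "p1", "device_owner": "compute:nova"}
router_port = {"id": "p2", "device_owner": "network:router_interface"}
```

A check like this would skip router, DHCP, and other infrastructure ports
that the default group was previously (and mistakenly) being applied to.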

And
https://review.openstack.org/#/c/536234/
Addressing: https://bugs.launchpad.net/neutron/+bug/1746855

FWaaS v2 failures when the ML2 mechanism driver is Linuxbridge or the security 
group driver is iptables_hybrid
We see failures with Linuxbridge because it is not a supported option. If the 
security group driver is iptables_hybrid, we have seen issues that may already 
be addressed by [1], but without enough validation we would like to prevent 
this scenario as well. With more testing, and after addressing any remaining 
issues, we can remove the restriction on the iptables_hybrid security group 
driver in the Rocky release.

[1] https://review.openstack.org/#/c/538154/

Cheers,
An



[openstack-dev] [QA][ptl] Quality Assurance PTL Candidacy for Rocky

2018-02-01 Thread gmann
Hi everyone,

I would like to announce my candidacy for Quality Assurance PTL for the
Rocky cycle.

I am glad and lucky to work in the OpenStack community, and I would like to
thank all the contributors who supported me in exploring new things and ideas.
You might know me as gmann on IRC. I joined OpenStack in 2012 and have worked
100% in upstream development since the Icehouse release.
I currently contribute to QA, Nova, and sometimes other projects too, and I am
a core member of QA projects such as Tempest and Patrole. Along with that, I
volunteer as a mentor in the Upstream Institute and help bring new
contributors on board.

The QA program always plays a key role in smooth upstream development and its
quality, and it also helps production clouds with their testing and stability.
QA responsibilities are always challenging and need a lot of coordination and
collaboration with each project team. So far we have been doing a good job,
which is not down to the QA team alone but is a combined effort across all
projects.

The steady flow of ideas from different contributors to improve QA is
phenomenal, and I truly appreciate it. Those ideas have helped the QA program
grow over the last couple of years. One example is extreme testing of
OpenStack, which has been under discussion since previous summits/PTGs, has a
PoC, and will add strength to the QA program. Another example is Patrole for
RBAC testing, which is really important for cloud security. My focus as PTL
will be to make good progress on extreme testing and other new initiatives,
and to make Patrole more stable and feature-rich so that we can see its tests
running in the respective projects' gates.

We have a plugin framework for Tempest/Devstack/Grenade, and their plugins
exist across almost all OpenStack projects, so I feel that collaboration with
project teams is a key role of the QA team. We have been helping to fix
plugins whenever needed, but I would like to improve further in this area. My
objective is to enable each plugin owner to use the QA services and interfaces
in a better and easier way. I would like to improve the relationship, and the
coverage of our help, in every possible way.

Bug triage and gate stability are other important areas for the QA team. We
have been doing well on bug triage for a couple of years, with a target of
zero un-triaged bugs. I would like to make sure we continue to focus on both
areas in the next cycle too.

With that, let me summarize the areas I am planning to focus on in the Rocky
cycle.

* Improvement and new ideas in the QA program overall:
- Improve the testing coverage for key features.

- Improve the QA process and track planned deliverables in a better
way.

- Turn new ideas into running software.

* Collaboration and help:
- Cross-community collaboration on tooling, idea sharing, etc.; OPNFV
and k8s are the best examples as of now.

- Help other projects' development with test writing/improvement and
gate stability.

- Plugin improvement, and helping plugin owners on a daily basis by
defining doable processes and goals.

* Bring on more contributors and core reviewers.

Following are my contribution activities:
* http://stackalytics.com/?release=all&metric=marks&user_id=ghanshyammann&project_type=all
* http://stackalytics.com/?release=all&metric=commits&user_id=ghanshyammann&project_type=all

Thank you for reading and for considering my candidacy.

-gmann


[openstack-dev] [heat][ptl] PTL Candidacy for Rocky

2018-02-01 Thread Rico Lin
Hi All,

I would like to nominate myself for the role of Heat PTL for the Rocky
release.
I have been involved with the project for two and a half years, and it has
been my privilege to work with and learn from this great team, and an honor
to serve as the Pike and Queens PTL.

Over the last half year, the team achieved the following:

* Policy in code
* Heat dashboard
* Heat tempest plugin
* Zuul migration in Heat
* New resources/properties
* Stable gate maintenance
* Became an Interop add-on
* Deprecated/removed a few resources

We also completed 2 blueprints, fixed 62 bugs (and counting), and landed quite
a few non-bug improvements (like memory usage improvements).

I would like to keep tracking the above work, along with some more tasks that
need to be done:

* We need more reviewers and developers. We have a few supermen on our team
  (thank God for that), but we still need more reviewers and developers than
  ever.
* Goal setting and tracking. IMO, it's always good to set goals at the very
  start of a cycle, so that any member can jump in and pick a task up if you
  somehow fail to keep pushing it or get a more critical task to work on.
  Most importantly, it gives us a way to track progress and make sure our
  team stays productive (which it already is). We also need to filter and
  review the current community goals to make sure they don't make things
  worse for Heat.
* Cross-project collaboration. We have shipped some features over the past
  few release cycles. The Heat team has kept working closely with the TripleO
  team to sync on what we have (which is super cool). What I would also like
  to see is more syncing up with other teams who use Heat as part of their
  infrastructure, which could give us more feedback from multiple
  users/projects.
* Inner-team communication. We have faced some communication problems this
  cycle, which means that, as PTL, I am responsible for making sure our team
  has a more comfortable workflow. I will have to try harder to sync up tasks
  within the team, and at the least provide better communication, which
  shouldn't take more time from anyone.

Hope you will consider me for Rocky PTL. Thank you!
Rico Lin


[openstack-dev] [refstack][ptl] PTL Candidacy for Rocky

2018-02-01 Thread Chris Hoge
I am submitting my self nomination to serve as the RefStack PTL for 
the Rocky development cycle. For the Rocky cycle, I will continue
to focus efforts on moving the RefStack Server and Client into
maintenance mode. Outstanding tasks include:

  * Adding functionality to upload subunit data for test results.
  * Adding Tempest autoconfiguration to the client.
  * Updating library dependencies.
  * Providing consistent API documentation.

In the previous cycle, the Tempest Autoconfig project was added to
RefStack governance. Another goal of the Rocky cycle is to transition
project leadership to the Tempest Autoconfig team, as this project is
where the majority of future work is going to happen.

Thank you,

Chris Hoge



Re: [openstack-dev] [ptg] Dublin PTG schedule up

2018-02-01 Thread Matthew Oliver
Sweet, thanks Thierry.

The only issue is that I can see what days things are happening, but not what
rooms they are in. Unless I'm failing at reading a table.

Matt

On Fri, Feb 2, 2018 at 8:02 AM, Thierry Carrez 
wrote:

> Hi everyone,
>
> The schedule for the Dublin PTG is now posted on the PTG website:
> https://www.openstack.org/ptg#tab_schedule
>
> I'll post on this thread if anything changes, but it's pretty unlikely
> at this point.
>
> Note that we have a lot of available rooms on Monday/Tuesday to discuss
> additional topics. If you think of something we should really take half
> a day to discuss, please add it to the following etherpad:
>
> https://etherpad.openstack.org/p/PTG-Dublin-missing-topics
>
> If there is consensus that it's a good topic and we agree on a time to fit
> it, we can add it to the schedule.
>
> For smaller things (like 90-minute discussions) we can book time
> dynamically during the event thanks to the new PTGbot features.
>
> See you there!
>
> --
> Thierry Carrez (ttx)
>


Re: [openstack-dev] [release][PTL] Cycle highlights reminder

2018-02-01 Thread Anne Bertucio
Hi all,

With Queens-3 behind us and RC1 coming up, I wanted to give a gentle reminder 
about cycle-highlights. To get the party started, I added an example 
highlight for Cinder, Horizon, Ironic and Nova (modify as necessary!): 
https://review.openstack.org/#/c/540171/ 


Hopefully this is a fairly painless process that comes with the great reward of 
not answering “What changed in this release?” five times over to various 
marketing and press arms. I’m definitely looking to refine how we handle 
release communications, so come find me in Dublin with all your feedback and 
suggestions!

Cheers,
Anne Bertucio
OpenStack Foundation
a...@openstack.org | 206-992-7961




> On Dec 22, 2017, at 1:06 AM, Thierry Carrez  wrote:
> 
> Matt Riedemann wrote:
>> On 12/14/2017 2:24 PM, Sean McGinnis wrote:
>>> Hey all,
>>> 
>>> As we get closer to Queens-3 and our final RCs, I wanted to remind
>>> everyone
>>> about the new 'cycle-highlights' we have added to our deliverable info.
>>> 
>>> Background
>>> --
>>> 
>>> As a reminder on the background, we were finding that a lot of PTLs were
>>> getting pings several times at the end of every release cycle by
>>> various folks
>>> asking for highlights of what was new and what significant changes
>>> were coming
>>> in the new release. It was often the same answer to journalists, product
>>> managers, and others that needed to compile that info.
>>> 
>>> To try to mitigate that somewhat, we've built in the ability to
>>> capture these
>>> highlights as part of the release. It gets compiled and published to the
>>> website so we have one place to point these folks to. It is intended as a
>>> place
>>> where they can get the basic info they need, not as a complete marketing
>>> message.
>>> 
>>> As you prepare for upcoming releases, please start to consider what
>>> you might
>>> want to show up in this collection. We ideally want just a few
>>> highlights,
>>> probably no more than 3 or 4 in most cases, from each project team.
>>> [...]
> 
>> I didn't see this before the q1 or q2 tags - can the cycle highlights be
>> applied retroactively?
> 
> Cycle highlights are a once-at-the-end-of-the-cycle thing, not a
> per-milestone or per-intermediary-release thing. So you don't need to
> apply anything retroactively for the q1 or q2 milestones.
> 
> Basically near the end of the cycle, you look back at what got done in
> the past 6 months and extract a few key messaging points. Then we build
> a page with all the answers and point all marketing people to it --
> which should avoid duplication of effort in answering a dozen separate
> information requests.
> 
> -- 
> Thierry Carrez (ttx)
> 


[openstack-dev] [release] Release countdown for week R-3, February 3 - 9

2018-02-01 Thread Sean McGinnis
We are already starting on RC week. Time flies when you're having fun.

Development Focus
-

The Release Candidate (RC) deadline is this Thursday, the 8th. Work should be
focused on any release-critical bugs and on wrapping up any remaining feature
work.

General Information
---

All cycle-with-milestones and cycle-with-intermediary projects should cut their
stable/queens branch by the end of this week. This branch will track the Queens
release.

Once stable/queens has been created, master will be ready to switch to Rocky
development. While master will no longer be frozen, please prioritize any work
necessary for completing Queens plans.

Changes can be merged into stable/queens as needed if deemed necessary for an
RC2. Once Queens is released, stable/queens will also be ready for any stable
point releases. Whether fixing something for another RC, or in preparation of a
future stable release, fixes must be merged to master first, then backported to
stable/queens.

Actions
-

cycle-with-milestones deliverables should post an RC1 to openstack/releases
using the version format X.Y.Z.0rc1 along with branch creation from this point.
The deliverable changes should look something like:

  releases:
- projects:
- hash: 90f3ed251084952b43b89a172895a005182e6970
  repo: openstack/example
  version: 1.0.0.0rc1
branches:
  - name: stable/queens
location: 1.0.0.0rc1

Other cycle deliverables (not *-with-milestones) will look the same, but with
your normal versioning.

For deliverables with release notes, you may also want to add, or update, your
release notes links in the deliverable file to something like:

release-notes: https://docs.openstack.org/releasenotes/example/queens.html

And one more reminder: please add the highlights you want for your project
team to the cycle highlights:

http://lists.openstack.org/pipermail/openstack-dev/2017-December/125613.html
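
As a sketch, a cycle-highlights entry in a deliverable file might look like
the following (the `cycle-highlights` field name follows the openstack/releases
deliverable format; the project name and highlight text here are purely
illustrative):

```yaml
# Hedged example only -- highlight text is invented for illustration.
cycle-highlights:
  - Added support for an example feature X, reducing operator effort.
  - Improved upgrade reliability for component Y.
```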


Upcoming Deadlines & Dates
--

Rocky PTL nominations: January 29 - February 1
Rocky PTL election: February 7 - 14
OpenStack Summit Vancouver CFP deadline: February 8
Rocky PTG in Dublin: Week of February 26, 2018
Queens cycle-trailing RC deadline: March 1

-- 
Sean McGinnis (smcginnis)




Re: [openstack-dev] [requirements] FFE for delayed libraries

2018-02-01 Thread Monty Taylor

On 02/01/2018 01:47 PM, Matthew Thode wrote:

> On 18-02-01 13:44:19, Sean McGinnis wrote:
> > Due to gate issues and other delays, there's quite a handful of libs that
> > were not released in time for the requirements freeze.
> >
> > We now believe we've gotten all libraries processed for the final Queens
> > releases. In order to reduce the load, we have batched all upper-constraints
> > bumps for these libs into one patch:
> >
> > https://review.openstack.org/#/c/540105/
> >
> > This is my official FFE request to have these updates accepted for Queens
> > past the requirements freeze.
> >
> > If anyone is aware of any issues with these, please bring that to our
> > attention as soon as possible.
> >
> > Thanks,
> > Sean
> >
> >
> > Affected Updates
> >
> > update constraint for python-saharaclient to new release 1.5.0
> > update constraint for instack-undercloud to new release 8.2.0
> > update constraint for paunch to new release 2.2.0
> > update constraint for python-mistralclient to new release 3.2.0
> > update constraint for python-senlinclient to new release 1.7.0
> > update constraint for pycadf to new release 2.7.0
> > update constraint for os-refresh-config to new release 8.2.0
> > update constraint for tripleo-common to new release 8.4.0
> > update constraint for reno to new release 2.7.0
> > update constraint for os-net-config to new release 8.2.0
> > update constraint for os-apply-config to new release 8.2.0
> > update constraint for os-client-config to new release 1.29.0
> > update constraint for ldappool to new release 2.2.0
> > update constraint for aodhclient to new release 1.0.0
> > update constraint for python-searchlightclient to new release 1.3.0
> > update constraint for mistral-lib to new release 0.4.0
> > update constraint for os-collect-config to new release 8.2.0
> > update constraint for ceilometermiddleware to new release 1.2.0
> > update constraint for tricircleclient to new release 0.3.0
> > update constraint for requestsexceptions to new release 1.4.0
> > update constraint for python-magnumclient to new release 2.8.0
> > update constraint for tosca-parser to new release 0.9.0
> > update constraint for python-tackerclient to new release 0.11.0
> > update constraint for python-heatclient to new release 1.14.0
>
> officially accepted, thanks for keeping me updated while this was going
> on.



After the release of openstacksdk 0.11.1, we got a bug report:

https://bugs.launchpad.net/python-openstacksdk/+bug/1746535

about a regression with python-openstackclient and query parameters. The 
fix was written, landed, backported to stable/queens and released.


I'd like to request we add 0.11.2 to the library FFE.

Thanks!
Monty





[openstack-dev] [ptg] Dublin PTG schedule up

2018-02-01 Thread Thierry Carrez
Hi everyone,

The schedule for the Dublin PTG is now posted on the PTG website:
https://www.openstack.org/ptg#tab_schedule

I'll post on this thread if anything changes, but it's pretty unlikely
at this point.

Note that we have a lot of available rooms on Monday/Tuesday to discuss
additional topics. If you think of something we should really take half
a day to discuss, please add it to the following etherpad:

https://etherpad.openstack.org/p/PTG-Dublin-missing-topics

If there is consensus that it's a good topic and we agree on a time to fit
it, we can add it to the schedule.

For smaller things (like 90-minute discussions) we can book time
dynamically during the event thanks to the new PTGbot features.

See you there!

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [api-wg] [api] [cinder] [nova] Support specify action name in request url

2018-02-01 Thread Ed Leafe
On Jan 18, 2018, at 4:07 AM, TommyLike Hu  wrote:

> Recently we found an issue related to our OpenStack action APIs. We 
> usually expose our OpenStack APIs by registering them in our API gateway (for 
> instance Kong [1]), but this becomes very difficult for action 
> APIs. We cannot register and control them separately, because they all share 
> the same request URL, which is used as the identity in the gateway 
> service, to say nothing of rate limiting and other advanced gateway features. 
> Take a look at the basic resources in OpenStack.

We discussed your email at today’s API-SIG meeting [0]. This is an area that is 
always contentious in the RESTful world. Actions, tasks, and state changes are 
not actual resources, and in a pure REST design they should never be part of 
the URL. Instead, you should POST to the actual resource, with the desired 
action in the body. So in your example:

> URL:/volumes/{volume_id}/action
> BODY:{'extend':{}}

the preferred way of achieving this is:

URL: POST /volumes/{volume_id}
BODY: {‘action’: ‘extend’, ‘params’: {}}

The handler for the POST action should inspect the body, and call the 
appropriate method.

Having said that, we realize that a lot of OpenStack services have adopted the 
more RPC-like approach that you’ve outlined. So while we strongly recommend a 
standard RESTful approach, if you have already released an RPC-like API, our 
advice is:

a) avoid having every possible verb in the URL. In other words, don’t use:
  /volumes/{volume_id}/mount
  /volumes/{volume_id}/umount
  /volumes/{volume_id}/extend
This moves you further into RPC-land, and will make updating your API to a more 
RESTful design more difficult.

b) choose a standard term for the item in the URL. In other words, always use 
‘action’ or ‘task’ or whatever else you have adopted. Don’t mix terminology. 
Then pass the action to perform, along with any parameters in the body. This 
will make it easier to transition to a RESTful design by later updating the 
handlers to first inspect the BODY instead of relying upon the URL to determine 
what action to perform.
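
To make the dispatch-on-body advice concrete, here is a minimal sketch
(purely illustrative: the function names, the handler table, and the extend
semantics below are assumptions, not any project's actual code) of a POST
handler that inspects the body and calls the appropriate method:

```python
# Illustrative sketch of the API-SIG advice: POST to the resource itself
# and dispatch on an 'action' key in the body, rather than encoding the
# verb in the URL. All names here are hypothetical.

def extend_volume(volume, params):
    """Hypothetical action handler: grow the volume to params['new_size']."""
    volume["size"] = params["new_size"]
    return volume

# Registry mapping action names to handlers; adding a new action changes
# only this table, never the URL space.
ACTION_HANDLERS = {"extend": extend_volume}

def post_volume(volume, body):
    """Handle POST /volumes/{volume_id}: inspect the body, then dispatch."""
    handler = ACTION_HANDLERS.get(body.get("action"))
    if handler is None:
        # In a real service this would map to an HTTP 400 response.
        raise ValueError("unsupported action: %r" % body.get("action"))
    return handler(volume, body.get("params", {}))

volume = {"id": "vol-1", "size": 10}
updated = post_volume(volume, {"action": "extend", "params": {"new_size": 20}})
```

Because the verb lives in the body, a gateway can still rate-limit or
authorize per resource URL, while the handler table keeps the RPC-style
actions contained in one place.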

You might also want to contact the Kong developers to see if there is a way to 
work with a RESTful API design.

-- Ed Leafe

[0] 
http://eavesdrop.openstack.org/meetings/api_sig/2018/api_sig.2018-02-01-16.02.log.html#l-28






Re: [openstack-dev] [requirements] FFE for delayed libraries

2018-02-01 Thread Matthew Thode
On 18-02-01 13:44:19, Sean McGinnis wrote:
> Due to gate issues and other delays, there's quite a handful of libs that were
> not released in time for the requirements freeze.
> 
> We now believe we've gotten all libraries processed for the final Queens
> releases. In order to reduce the load, we have batched all upper-constraints
> bumps for these libs into one patch:
> 
> https://review.openstack.org/#/c/540105/
> 
> This is my official FFE request to have these updates accepted for Queens
> past the requirements freeze.
> 
> If anyone is aware of any issues with these, please bring that to our 
> attention
> as soon as possible.
> 
> Thanks,
> Sean
> 
> 
> Affected Updates
> 
> 
> update constraint for python-saharaclient to new release 1.5.0
> update constraint for instack-undercloud to new release 8.2.0
> update constraint for paunch to new release 2.2.0
> update constraint for python-mistralclient to new release 3.2.0
> update constraint for python-senlinclient to new release 1.7.0
> update constraint for pycadf to new release 2.7.0
> update constraint for os-refresh-config to new release 8.2.0
> update constraint for tripleo-common to new release 8.4.0
> update constraint for reno to new release 2.7.0
> update constraint for os-net-config to new release 8.2.0
> update constraint for os-apply-config to new release 8.2.0
> update constraint for os-client-config to new release 1.29.0
> update constraint for ldappool to new release 2.2.0
> update constraint for aodhclient to new release 1.0.0
> update constraint for python-searchlightclient to new release 1.3.0
> update constraint for mistral-lib to new release 0.4.0
> update constraint for os-collect-config to new release 8.2.0
> update constraint for ceilometermiddleware to new release 1.2.0
> update constraint for tricircleclient to new release 0.3.0
> update constraint for requestsexceptions to new release 1.4.0
> update constraint for python-magnumclient to new release 2.8.0
> update constraint for tosca-parser to new release 0.9.0
> update constraint for python-tackerclient to new release 0.11.0
> update constraint for python-heatclient to new release 1.14.0
> 

officially accepted, thanks for keeping me updated while this was going
on.

-- 
Matthew Thode (prometheanfire)




[openstack-dev] [requirements] FFE for delayed libraries

2018-02-01 Thread Sean McGinnis
Due to gate issues and other delays, there's quite a handful of libs that were
not released in time for the requirements freeze.

We now believe we've gotten all libraries processed for the final Queens
releases. In order to reduce the load, we have batched all upper-constraints
bumps for these libs into one patch:

https://review.openstack.org/#/c/540105/

This is my official FFE request to have these updates accepted for Queens
past the requirements freeze.

If anyone is aware of any issues with these, please bring that to our attention
as soon as possible.

Thanks,
Sean


Affected Updates


update constraint for python-saharaclient to new release 1.5.0
update constraint for instack-undercloud to new release 8.2.0
update constraint for paunch to new release 2.2.0
update constraint for python-mistralclient to new release 3.2.0
update constraint for python-senlinclient to new release 1.7.0
update constraint for pycadf to new release 2.7.0
update constraint for os-refresh-config to new release 8.2.0
update constraint for tripleo-common to new release 8.4.0
update constraint for reno to new release 2.7.0
update constraint for os-net-config to new release 8.2.0
update constraint for os-apply-config to new release 8.2.0
update constraint for os-client-config to new release 1.29.0
update constraint for ldappool to new release 2.2.0
update constraint for aodhclient to new release 1.0.0
update constraint for python-searchlightclient to new release 1.3.0
update constraint for mistral-lib to new release 0.4.0
update constraint for os-collect-config to new release 8.2.0
update constraint for ceilometermiddleware to new release 1.2.0
update constraint for tricircleclient to new release 0.3.0
update constraint for requestsexceptions to new release 1.4.0
update constraint for python-magnumclient to new release 2.8.0
update constraint for tosca-parser to new release 0.9.0
update constraint for python-tackerclient to new release 0.11.0
update constraint for python-heatclient to new release 1.14.0
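
For context, each of these bumps amounts to a one-line pin in the requirements
repository's upper-constraints.txt file, e.g. (a sketch of the pinning format;
the new versions shown are taken from the list above):

```
python-heatclient===1.14.0
pycadf===2.7.0
reno===2.7.0
```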



Re: [openstack-dev] [telemetry][heat][mistral][sdk][searchlight][senlin][tacker][tricircle][tripleo] Missing Queens releases

2018-02-01 Thread Sean McGinnis
Just confirming and closing things out. We did not receive any negative
responses to the plan below, so a little earlier today I approved the mentioned
patch and we cut releases and branches for all libs.

The next step is for these new versions to pass CI and to get FFEs to raise
the upper constraints for them past our requirements freeze. That official
request will be coming shortly.

Sean

On Wed, Jan 31, 2018 at 03:03:44PM -0600, Sean McGinnis wrote:
> While reviewing Queens release deliverables and preparing missing 
> stable/queens
> branches, we have identified several libraries that have not had any Queens
> releases.
> 
> In the past, we have stated we would force a release for any missing
> deliverables in order to have a clear branching point. We considered tagging
> the base of the stable/pike branch again and starting a new stable/queens
> branch from there, but that doesn't work for several technical reasons the 
> most
> important of which is that the queens release would not include any changes
> that had been backported to stable/pike, and we have quite a few of those. So,
> we are left with 2 choices: do not release these libraries at all for queens,
> or release from HEAD on master. Skipping the releases entirely will make it
> difficult to provide bug fixes in these libraries over the life of the queens
> release so, although it is potentially disruptive, we plan to release from 
> HEAD
> on master. We will rely on the constraints update mechanism to protect the 
> gate
> if the new releases introduce bugs and teams will be able to fix those 
> problems
> on the new stable/queens branch and then release a new version.
> 
> See https://review.openstack.org/#/c/539657/ and the notes below for details 
> of
> what will be tagged.
> 
> ceilometermiddleware
> 
> 
> Mostly doc and CI related changes, but the "Retrieve project id to ignore from
> keystone" commit (e2bf485) looks like it may be important.
> 
> Heat
> 
> 
> heat-translator
> There are quite a few bug fixes and feature changes merged that have not been
> released. It is currently marked with a type of "library", but we will change
> this to "other" and require a release by the end of the cycle (see
> https://review.openstack.org/#/c/539655/ for that change). Based on the README
> description, this appears to be a command line and therefore should maybe have
> a type of "client-library", but "other" would work as far as release process
> goes. Since this is kind of a special command line, perhaps "other" would be
> the correct type going forward, but we will need input from the Heat team on
> that.
> 
> python-heatclient
> Only reno updates, so a new release on master should not be very disruptive.
> 
> tosca-parser
> Several unreleased bug fixes and feature changes. Consumed by heat-translator
> and tacker, so there is some risk in releasing it this late.
> 
> 
> Mistral
> ---
> 
> mistral-lib
> Mostly packaging and build changes, with a couple of fixes. It is used by
> mistral and tripleo-common.
> 
> SDK
> ---
> 
> requestsexceptions
> No changes this cycle. We will branch stable/queens from the same point as
> stable/pike.
> 
> Searchlight
> ---
> 
> python-searchlightclient
> Only doc and g-r changes. Since the risk here is low, we are going to release
> from master and branch from there.
> 
> Senlin
> --
> 
> python-senlinclient
> Just one bug fix. This is a dependency for heat, mistral, openstackclient,
> python-openstackclient, rally, and senlin-dashboard. The one bug fix looks
> fairly safe though, so we are going to release from master and branch from
> there.
> 
> Tacker
> --
> 
> python-tackerclient
> Many feature changes and bug fixes. This impacts mistral and tacker.
> 
> Tricircle
> -
> 
> python-tricircleclient
> One feature and several g-r changes.
> 
> 
> Please respond here, comment on the patch, or hit us up in #openstack-release
> if you have any questions or concerns.
> 
> Thanks,
> Sean McGinnis (smcginnis)
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [docs][ptl] PTL candidacy for Docs

2018-02-01 Thread Petr Kovar
Hi all,

I'd like to announce my candidacy for PTL of the Docs project for Rocky.

I've been the Docs PTL since Queens and besides my work on OpenStack docs, I
also contribute to the RDO Project.

During the Queens cycle, we mostly finalized our work on project docs
migration, we also continued assisting project teams with their setup for
project-specific content, we improved our template system for
docs.openstack.org, stopped unpublishing EOL content, and more.

We now also have a docs mission statement to help us identify project goals
within a broader OpenStack context. For Rocky, we need to review and
revisit the team goals and continue working on areas like docs theme and
build automation, alongside the content restructure and rework of what is
left in openstack-manuals.

Our Rocky PTG planning is well underway but I think it is now more important
than ever that we keep the project as open as possible to all potential
documentation contributors, regardless of whether they attend in-person
events or not; this also includes drive-by contributions.

Thank you,
pk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][all] New Zuul Depends-On syntax

2018-02-01 Thread James E. Blair
Zane Bitter  writes:

> Yeah, it's definitely nice to have that flexibility. e.g. here is a
> patch that wouldn't merge for 3 months because the thing it was
> dependent on also got proposed as a backport:
>
> https://review.openstack.org/#/c/514761/1
>
> From an OpenStack perspective, it would be nice if a Gerrit ID implied
> a change from the same Gerrit instance as the current repo and the
> same branch as the current patch if it exists (otherwise any branch),
> and we could optionally use a URL instead to select a particular
> change.

Yeah, that's reasonable, and it is similar to things Zuul does in other
areas, but I think one of the things we want to do with Depends-On is
consider that Zuul isn't the only audience.  It's there just as much for
the reviewers, and other folks.  So when it comes to Gerrit change ids,
I feel we had to constrain it to Gerrit's own behavior.  When you click
on one of those in Gerrit, it shows you all of the changes across all of
the repos and branches with that change-id.  So that result list is what
Zuul should work with.  Otherwise there's a discontinuity between what a
user sees when they click the hyperlink under the change-id and what
Zuul does.

Similarly, in the new system, you click the URL and you see what Zuul is
going to use.

And that leads into the reason we want to drop the old syntax: to make
it seamless for a GitHub user to know how to Depends-On a Gerrit change,
and vice versa, with neither requiring domain-specific knowledge about
the system.
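As a hypothetical illustration (the Gerrit URL is the change referenced
above; the change-id and the GitHub URL are made up), the old and new
footers in a commit message look like:

```
# Old, Gerrit-specific syntax:
Depends-On: I6ba2340b55b95b3c5a707c0291a2bf02f7ab8f2a

# New URL-based syntax, readable from either system:
Depends-On: https://review.openstack.org/#/c/514761/
Depends-On: https://github.com/example-org/example-repo/pull/42
```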

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][all] New Zuul Depends-On syntax

2018-02-01 Thread Zane Bitter

On 25/01/18 19:08, James E. Blair wrote:

Mathieu Gagné  writes:


On Thu, Jan 25, 2018 at 3:55 PM, Ben Nemec  wrote:



I'm curious what this means as far as best practices for inter-patch
references.  In the past my understanding was the the change id was
preferred, both because if gerrit changed its URL format the change id links
would be updated appropriately, and also because change ids can be looked up
offline in git commit messages.  Would that still be the case for everything
except depends-on now?


Yes, that's a down-side of URLs.  I personally think it's fine to keep
using change-ids for anything other than Depends-On, though in many of
those cases the commit sha may work as well.


That's my concern too. Also AFAIK, Change-Id is branch agnostic. This
means you can more easily cherry-pick between branches without having
to change the URL to match the new branch for your dependencies.


Yes, there is a positive and negative aspect to this issue.

On the one hand, for those times where it was convenient to say "depend
on this change in all its forms across all branches of all projects",
one must now add a URL for each.

On the other hand, with URLs, it is now possible to indicate that a
change specifically depends on another change targeted to one branch, or
targeted to several branches.  Simply list each URL (or don't) as
appropriate.  That wasn't possible before -- it was all or none.


Yeah, it's definitely nice to have that flexibility. e.g. here is a 
patch that wouldn't merge for 3 months because the thing it was 
dependent on also got proposed as a backport:


https://review.openstack.org/#/c/514761/1

From an OpenStack perspective, it would be nice if a Gerrit ID implied 
a change from the same Gerrit instance as the current repo and the same 
branch as the current patch if it exists (otherwise any branch), and we 
could optionally use a URL instead to select a particular change.


It's not obvious to me that that'd be the wrong thing for a tool that 
works across multiple Gerrit instances and/or other backends either, but 
I'm sure y'all have thought about it in more depth than I have.


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron-lbaas][octavia]Octavia request poll interval not respected

2018-02-01 Thread Michael Johnson
Hi Mihaela,

The polling logic that the neutron-lbaas octavia driver uses to update
the neutron database is as follows:

Once a Create/Update/Delete action is executed against a load balancer
using the Octavia driver a polling thread is created.
On every request_poll_interval the thread queries the Octavia v1 API
to check the status of the modified object.
It will save the updated state in the neutron database and exit if the
object's provisioning status becomes one of: "ACTIVE", "DELETED", or
"ERROR".
It will repeat this polling until one of those provisioning statuses
is met, or the request_poll_timeout is exceeded.
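A minimal sketch of that polling loop (the function and parameter names
here are illustrative, not the driver's real API -- see the driver.py link
below for the actual code):

```python
import time


def poll_completion(get_status, poll_interval=2, poll_timeout=300):
    """Poll get_status() until a terminal provisioning status or timeout.

    Hypothetical sketch of the neutron-lbaas Octavia polling thread.
    """
    terminal = {"ACTIVE", "DELETED", "ERROR"}
    deadline = time.monotonic() + poll_timeout
    while time.monotonic() < deadline:
        status = get_status()       # one GET to the Octavia v1 API
        if status in terminal:
            return status           # driver saves state in the neutron DB
        time.sleep(poll_interval)   # request_poll_interval between polls
    raise TimeoutError("request_poll_timeout exceeded")


# Example: object becomes ACTIVE on the third poll.
statuses = iter(["PENDING_CREATE", "PENDING_CREATE", "ACTIVE"])
print(poll_completion(lambda: next(statuses), poll_interval=0))  # ACTIVE
```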

My suspicion is that the GET requests you are seeing for those objects
are coming from another source.
You can test this by running neutron-lbaas in debug mode; it will then
log a debug message on every polling interval.

The code for this thread is located here:
https://github.com/openstack/neutron-lbaas/blob/stable/ocata/neutron_lbaas/drivers/octavia/driver.py#L66

Michael

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Release-job-failures] [i18n][senlin][release] Tag of openstack/python-senlinclient failed

2018-02-01 Thread Doug Hellmann
Excerpts from zuul's message of 2018-02-01 17:03:06 +:
> Build failed.
> 
> - publish-openstack-releasenotes 
> http://logs.openstack.org/f8/f84d8220a3df4421c1cfa7ee7b1e551b57c3505d/tag/publish-openstack-releasenotes/49c0e16/
>  : POST_FAILURE in 5m 48s
> 

This failure to build the senlin client release notes appears to
have something to do with the internationalization setup. It is
looking for a CSS file under the fr translation, for some reason.
Perhaps this is related to the race condition we know that the
publish jobs have?

Doug

rsync: failed to set permissions on 
"/afs/.openstack.org/docs/releasenotes/python-senlinclient/fr/_static/css/.bootstrap.css.nwixts":
 No such file or directory (2)
rsync: rename 
"/afs/.openstack.org/docs/releasenotes/python-senlinclient/fr/_static/css/.bootstrap.css.nwixts"
 -> "fr/_static/css/bootstrap.css": No such file or directory (2)
rsync error: some files/attrs were not transferred (see previous errors) (code 
23) at main.c(1183) [sender=3.1.1]
Traceback (most recent call last):
 File "/tmp/ansible_bq6ijf9y/ansible_module_zuul_afs.py", line 115, in <module>
   main()
 File "/tmp/ansible_bq6ijf9y/ansible_module_zuul_afs.py", line 110, in main
   output = afs_sync(p['source'], p['target'])
 File "/tmp/ansible_bq6ijf9y/ansible_module_zuul_afs.py", line 95, in afs_sync
   output['output'] = subprocess.check_output(shell_cmd, shell=True)
 File "/usr/lib/python3.5/subprocess.py", line 626, in check_output
   **kwargs).stdout
 File "/usr/lib/python3.5/subprocess.py", line 708, in run
   output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '/bin/bash -c "mkdir -p 
/afs/.openstack.org/docs/releasenotes/python-senlinclient/ && /usr/bin/rsync 
-rtp --safe-links --delete-after --out-format='<>%i %n%L' 
--filter='merge /tmp/tmp9i7el2ow' 
/var/lib/zuul/builds/49c0e164949c43b68c05856f6cc6452e/work/artifacts/ 
/afs/.openstack.org/docs/releasenotes/python-senlinclient/"' returned non-zero 
exit status 23


___
Release-job-failures mailing list
release-job-failu...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [docs] Core review stats for February

2018-02-01 Thread Petr Kovar
Hi all,

This is more of an FYI for people interested in all things docs that the
docs core team agreed to open up the process for new docs core
nominations or removals. Instead of using a private list, this will now be
discussed in public, using the openstack-dev list, as documented here:

https://docs.openstack.org/doc-contrib-guide/docs-review.html#achieving-core-reviewer-status

The docs core team is the core for openstack-manuals, openstackdocstheme,
and openstack-doc-tools, and, as a group member, also for subteam repos
organized under the Docs project, such as contributor-guide or security-doc.

For February, I don't recommend any changes to the core team, which is now
pretty stable. If you have any suggestions, please let us know, preferably,
in this thread.

Thanks,
pk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-sig/news

2018-02-01 Thread michael mccune

Greetings OpenStack community,

Today's meeting was primarily focused on a request for guidance related 
to action endpoints and on planning topics for the upcoming PTG.


Tommy Hu has sent an email to the developer list[7] describing how 
several types of actions are currently being handled through the cinder 
and nova REST interfaces. Specifically, this is related to how APIs are 
registered with a gateway service. The current methodology within cinder 
and nova has been to use generic action endpoints, allowing the body of 
the request to further define the action. These overloaded endpoints 
cause difficulty when using an API gateway. The SIG has taken up 
discussion about how this could be improved and what guidance can be 
created for the community. Although no firm plan has been derived yet, 
the SIG will join the conversation on the mailing list and also discuss 
the wider topic of actions at the PTG.
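For illustration (the paths and bodies below are loosely modeled on nova's
server-actions API; treat them as a sketch rather than exact endpoints),
the overloaded pattern under discussion uses a single route whose request
body selects the action, whereas per-action endpoints give an API gateway
one route per action to register:

```
# Overloaded action endpoint: one route, action chosen by the body
POST /servers/{server_id}/action
{"reboot": {"type": "SOFT"}}

POST /servers/{server_id}/action
{"pause": null}

# Per-action endpoints: easier for a gateway to register and route
POST /servers/{server_id}/reboot
POST /servers/{server_id}/pause
```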


On the topic of the PTG, the SIG has created an etherpad[8] where agenda 
items are starting to be proposed. If you have any topic that you would 
like to discuss, or see discussed, please add it to that etherpad.


As always if you're interested in helping out, in addition to coming to 
the meetings, there's also:


* The list of bugs [5] indicates several missing or incomplete guidelines.
* The existing guidelines [2] always need refreshing to account for 
changes over time. If you find something that's not quite right, submit 
a patch [6] to fix it.
* Have you done something for which you think guidance would have made 
things easier but couldn't find any? Submit a patch and help others [6].


# Newly Published Guidelines

None this week.

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None this week.

# Guidelines Currently Under Review [3]

* Add guideline on exposing microversions in SDKs
  https://review.openstack.org/#/c/532814/

* A (shrinking) suite of several documents about doing version and 
service discovery

  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready 
for review)

  https://review.openstack.org/444892

* WIP: Add API-schema guide (still being defined)
  https://review.openstack.org/#/c/524467/

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs that 
you are developing or changing, please address your concerns in an email 
to the OpenStack developer mailing list[1] with the tag "[api]" in the 
subject. In your email, you should include any relevant reviews, links, 
and comments to help guide the discussion of the specific challenge you 
are facing.


To learn more about the API SIG mission and the work we do, see our wiki 
page [4] and guidelines [2].


Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] 
https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z

[4] https://wiki.openstack.org/wiki/API_SIG
[5] https://bugs.launchpad.net/openstack-api-wg
[6] https://git.openstack.org/cgit/openstack/api-wg
[7] 
http://lists.openstack.org/pipermail/openstack-dev/2018-January/126334.html

[8] https://etherpad.openstack.org/p/api-sig-ptg-rocky


Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_sig/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo]Testing ironic in the overcloud

2018-02-01 Thread Emilien Macchi
On Thu, Feb 1, 2018 at 8:05 AM, Derek Higgins  wrote:
[...]

> o Should I create a new tempest test for baremetal as some of the
>>> networking stuff is different?
>>>
>>
>> I think we would need to run baremetal tests for this new featureset, see
>> existing files for examples.
>>
> Do you mean that we should use existing tests somewhere or create new
> ones?
>

I mean we should use existing tempest tests from ironic, etc. Maybe just a
baremetal scenario that spawns a baremetal server and tests SSH into it, like
we already have with other jobs.

o Is running a script on the controller with NodeExtraConfigPost the best
>>> way to set this up or should I be doing something with quickstart? I don't
>>> think quickstart currently runs things on the controller, does it?
>>>
>>
>> What kind of thing do you want to run exactly?
>>
> The contents to this file will give you an idea, somewhere I need to setup
> a node that ironic will control with ipmi
> https://review.openstack.org/#/c/485261/19/ci/common/vbmc_setup.yaml
>

extraconfig works for me in that case, I guess. Since we don't productize
this code and it's for CI only, it can live here imho.

Thanks,
-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova][ironic][heat] Do we want a BM/VM room at the PTG?

2018-02-01 Thread Rico Lin
>
> Fair point. When the "VM/baremetal workgroup" was originally formed,
> the goal was more about building clouds with both types of resources,
> making them behave similarly from a user perspective, etc. Somehow
> we got into talking applications and these other topics came up, which
> seemed more interesting/pressing to fix. :)
>
> Maybe "cross-project identity integration" or something is a better name?

Cloud-Native Applications are, IMO, one of the ways to see the flow for both
VM/baremetal.
But it's true that having a more specific cross-project goal (which the
`VM/baremetal workgroup` was formed for), to make sure we're marching toward
it, would be even better.
Instead of modifying the name, I would prefer to spend some time tracing the
current flow and coming out with specific targets for teams to work on in
Rocky, to allow building both types of resources so they feel like the same
flow to the user, which of course includes what keystone has already started.
So, other than the topics Colleen mentioned above (and I think they are all
great), we should focus on what topics we can come up with here (I think
that's why Colleen started this ML thread). Ideas?




-- 
May The Force of OpenStack Be With You,

*Rico Lin*irc: ricolin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo]Testing ironic in the overcloud

2018-02-01 Thread Derek Higgins
On 1 February 2018 at 15:36, Emilien Macchi  wrote:

>
>
> On Thu, Feb 1, 2018 at 6:35 AM, Derek Higgins  wrote:
>
>> Hi All,
>>I've been working on a set of patches as a WIP to test ironic in the
>> overcloud[1], the approach I've started with is to add ironic into the
>> overcloud controller in scenario004. Also to run a script on the controller
>> (as a NodeExtraConfigPost) that sets up a VM with vbmc that can then be
>> controlled by ironic. The WIP currently replaces the current tempest tests
>> with some commands to sanity test the setup. This essentially works but
>> things need to be cleaned up a bit so I've a few questions
>>
>> o Is scenario004 the correct choice?
>>
>
> Because we might increase the timeout risk on scenario004, I would
> recommend to create a new dedicated scenario that would deploy a very basic
> overcloud with just ironic + dependencies (keystone, glance, neutron, and
> nova?)
>

Ok, I can do this



>
>
>>
>> o Should I create a new tempest test for baremetal as some of the
>> networking stuff is different?
>>
>
> I think we would need to run baremetal tests for this new featureset, see
> existing files for examples.
>
Do you mean that we should use existing tests somewhere or create new ones?



>
>
>>
>> o Is running a script on the controller with NodeExtraConfigPost the best
>> way to set this up or should I be doing something with quickstart? I don't
>> think quickstart currently runs things on the controller, does it?
>>
>
> What kind of thing do you want to run exactly?
>
The contents to this file will give you an idea, somewhere I need to setup
a node that ironic will control with ipmi
https://review.openstack.org/#/c/485261/19/ci/common/vbmc_setup.yaml


> I'll let the CI squad replies as well but I think we need a new scenario,
> that we would only run when touching ironic files in tripleo. Using
> scenario004 really increases the risk of timeout and we don't want that.
>
Ok




>
> Thanks for this work!
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] LTS pragmatic example

2018-02-01 Thread Matt Riedemann

On 2/1/2018 2:56 AM, Saverio Proto wrote:

Hello !

thanks for accepting the patch :)

It looks like the best approach is always to send an email and have a short
discussion together when we are not sure about a patch.

thank you

Cheers,

Saverio



There is also the #openstack-stable IRC channel if you want to get a 
faster response without having to go to the mailing list. Feel free to 
ping me there anytime about stable patch questions.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] Queens RC review dashboard

2018-02-01 Thread Lance Bragstad
Hey all,

Just like with feature freeze, I put together a review dashboard that
contains patches we need to land in order to cut a release candidate
[0]. I'll be adding more patches throughout the day, but so far there
are 21 changes there waiting for review. If there is something I missed,
please don't hesitate to ping me and I'll get it added. Thanks for all
the hard work. We're on the home stretch!

[0] https://goo.gl/XVw3wr




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo]Testing ironic in the overcloud

2018-02-01 Thread Emilien Macchi
On Thu, Feb 1, 2018 at 6:35 AM, Derek Higgins  wrote:

> Hi All,
>I've been working on a set of patches as a WIP to test ironic in the
> overcloud[1], the approach I've started with is to add ironic into the
> overcloud controller in scenario004. Also to run a script on the controller
> (as a NodeExtraConfigPost) that sets up a VM with vbmc that can then be
> controlled by ironic. The WIP currently replaces the current tempest tests
> with some commands to sanity test the setup. This essentially works but
> things need to be cleaned up a bit so I've a few questions
>
> o Is scenario004 the correct choice?
>

Because we might increase the timeout risk on scenario004, I would
recommend to create a new dedicated scenario that would deploy a very basic
overcloud with just ironic + dependencies (keystone, glance, neutron, and
nova?)


>
> o Should I create a new tempest test for baremetal as some of the
> networking stuff is different?
>

I think we would need to run baremetal tests for this new featureset, see
existing files for examples.


>
> o Is running a script on the controller with NodeExtraConfigPost the best
> way to set this up or should I be doing something with quickstart? I don't
> think quickstart currently runs things on the controller, does it?
>

What kind of thing do you want to run exactly?
I'll let the CI squad replies as well but I think we need a new scenario,
that we would only run when touching ironic files in tripleo. Using
scenario004 really increases the risk of timeout and we don't want that.

Thanks for this work!
-- 
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo]Testing ironic in the overcloud

2018-02-01 Thread Derek Higgins
Hi All,
   I've been working on a set of patches as a WIP to test ironic in the
overcloud[1], the approach I've started with is to add ironic into the
overcloud controller in scenario004. Also to run a script on the controller
(as a NodeExtraConfigPost) that sets up a VM with vbmc that can then be
controlled by ironic. The WIP currently replaces the current tempest tests
with some commands to sanity test the setup. This essentially works but
things need to be cleaned up a bit so I've a few questions

o Is scenario004 the correct choice?

o Should I create a new tempest test for baremetal as some of the
networking stuff is different?

o Is running a script on the controller with NodeExtraConfigPost the best
way to set this up or should I be doing something with quickstart? I don't
think quickstart currently runs things on the controller, does it?

thanks,
Derek.

[1] - https://review.openstack.org/#/c/485261
  https://review.openstack.org/#/c/509728/
  https://review.openstack.org/#/c/509829/
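For reference, the kind of vbmc setup the NodeExtraConfigPost script in the
first review above performs might look roughly like this (a sketch, assuming
a libvirt domain named overcloud-ironic-0 already exists and that the chosen
credentials/port match what ironic is configured with):

```
# Expose an existing VM through a virtual BMC so ironic can drive it
# over IPMI, as if it were a physical baremetal node.
vbmc add overcloud-ironic-0 --port 6230 --username admin --password password
vbmc start overcloud-ironic-0

# Sanity-check that the virtual BMC answers IPMI requests:
ipmitool -I lanplus -H 127.0.0.1 -p 6230 -U admin -P password power status
```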
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage][ptl] PTL candidacy for Rocky

2018-02-01 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Sorry for the copy problem in the previous email… 
Ifat


On 01/02/2018, 15:24, "Afek, Ifat (Nokia - IL/Kfar Sava)"  
wrote:

Hi all,

I would like to announce my candidacy to continue as Vitrage PTL for the 
Rocky
release.

I’ve been the PTL of Vitrage since the day it started, in the Mitaka 
release.
I think we have made an amazing journey, and we now have a mature, stable 
and
well known project. During the Queens cycle our community has grown, and we
managed to complete many important tasks, like:

* API for template add and template delete
* Enhancements in the templates language
* API for registering web hooks on Vitrage alarms
* Performance enhancements, mostly around parallel evaluation of the 
templates

I believe that these new features will greatly improve the usability of
Vitrage.

As for the Rocky cycle, I think we have many challenging tasks in our road 
map.
We have a great team which combines very experienced contributors and
enthusiastic newcomers, and we are always happy to welcome new contributors.

The issues that I think we should focus on are:

* Alarm and resource aggregation
* Proactive RCA (Root Cause Analysis)
* RCA history
* Kubernetes Support
* API enhancements, mostly around the topology queries

I look forward to working with you all in the coming cycle.

Thanks,
Ifat.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron-lbaas][octavia]Octavia request poll interval not respected

2018-02-01 Thread mihaela.balas
Hello,

I have the following setup:
Neutron - Newton version
Octavia - Ocata version

Neutron LBaaS had the following configuration in services_lbaas.conf:

[octavia]

..
# Interval in seconds to poll octavia when an entity is created, updated, or
# deleted. (integer value)
request_poll_interval = 2

# Time to stop polling octavia when a status of an entity does not change.
# (integer value)
request_poll_timeout = 300



However, neutron-lbaas seems not to respect the request poll interval and it 
takes about 15 minutes to create a load balancer+listener+pool+members+hm. 
Below, you have the timestamps for the API calls made by neutron towards 
Octavia (extracted with tcpdump when I create a load balancer from horizon GUI):

10.100.0.14 - - [01/Feb/2018 12:11:53] "POST /v1/loadbalancers HTTP/1.1" 202 437
10.100.0.14 - - [01/Feb/2018 12:11:54] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 430
10.100.0.14 - - [01/Feb/2018 12:11:58] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 447
10.100.0.14 - - [01/Feb/2018 12:12:00] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 447
10.100.0.14 - - [01/Feb/2018 12:14:12] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 438
10.100.0.14 - - [01/Feb/2018 12:16:23] "POST 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380/listeners HTTP/1.1" 202 
445
10.100.0.14 - - [01/Feb/2018 12:16:23] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 446
10.100.0.14 - - [01/Feb/2018 12:18:32] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 438
10.100.0.14 - - [01/Feb/2018 12:18:37] "POST 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380/pools HTTP/1.1" 202 318
10.100.0.14 - - [01/Feb/2018 12:18:37] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 446
10.100.0.14 - - [01/Feb/2018 12:20:46] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 438
10.100.0.14 - - [01/Feb/2018 12:23:00] "POST 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380/pools/ea11699e-3fff-445c-8dd0-2acfbff69c9c/members
 HTTP/1.1" 202 317
10.100.0.14 - - [01/Feb/2018 12:23:00] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 446
10.100.0.14 - - [01/Feb/2018 12:23:05] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 438
10.100.0.14 - - [01/Feb/2018 12:23:08] "POST 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380/pools/ea11699e-3fff-445c-8dd0-2acfbff69c9c/members
 HTTP/1.1" 202 316
10.100.0.14 - - [01/Feb/2018 12:23:08] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 446
10.100.0.14 - - [01/Feb/2018 12:25:20] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 438
10.100.0.14 - - [01/Feb/2018 12:25:23] "POST 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380/pools/ea11699e-3fff-445c-8dd0-2acfbff69c9c/healthmonitor
 HTTP/1.1" 202 215
10.100.0.14 - - [01/Feb/2018 12:27:30] "GET 
/v1/loadbalancers/8c734a97-f9a4-4120-8ba8-cc69b44ff380 HTTP/1.1" 200 437

It seems that after one or two polls, it waits more than two minutes until the 
next poll. Is this normal? Has anyone else seen this behavior?
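For context, the two options above imply a polling loop along these lines. This is a minimal illustrative sketch, not the actual neutron-lbaas code; the function names and the 'ACTIVE' completion check are assumptions:

```python
import time

# Minimal sketch of the polling that the two options above describe.
# Illustrative only -- not the actual neutron-lbaas implementation.
REQUEST_POLL_INTERVAL = 2    # seconds between status polls
REQUEST_POLL_TIMEOUT = 300   # give up after this long without completion

def wait_for_completion(get_status, interval=REQUEST_POLL_INTERVAL,
                        timeout=REQUEST_POLL_TIMEOUT, sleep=time.sleep):
    """Poll get_status() every `interval` seconds until it reports
    'ACTIVE', raising if `timeout` seconds pass first."""
    elapsed = 0
    while True:
        status = get_status()
        if status == 'ACTIVE':
            return status
        if elapsed >= timeout:
            raise TimeoutError('entity stuck in status %s' % status)
        sleep(interval)
        elapsed += interval
```

With request_poll_interval = 2, consecutive GETs in the log above should be roughly two seconds apart, so the two-minute gaps suggest the configured interval is not being used.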

Thank you,
Mihaela Balas

_

Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage][ptl] PTL candidacy for Rocky

2018-02-01 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi all,

I would like to announce my candidacy to continue as Vitrage PTL for the Rocky
release.

I’ve been the PTL of Vitrage since the day it started, in the Mitaka release.
I think we have made an amazing journey, and we now have a mature, stable and
well-known project. During the Queens cycle our community has grown, and we
managed to complete many important tasks, like:

* API for template add and template delete
* Enhancements in the templates language
* API for registering web hooks on Vitrage alarms
* Performance enhancements, mostly around parallel evaluation of the templates

I believe that these new features will greatly improve the usability of
Vitrage.

As for the Rocky cycle, I think we have many challenging tasks in our road map.
We have a great team which combines very experienced contributors and
enthusiastic newcomers, and we are always happy to welcome new contributors.

The issues that I think we should focus on are:

* Alarm and resource aggregation
* Proactive RCA (Root Cause Analysis)
* RCA history
* Kubernetes Support
* API enhancements, mostly around the topology queries

I look forward to working with you all in the coming cycle.

Thanks,
Ifat.




Re: [openstack-dev] PTG Dublin - Price Increase this Thursday

2018-02-01 Thread Thierry Carrez
Reminder: last hours to pick up your PTG ticket at the normal price!

Kendall Waters wrote:
> Hi everyone,
> 
> We are four weeks out from the Dublin Project Teams Gathering (February
> 26 - March 2nd), and we are expecting the event to sell out! You have
> two more days to book your ticket at the normal price. We'll switch to
> last-minute price (USD $200) on Thursday, February 1st at 12 noon CT
> (18:00 UTC). So go and grab your ticket before the price increases! [1]
> 
> Cheers,
> Kendall
> 
> [1] https://rockyptg.eventbrite.com

-- 
Thierry Carrez (ttx)



Re: [openstack-dev] [tripleo] opendaylight OpenDaylightConnectionProtocol deprecation issue

2018-02-01 Thread Moshe Levi


> -Original Message-
> From: Ben Nemec [mailto:openst...@nemebean.com]
> Sent: Wednesday, January 31, 2018 5:10 PM
> To: OpenStack Development Mailing List (not for usage questions)
> ; Moshe Levi
> 
> Subject: Re: [openstack-dev] [tripleo] opendaylight
> OpenDaylightConnectionProtocol deprecation issue
> 
> 
> 
> On 01/29/2018 04:27 AM, Moshe Levi wrote:
> > Hi all,
> >
> > It seem that this commit [1] deprecated the
> > OpenDaylightConnectionProtocol, but it also remove it.
> >
> > This is causing the following issue when we deploy opendaylight non
> > containerized. See [2]
> >
> > One solution is to add back the OpenDaylightConnectionProtocol [3] the
> > other solution is to remove the OpenDaylightConnectionProtocol from
> > the deprecated parameter_groups [4].
> 
> Looks like the deprecation was done incorrectly.  The parameter should have
> been left in place and referenced in the deprecated group.  So I think the fix
> would just be to put the parameter definition back.
OK, I proposed this fix to resolve it: https://review.openstack.org/#/c/539917/
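Ben's suggestion above can be sketched as follows. This is an illustrative Heat-template fragment showing the usual deprecation pattern (definition kept in place, plus a listing in a deprecated parameter group), not the actual tripleo-heat-templates content; the description and default are assumptions:

```yaml
# Hypothetical sketch of the deprecation pattern: the parameter
# definition stays in place, and the deprecated group only references
# it, so existing environments keep working but emit a warning.
parameters:
  OpenDaylightConnectionProtocol:
    description: L7 protocol used for REST access
    type: string
    default: http

parameter_groups:
- label: deprecated
  description: Do not use deprecated parameters; they will be removed.
  parameters:
  - OpenDaylightConnectionProtocol
```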


> 
> >
> > [1] - https://github.com/openstack/tripleo-heat-templates/commit/af4ce05dc5270b84864a382ddb2a1161d9082eab
> >
> > [2] - http://paste.openstack.org/show/656702/
> >
> > [3] - https://github.com/openstack/tripleo-heat-templates/commit/af4ce05dc5270b84864a382ddb2a1161d9082eab#diff-21674daa44a327c016a80173efeb10e7L20
> >
> > [4] - https://github.com/openstack/tripleo-heat-templates/commit/af4ce05dc5270b84864a382ddb2a1161d9082eab#diff-21674daa44a327c016a80173efeb10e7R112
> >



Re: [openstack-dev] [nova] Requesting eyes on fix for bug 1686703

2018-02-01 Thread Matthew Booth
On 31 January 2018 at 16:32, Matt Riedemann  wrote:

> On 1/31/2018 7:30 AM, Matthew Booth wrote:
>
>> Could I please have some eyes on this bugfix:
>> https://review.openstack.org/#/c/462521/ . I addressed an issue raised
>> in August 2017, and it's had no negative feedback since. It would be good
>> to get this one finished.
>>
>
> First, I'd like to avoid setting a precedent of asking for reviews in the
> ML. So please don't do this.
>

I don't generally do this, but I think a polite request after 6 months or
so is reasonable when something has fallen through the cracks.


> Second, this is a latent issue, and we're less than two weeks to RC1, so
> I'd prefer that we hold this off until Rocky opens up in case it introduces
> any regressions so we at least have time to deal with those when we're not
> in stop-ship mode.
>

That's fine. Looks like I have new feedback to address in the meantime
anyway.

Matt
-- 
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG

Phone: +442070094448 (UK)


Re: [openstack-dev] [Openstack-operators] LTS pragmatic example

2018-02-01 Thread Saverio Proto
Hello !

thanks for accepting the patch :)

It looks like the best approach is always to send an email and have a short
discussion together when we are not sure about a patch.
thank you

Cheers,

Saverio



Re: [openstack-dev] [sdk][ptl] PTL Candidacy for Rocky

2018-02-01 Thread Sławomir Kapłoński
Big +1 from me :)

— 
Best regards
Slawek Kaplonski
sla...@kaplonski.pl



> Message written by Monty Taylor  on 31.01.2018 at 16:54:
> 
> Hi everybody!
> 
> I'd like to run for PTL of OpenStackSDK again
> 
> This last cycle was pretty exciting. We merged the shade and openstacksdk 
> projects into a single team. We shifted os-client-config to that team as 
> well. We merged the code from shade and os-client-config into openstacksdk, 
> and then renamed the team.
> 
> It wasn't just about merging projects though. We got some rework done to base 
> the Proxy classes on keystoneauth Adapters providing direct passthrough REST 
> availability for services. We finished the Resource2/Proxy2 transition. We 
> updated pagination to work for all of the OpenStack services - and in the 
> process uncovered a potential cross-project goal. And we tied services in 
> openstacksdk to services listed in the Service Types Authority.
> 
> Moving forward, there's tons to do.
> 
> First and foremost we need to finish integrating the shade code into the sdk 
> codebase. The sdk layer and the shade layer are currently friendly but 
> separate, and that doesn't make sense long term. To do this, we need to 
> figure out a plan for rationalizing the return types - shade returns 
> munch.Munch objects which are dicts that support object attribute access. The 
> sdk returns Resource objects.
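The mismatch can be sketched briefly. AttrDict below mimics what munch.Munch provides (a dict whose keys double as attributes); the class and the sample data are illustrative, not the real shade/openstacksdk code:

```python
# Illustrative stand-in for munch.Munch: a dict whose keys are also
# readable and writable as attributes, which is the return style
# shade callers rely on. Not the actual shade/openstacksdk code.
class AttrDict(dict):
    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        self[name] = value

server = AttrDict(id='abc123', name='web-1', status='ACTIVE')
# Both access styles work on the shade-style return value:
assert server['name'] == server.name == 'web-1'
```

An sdk Resource, by contrast, exposes typed attributes, so rationalizing the two return types means choosing (or bridging) one of these access styles.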
> 
> There are also multiple places where the logic in the shade layer can and 
> should move into the sdk's Proxy layer. Good examples of this are swift 
> object uploads and downloads and glance image uploads.
> 
> I'd like to move masakari and tricircle's out-of-tree SDK classes in tree.
> 
> shade's caching and rate-limiting layer needs to be shifted to be able to 
> apply to both levels, and the special caching for servers, ports and
> floating-ips needs to be replaced with the general system. For us to do that 
> though, the general system needs to be improved to handle nodepool's batched 
> rate-limited use case as well.
> 
> We need to remove the guts of both shade and os-client-config in their repos 
> and turn them into backwards compatibility shims.
> 
> We need to work with the python-openstackclient team to finish getting the 
> current sdk usage updated to the non-Profile-based flow, and to make sure 
> we're providing what they need to start replacing uses of python-*client with 
> uses of sdk.
> 
> I know the folks with the shade team background are going to LOVE this one, 
> but we need to migrate existing sdk tests that mock sdk objects to 
> requests-mock. (We also missed a few shade tests that still mock out methods 
> on OpenStackCloud that need to get transitioned)
> 
> Finally - we need to get a 1.0 out this cycle. We're very close - the main 
> sticking point now is the shade/os-client-config layer, and specifically 
> cleaning up a few pieces of shade's API that weren't great but which we 
> couldn't change due to API contracts.
> 
> I'm sure there will be more things to do too. There always are.
> 
> In any case, I'd love to keep helping to push these rocks uphill.
> 
> Thanks!
> Monty
> 

