Re: [openstack-dev] [freezer][tc] removing freezer from governance

2018-08-03 Thread Rong Zhu
Hi, all

I think backup, restore, and disaster recovery are among the important things
in OpenStack, and our company (ZTE) has already integrated Freezer into our
production. We have also built some features on top of Freezer, and we could
push those features to the community. Could you give us a chance to take over
Freezer in the Stein cycle? If there is still no progress, this removal could
be done after the Stein cycle.

Thank you for your consideration.

-- 
Thanks,
Rong Zhu

On Sat, Aug 4, 2018 at 3:16 AM Doug Hellmann  wrote:

> Based on the fact that the Freezer team missed the Rocky release and
> Stein PTL elections, I have proposed a patch to remove the project from
> governance. If the project is still being actively maintained and
> someone wants to take over leadership, please let us know here in this
> thread or on the patch.
>
> Doug
>
> https://review.openstack.org/#/c/588645/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [neutron] Port mirroring for SR-IOV VF to VF mirroring

2018-08-03 Thread Zhao, Forrest
Hi Miguel,

Can we put the proposed topics on this PTG etherpad directly, or should we
first discuss them in the weekly Neutron project meeting?

Please advise; we'll then follow the process to propose the PTG topics.

Thanks,
Forrest

From: Miguel Lavalle [mailto:mig...@mlavalle.com]
Sent: Friday, August 3, 2018 11:41 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [neutron] Port mirroring for SR-IOV VF to VF 
mirroring

Forrest, Manjeet,

Here you go: https://etherpad.openstack.org/p/neutron-stein-ptg

Best regards

On Wed, Aug 1, 2018 at 11:49 AM, Bhatia, Manjeet S
<manjeet.s.bha...@intel.com> wrote:
Hi,

Yes, we need to refine the spec for sure; once a consensus is reached, the
focus will be on implementation. Here's the implementation patch (WIP):
https://review.openstack.org/#/c/584892/ . We can't really review the API part
until the spec is finalized, but other stuff like config and common issues can
still be pointed out, and progress can be made until consensus on the API is
reached. Miguel, I think this should be added to the etherpad for PTG
discussions as well?

Thanks and Regards !
Manjeet




From: Miguel Lavalle [mailto:mig...@mlavalle.com]
Sent: Tuesday, July 31, 2018 10:26 AM
To: Zhao, Forrest <forrest.z...@intel.com>
Cc: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [neutron] Port mirroring for SR-IOV VF to VF 
mirroring

Hi Forrest,

Yes, in my email, I was precisely referring to the work around 
https://review.openstack.org/#/c/574477.
Now that we are wrapping up Rocky, I wanted to raise the visibility of this
spec. I am glad you noticed. This week we are going to cut our RC-1 and I
don't anticipate that we will have an RC-2 for Rocky. So starting next week,
let's go back to the spec and refine it, so we can start implementing in Stein
as soon as possible. Depending on how much progress we make in the spec, we
may need to schedule a discussion during the PTG in Denver, September 10 - 14,
in case face to face time is needed to reach an agreement. I know that Manjeet
is going to attend the PTG and he has already talked to me about this spec in
the recent past. So maybe Manjeet could be the conduit to represent this spec
in Denver, in case we need to talk about it there.

Best regards

Miguel

On Tue, Jul 31, 2018 at 4:12 AM, Zhao, Forrest
<forrest.z...@intel.com> wrote:
Hi Miguel,

In your mail “PTL candidacy for the Stein cycle”, it mentioned that “port 
mirroring for SR-IOV VF to VF mirroring” is within Stein goal.

Could you tell us where the design for this feature is being discussed?
Mailing list, IRC channel, weekly meeting, or somewhere else?

I was involved in its spec review at https://review.openstack.org/#/c/574477/; 
but it has not been updated for a while.

Thanks,
Forrest





[openstack-dev] New AUC Criteria

2018-08-03 Thread Amy Marrich
Are you an Active User Contributor (AUC)? Well, you may be and not even know
it! Historically, AUCs met the following criteria:

- Organizers of official OpenStack User Groups: from the Groups Portal
- Active members and contributors to functional teams and/or working groups
  (currently also manually calculated for WGs not using IRC): from IRC logs
- Moderators of any of the operators' official meet-up sessions: currently
  manually calculated
- Contributors to any repository under UC governance: from Gerrit
- Track chairs for OpenStack Summits: from the Track Chair tool
- Contributors to Superuser (articles, interviews, user stories, etc.): from
  the Superuser backend
- Active moderators on ask.openstack.org: from Ask OpenStack

In July, the User Committee (UC) voted to add the following criteria for
becoming an AUC in order to meet the needs of the evolving OpenStack
community. So in addition to the ways above, you can now earn AUC status by
meeting any of the following:

- User survey participants who completed a deployment survey
- Ops midcycle session moderators
- OpenStack Days organizers
- SIG members nominated by SIG leaders
- Active Women of OpenStack participants
- Active Diversity WG participants

So that's great, you have met the requirements to become an AUC, but what does
that mean? AUCs can run for open UC positions and can vote in the elections.
AUCs also receive a discounted $300 ticket for the OpenStack Summit, as well
as the coveted AUC insignia on your badge!

And remember, nominations for the User Committee open on Monday, August 6 and
end on August 17, with voting August 20 to August 24.

Amy Marrich (spotz)
User Committee


Re: [openstack-dev] [magnum] supported OS images and magnum spawn failures for Swarm and Kubernetes

2018-08-03 Thread Bogdan Katynski

> On 3 Aug 2018, at 13:46, Tobias Urdin  wrote:
> 
> Kubernetes:
> * Master etcd does not start because /run/etcd does not exist

This could be an issue with etcd rpm. With Systemd, /run is an in-memory tmpfs 
and is wiped on reboots.

We’ve come across a similar issue in mariadb rpm on CentOS 7: 
https://bugzilla.redhat.com/show_bug.cgi?id=1538066

If the etcd rpm only creates /run/etcd during installation, that directory will 
not survive reboots. The rpm should also drop a file in 
/usr/lib/tmpfiles.d/etcd.conf with contents similar to

d /run/etcd 0755 etcd etcd - -
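A minimal sketch of how the fields of such an entry break down (Python,
illustrative only; `parse_tmpfiles_line` is a hypothetical helper, not
systemd's actual parser, which also handles quoting, specifiers, and modifier
characters):

```python
def parse_tmpfiles_line(line):
    """Split a tmpfiles.d(5) entry into its seven named fields.

    Hypothetical helper for illustration only.
    """
    keys = ["type", "path", "mode", "user", "group", "age", "argument"]
    fields = line.split()
    # Trailing fields may be omitted; treat them as "-" (no value).
    fields += ["-"] * (len(keys) - len(fields))
    return dict(zip(keys, fields))

entry = parse_tmpfiles_line("d /run/etcd 0755 etcd etcd - -")
# "d" means: create the directory if it does not already exist --
# here /run/etcd, owned by etcd:etcd with mode 0755, recreated by
# systemd-tmpfiles on every boot even though /run is a tmpfs.
```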


--
Bogdan Katyński
freenode: bodgix









[openstack-dev] [nova][ptg] Stein PTG planning and Rocky retrospective etherpads

2018-08-03 Thread melanie witt

Howdy folks,

I think I forgot to send an email to alert everyone that we have a 
planning etherpad [1] for the Stein PTG where we're collecting topics of 
interest for discussion at the PTG.


Please add your topics and include your nick with your topics and 
comments so we know who to talk to about the topics.


In usual style, we also have a Rocky retrospective etherpad [2] where we 
can fill in "what went well" and "what went not so well" to discuss at 
the PTG and see if we've made improvements in areas of concern from last 
time and gather concrete actions we can take to improve going forward 
for things we are not doing as well as we could.


Cheers,
-melanie

[1] https://etherpad.openstack.org/p/nova-ptg-stein
[2] https://etherpad.openstack.org/p/nova-rocky-retrospective



[openstack-dev] [searchlight][tc] removing searchlight from governance

2018-08-03 Thread Doug Hellmann
Based on the fact that the Searchlight team missed the Rocky release
and Stein PTL elections, I have proposed a patch to remove the
project from governance. If the project is still being actively
maintained and someone wants to take over leadership, please let
us know here in this thread or on the patch.

Doug

https://review.openstack.org/#/c/588644/



[openstack-dev] [freezer][tc] removing freezer from governance

2018-08-03 Thread Doug Hellmann
Based on the fact that the Freezer team missed the Rocky release and
Stein PTL elections, I have proposed a patch to remove the project from
governance. If the project is still being actively maintained and
someone wants to take over leadership, please let us know here in this
thread or on the patch.

Doug

https://review.openstack.org/#/c/588645/



Re: [openstack-dev] [oslo] Proposing Zane Bitter as oslo.service core

2018-08-03 Thread Davanum Srinivas
+1 from me!
On Fri, Aug 3, 2018 at 12:58 PM Ben Nemec  wrote:
>
> Hi,
>
> Zane has been doing some good work in oslo.service recently and I would
> like to add him to the core team.  I know he's got a lot on his plate
> already, but he has taken the time to propose and review patches in
> oslo.service and has demonstrated an understanding of the code.
>
> Please respond with +1 or any concerns you may have.  Thanks.
>
> -Ben
>



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [oslo] Proposing Zane Bitter as oslo.service core

2018-08-03 Thread Ken Giusti
+1!

On Fri, Aug 3, 2018 at 12:58 PM, Ben Nemec  wrote:
> Hi,
>
> Zane has been doing some good work in oslo.service recently and I would like
> to add him to the core team.  I know he's got a lot on his plate already,
> but he has taken the time to propose and review patches in oslo.service and
> has demonstrated an understanding of the code.
>
> Please respond with +1 or any concerns you may have.  Thanks.
>
> -Ben
>



-- 
Ken Giusti  (kgiu...@gmail.com)



Re: [openstack-dev] [oslo] Proposing Zane Bitter as oslo.service core

2018-08-03 Thread Jay S Bryant



On 8/3/2018 11:58 AM, Ben Nemec wrote:

Hi,

Zane has been doing some good work in oslo.service recently and I 
would like to add him to the core team.  I know he's got a lot on his 
plate already, but he has taken the time to propose and review patches 
in oslo.service and has demonstrated an understanding of the code.


Please respond with +1 or any concerns you may have.  Thanks.

-Ben


Not an Oslo Core but wanted to share my +1.  :-)





Re: [openstack-dev] [oslo] Proposing Zane Bitter as oslo.service core

2018-08-03 Thread Doug Hellmann
Excerpts from Ben Nemec's message of 2018-08-03 11:58:29 -0500:
> Hi,
> 
> Zane has been doing some good work in oslo.service recently and I would 
> like to add him to the core team.  I know he's got a lot on his plate 
> already, but he has taken the time to propose and review patches in 
> oslo.service and has demonstrated an understanding of the code.
> 
> Please respond with +1 or any concerns you may have.  Thanks.
> 
> -Ben
> 

+1, and thanks, Zane!



Re: [openstack-dev] [oslo] Proposing Zane Bitter as oslo.service core

2018-08-03 Thread John Dennis

On 08/03/2018 12:58 PM, Ben Nemec wrote:

Hi,

Zane has been doing some good work in oslo.service recently and I would 
like to add him to the core team.  I know he's got a lot on his plate 
already, but he has taken the time to propose and review patches in 
oslo.service and has demonstrated an understanding of the code.


Please respond with +1 or any concerns you may have.  Thanks.


+1


--
John Dennis



[openstack-dev] [oslo] Proposing Zane Bitter as oslo.service core

2018-08-03 Thread Ben Nemec

Hi,

Zane has been doing some good work in oslo.service recently and I would 
like to add him to the core team.  I know he's got a lot on his plate 
already, but he has taken the time to propose and review patches in 
oslo.service and has demonstrated an understanding of the code.


Please respond with +1 or any concerns you may have.  Thanks.

-Ben



[openstack-dev] [release][ptl] Missing and forced releases

2018-08-03 Thread Sean McGinnis
Today the release team reviewed the rocky deliverables and their releases done
so far this cycle. There are a few areas of concern right now.

Unreleased cycle-with-intermediary
==
There is a much longer list than we would like to see of
cycle-with-intermediary deliverables that have not done any releases so far in
Rocky. These deliverables should not wait until the very end of the cycle to
release so that pending changes can be made available earlier and there are no
last minute surprises.

For owners of cycle-with-intermediary deliverables, please take a look at what
you have merged that has not been released and consider doing a release ASAP.
We are not far from the final deadline for these projects, but it would still
be good to do a release ahead of that to be safe.

Deliverables that miss the final deadline will be at risk of being dropped from
the Rocky coordinated release.

Unreleased client libraries
==
The following client libraries have not done a release:

python-cloudkittyclient
python-designateclient
python-karborclient
python-magnumclient
python-searchlightclient*
python-senlinclient
python-tricircleclient

The deadline for client library releases was last Thursday, July 26. This
coming Monday the release team will force a release on HEAD for these clients.

* python-searchlightclient is currently planned to be dropped due to
  searchlight itself not having met the minimum of two milestone releases
  during the Rocky cycle.

Missing milestone 3
===
The following projects missed tagging a milestone 3 release:

cinder
designate
freezer
mistral
searchlight

Following policy, a milestone 3 tag will be forced on HEAD for these
deliverables on Monday.

Freezer and searchlight missed previous milestone deadlines and will be dropped
from the Rocky coordinated release.

If there are any questions or concerns, please respond here or get ahold of
someone from the release management team in the #openstack-release channel.

--
Sean McGinnis (smcginnis)




Re: [openstack-dev] [release] Release countdown for week R-3, August 6-10

2018-08-03 Thread Sean McGinnis
On Fri, Aug 03, 2018 at 11:23:56AM -0500, Sean McGinnis wrote:
> -
> 

More information on deadlines since we appear to have some conflicting
information documented. According to the published release schedule:

https://releases.openstack.org/rocky/schedule.html#r-finalrc

we stated intermediary releases had to be done by the final RC date. So based
on that, cycle-with-intermediary projects have until August 20 to do their
final release.

Of course, doing before that deadline is highly encouraged to make sure there
are not any last minute problems to work through, if at all possible.

> 
> Upcoming Deadlines & Dates
> --
> 
> RC1 deadline: August 9
cycle-with-intermediary deadline: August 20

> 



[openstack-dev] [release] Release countdown for week R-3, August 6-10

2018-08-03 Thread Sean McGinnis
Development Focus
-

The Release Candidate (RC) deadline is this Thursday, the 9th. Work should be
focused on any release-critical bugs and wrapping up and remaining feature
work.

General Information
---

All cycle-with-milestones and cycle-with-intermediary projects should cut their
stable/rocky branch by the end of the week. This branch will track the Rocky
release.

Once stable/rocky has been created, master will be ready to switch to Stein
development. While master will no longer be frozen, please prioritize any work
necessary for completing Rocky plans. Please also keep in mind there will be
Rocky patches competing with any new Stein work to make it through the gate.

Changes can be merged into stable/rocky as needed if deemed necessary for an
RC2. Once Rocky is released, stable/rocky will also be ready for any stable
point releases. Whether fixing something for another RC, or in preparation of a
future stable release, fixes must be merged to master first, then backported to
stable/rocky.

Actions
---

cycle-with-milestones deliverables should post an RC1 to openstack/releases
using the version format X.Y.Z.0rc1 along with branch creation from this point.
The deliverable changes should look something like:

  releases:
- projects:
- hash: 90f3ed251084952b43b89a172895a005182e6970
  repo: openstack/example
  version: 1.0.0.0rc1
branches:
  - name: stable/rocky
location: 1.0.0.0rc1

Other cycle deliverables (not cycle-with-milestones) will look the same, but
with your normal versioning.
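As a quick illustration of the RC tag format mentioned above, here is a sketch
(hypothetical; this regex is not the release team's actual validation code) of
checking a version string against the X.Y.Z.0rcN pattern:

```python
import re

# Illustrative only: the X.Y.Z.0rcN release-candidate tag format
# described above, not the openstack/releases tooling itself.
RC_VERSION = re.compile(r"^\d+\.\d+\.\d+\.0rc\d+$")

print(bool(RC_VERSION.match("1.0.0.0rc1")))  # an RC1 tag matches
print(bool(RC_VERSION.match("1.0.0")))       # a normal release tag does not
```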

And another reminder, please add what highlights you want for your project team
in the cycle highlights:

http://lists.openstack.org/pipermail/openstack-dev/2017-December/125613.html


Upcoming Deadlines & Dates
--

RC1 deadline: August 9
Stein PTG: September 10-14

-- 
Sean McGinnis (smcginnis)



Re: [openstack-dev] [openstack-helm] [vote] Core Reviewer nomination for Chris Wedgwood

2018-08-03 Thread Richard Wellum
+1

On Fri, Aug 3, 2018 at 11:39 AM Steve Wilkerson 
wrote:

> +1
>
> On Fri, Aug 3, 2018 at 10:05 AM, MCEUEN, MATT  wrote:
>
>> OpenStack-Helm core reviewer team,
>>
>> I would like to nominate Chris Wedgwood as a core reviewer for
>> OpenStack-Helm.
>>
>> Chris is one of the most prolific reviewers in the OSH community, but
>> more importantly is a very thorough and helpful reviewer.  Many of my most
>> insightful reviews are thanks to him, and I know the same is true for many
>> other team members.  In addition, he is an accomplished OSH engineer and
>> has contributed features that run the gamut, including Ceph integration,
>> Calico support, Neutron configuration, Gating, and core Helm-Toolkit
>> functionality.
>>
>> Please consider this email my +1 vote.
>>
>> A +1 vote indicates that you are in favor of his core reviewer candidacy,
>> and a -1 is a veto.  Voting will be open for the next seven days (closing
>> 8/10) or until all OpenStack-Helm core reviewers cast their vote.
>>
>> Thank you,
>> Matt McEuen
>>
>>
>
>


Re: [openstack-dev] [sig][upgrades][ansible][charms][tripleo][kolla][airship] reboot or poweroff?

2018-08-03 Thread Adam Spiers

[Adding openstack-sigs list too; apologies for the extreme
cross-posting, but I think in this case the discussion deserves wide
visibility.  Happy to be corrected if there's a better way to handle
this.]

Hi James,

James Page  wrote:

Hi All

tl;dr we (the original founders) have not managed to invest the time to get
the Upgrades SIG booted - time to hit reboot or time to poweroff?


TL;DR response: reboot, absolutely no question!  My full response is
below.


Since Vancouver, two of the original SIG chairs have stepped down leaving
me in the hot seat with minimal participation from either deployment
projects or operators in the IRC meetings.  In addition I've only been able
to make every 3rd IRC meeting, so they have generally not being happening.

I think the current timing is not good for a lot of folk so finding a
better slot is probably a must-have if the SIG is going to continue - and
maybe moving to a monthly or bi-weekly schedule rather than the weekly slot
we have now.

In addition I need some willing folk to help with leadership in the SIG.
If you have an interest and would like to help please let me know!

I'd also like to better engage with all deployment projects - upgrades is
something that deployment tools should be looking to encapsulate as
features, so it would be good to get deployment projects engaged in the SIG
with nominated representatives.

Based on the attendance in upgrades sessions in Vancouver and
developer/operator appetite to discuss all things upgrade at said sessions
I'm assuming that there is still interest in having a SIG for Upgrades but
I may be wrong!

Thoughts?


As a SIG leader in a similar position (albeit with one other very
helpful person on board), let me throw my £0.02 in ...

With both upgrades and self-healing I think there is a big disparity
between supply (developers with time to work on the functionality) and
demand (operators who need the functionality).  And perhaps also the
high demand leads to a lot of developers being interested in the topic
whilst not having much spare time to help out.  That is probably why
we both see high attendance at the summit / PTG events but relatively
little activity in between.

I also freely admit that the inevitable conflicts with downstream
requirements mean that I have struggled to find time to be as
proactive with driving momentum as I had wanted, although I'm hoping
to pick this up again over the next weeks leading up to the PTG.  It
sounds like maybe you have encountered similar challenges.

That said, I strongly believe that both of these SIGs offer a *lot* of
value, and even if we aren't yet seeing the level of online activity
that we would like, I think it's really important that they both
continue.  If for no other reasons, the offline sessions at the
summits and PTGs are hugely useful for helping converge the community
on common approaches, and the associated repositories / wikis serve as
a great focal point too.

Regarding online collaboration, yes, building momentum for IRC
meetings is tough, especially with the timezone challenges.  Maybe a
monthly cadence is a reasonable starting point, or twice a month in
alternating timezones - but maybe with both meetings within ~24 hours
of each other, to reduce accidental creation of geographic silos.

Another possibility would be to offer "open clinic" office hours, like
the TC and other projects have done.  If the TC or anyone else has
established best practices in this space, it'd be great to hear them.

Either way, I sincerely hope that you decide to continue with the SIG,
and that other people step up to help out.  These things don't develop
overnight but it is a tremendously worthwhile initiative; after all,
everyone needs to upgrade OpenStack.  Keep the faith! ;-)

Cheers,
Adam



Re: [openstack-dev] [neutron] Port mirroring for SR-IOV VF to VF mirroring

2018-08-03 Thread Miguel Lavalle
Forrest, Manjeet,

Here you go: https://etherpad.openstack.org/p/neutron-stein-ptg

Best regards

On Wed, Aug 1, 2018 at 11:49 AM, Bhatia, Manjeet S <
manjeet.s.bha...@intel.com> wrote:

> Hi,
>
>
>
> Yes, we need to refine spec for sure, once a consensus is reached focus
> will be on implementation,
>
> Here’s implementation patch (WIP) https://review.openstack.org/#/c/584892/
> , we can’t really
>
> review the API part until the spec is finalized, but other stuff like config and
> common issues can
>
> still be pointed out and progress can be made until consensus on api is
> reached. Miguel, I think
>
> this will be added to etherpad for PTG discussions as well ?
>
>
>
> Thanks and Regards !
>
> Manjeet
>
>
>
>
>
>
>
>
>
> *From:* Miguel Lavalle [mailto:mig...@mlavalle.com]
> *Sent:* Tuesday, July 31, 2018 10:26 AM
> *To:* Zhao, Forrest <forrest.z...@intel.com>
> *Cc:* OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
> *Subject:* Re: [openstack-dev] [neutron] Port mirroring for SR-IOV VF to
> VF mirroring
>
>
>
> Hi Forrest,
>
>
>
> Yes, in my email, I was precisely referring to the work around
> https://review.openstack.org/#/c/574477. Now that we are wrapping up
> Rocky, I wanted to raise the visibility of this spec. I am glad you
> noticed. This week we are going to cut our RC-1 and I don't anticipate that
> we will have an RC-2 for Rocky. So starting next week, let's go back to
> the spec and refine it, so we can start implementing in Stein as soon as
> possible. Depending on how much progress we make in the spec, we may need
> to schedule a discussion during the PTG in Denver, September 10 - 14, in
> case face to face time is needed to reach an agreement. I know that Manjeet
> is going to attend the PTG and he has already talked to me about this spec
> in the recent past. So maybe Manjeet could be the conduit to represent this
> spec in Denver, in case we need to talk about it there
>
>
>
> Best regards
>
>
>
> Miguel
>
>
>
> On Tue, Jul 31, 2018 at 4:12 AM, Zhao, Forrest 
> wrote:
>
> Hi Miguel,
>
>
>
> In your mail “PTL candidacy for the Stein cycle”, it mentioned that “port
> mirroring for SR-IOV VF to VF mirroring” is within Stein goal.
>
>
>
> Could you tell where is the place to discuss the design for this feature?
> Mailing list, IRC channel, weekly meeting or others?
>
>
>
> I was involved in its spec review at https://review.openstack.org/#/c/574477/;
> but it has not been updated for a while.
>
>
>
> Thanks,
>
> Forrest
>
>
>
>
>


[openstack-dev] [neutron] Stein PTG etherpad

2018-08-03 Thread Miguel Lavalle
Dear Stackers,

I have started an etherpad to collect topic proposals to be discussed
during the PTG in Denver, September 10th - 14th:
https://etherpad.openstack.org/p/neutron-stein-ptg . Please feel free to
add your proposals under the "Proposed topics to be scheduled" section.
Please also sign in under the "Attendance at the PTG" if you plan to be in
Denver, indicating the days you will be there.

I am looking forward to seeing many of you in Denver and to a very
productive PTG!

Best regards

Miguel


Re: [openstack-dev] [openstack-helm] [vote] Core Reviewer nomination for Chris Wedgwood

2018-08-03 Thread Steve Wilkerson
+1

On Fri, Aug 3, 2018 at 10:05 AM, MCEUEN, MATT  wrote:

> OpenStack-Helm core reviewer team,
>
> I would like to nominate Chris Wedgwood as a core reviewer for
> OpenStack-Helm.
>
> Chris is one of the most prolific reviewers in the OSH community, but more
> importantly is a very thorough and helpful reviewer.  Many of my most
> insightful reviews are thanks to him, and I know the same is true for many
> other team members.  In addition, he is an accomplished OSH engineer and
> has contributed features that run the gamut, including Ceph integration,
> Calico support, Neutron configuration, Gating, and core Helm-Toolkit
> functionality.
>
> Please consider this email my +1 vote.
>
> A +1 vote indicates that you are in favor of his core reviewer candidacy,
> and a -1 is a veto.  Voting will be open for the next seven days (closing
> 8/10) or until all OpenStack-Helm core reviewers cast their vote.
>
> Thank you,
> Matt McEuen
>
>


Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-03 Thread Eric Fried
> I'm of two minds here.
> 
> On the one hand, you have the case where the end user has accidentally
> requested some combination of things that isn't normally available, and
> they need to be able to ask the provider what they did wrong.  I agree
> that this case is not really an exception, those resources were never
> available in the first place.
> 
> On the other hand, suppose the customer issues a valid request and it
> works, and then issues the same request again and it fails, leading to a
> violation of that customers SLA.  In this case I would suggest that it
> could be considered an exception since the system is not delivering the
> service that it was intended to deliver.

While the case can be made for this being an exception from *nova* (I'm
not getting into that), it is not an exception from the point of view of
*placement*. You asked a service "list the ways I can do X". The first
time, there were three ways. The second time, zero.

It would be like saying:

 # This is the "placement" part
 results = [x for x in l if <some-condition>]

 # It is up to the placement *consumer* (e.g. nova) to do this, or not
 if len(results) == 0:
     raise Something()

The hard point, which I'm not disputing, is that the end user needs a
way to understand *why* len(results) == 0.

efried
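The pattern Eric sketches can be made concrete. The following is an illustrative sketch only, not nova's actual code; the names (`NoValidHost`, `pick_candidate`) are borrowed loosely and the candidate list stands in for whatever GET /allocation_candidates returned:

```python
class NoValidHost(Exception):
    """Raised by the *consumer* of placement, never by placement itself."""


def pick_candidate(candidates):
    """Pick one result from a placement-style query.

    An empty list is a perfectly normal placement answer ("here are the
    zero ways you can do X"); it is the caller that decides zero results
    is an error worth raising.
    """
    if not candidates:
        raise NoValidHost("no allocation candidates matched the request")
    return candidates[0]


print(pick_candidate(["host1", "host2"]))  # host1
```

Under this split, the remaining problem is exactly the one noted above: giving the end user a way to see *why* the list came back empty.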

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-helm] [vote] Core Reviewer nomination for Chris Wedgwood

2018-08-03 Thread MCEUEN, MATT
OpenStack-Helm core reviewer team,

I would like to nominate Chris Wedgwood as a core reviewer for OpenStack-Helm.

Chris is one of the most prolific reviewers in the OSH community, but more 
importantly is a very thorough and helpful reviewer.  Many of my most 
insightful reviews are thanks to him, and I know the same is true for many 
other team members.  In addition, he is an accomplished OSH engineer and has 
contributed features that run the gamut, including Ceph integration, Calico 
support, Neutron configuration, Gating, and core Helm-Toolkit functionality.

Please consider this email my +1 vote.

A +1 vote indicates that you are in favor of his core reviewer candidacy, and a 
-1 is a veto.  Voting will be open for the next seven days (closing 8/10) or 
until all OpenStack-Helm core reviewers cast their vote.

Thank you,
Matt McEuen

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle] Tricircle or Trio2o

2018-08-03 Thread Andrea Franceschini
Hello Ling,

thank you for answering; I'm glad to see that the Trio2o project will be
revived in the near future.

Meanwhile, it would be nice to know what approach people use to deploy
multi-site OpenStack.

I mean, I've read somewhere about solutions using something like a
multi-site Heat, but I failed to dig into this as I couldn't find any
resources.

Thanks,

Andrea

Il giorno gio 2 ago 2018 alle ore 05:01 linghucongsong
 ha scritto:
>
> Hi Andrea!
> Yes, just as you said: tricircle now only works for the network part.
> Because trio2o is not an official OpenStack project, it has been a long
> time since anybody contributed to it.
> But in the upcoming OpenStack Stein cycle we plan to make tricircle and
> trio2o work together; see the tricircle Stein plan below:
> https://etherpad.openstack.org/p/tricircle-stein-plan
> After this is finished we can use tricircle and trio2o together and make
> multi-site OpenStack solutions more effective.
>
>
>
>
>
> At 2018-08-02 00:55:30, "Andrea Franceschini" 
>  wrote:
> >Hello All,
> >
> >While I was looking for multisite openstack solutions I stumbled on
> >Tricircle project which seemed fairly perfect for the job except that
> >l it was split in two parts, tricircle itself for the network part and
> >Trio2o for all the rest.
> >
> >Now it seems that the Trio2o project is no longer maintained  and I'm
> >wondering what other options exist for multisite openstack, stated
> >that tricircle seems more NFV oriented.
> >
> >Actually a heat multisite solution would work too, but I cannot find
> >any  reference to this kind of solutions.
> >
> >Do you have any idea/advice?
> >
> >Thanks,
> >
> >Andrea
> >
> >__
> >OpenStack Development Mailing List (not for usage questions)
> >Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] easily identifying how services are configured

2018-08-03 Thread Alex Schultz
On Thu, Aug 2, 2018 at 11:32 PM, Cédric Jeanneret  wrote:
>
>
> On 08/02/2018 11:41 PM, Steve Baker wrote:
>>
>>
>> On 02/08/18 13:03, Alex Schultz wrote:
>>> On Mon, Jul 9, 2018 at 6:28 AM, Bogdan Dobrelya 
>>> wrote:
 On 7/6/18 7:02 PM, Ben Nemec wrote:
>
>
> On 07/05/2018 01:23 PM, Dan Prince wrote:
>> On Thu, 2018-07-05 at 14:13 -0400, James Slagle wrote:
>>>
>>> I would almost rather see us organize the directories by service
>>> name/project instead of implementation.
>>>
>>> Instead of:
>>>
>>> puppet/services/nova-api.yaml
>>> puppet/services/nova-conductor.yaml
>>> docker/services/nova-api.yaml
>>> docker/services/nova-conductor.yaml
>>>
>>> We'd have:
>>>
>>> services/nova/nova-api-puppet.yaml
>>> services/nova/nova-conductor-puppet.yaml
>>> services/nova/nova-api-docker.yaml
>>> services/nova/nova-conductor-docker.yaml
>>>
>>> (or perhaps even another level of directories to indicate
>>> puppet/docker/ansible?)
>>
>> I'd be open to this but doing changes on this scale is a much larger
>> developer and user impact than what I was thinking we would be willing
>> to entertain for the issue that caused me to bring this up (i.e.
>> how to
>> identify services which get configured by Ansible).
>>
>> Its also worth noting that many projects keep these sorts of things in
>> different repos too. Like Kolla fully separates kolla-ansible and
>> kolla-kubernetes as they are quite divergent. We have been able to
>> preserve some of our common service architectures but as things move
>> towards kubernetes we may which to change things structurally a bit
>> too.
>
> True, but the current directory layout was from back when we
> intended to
> support multiple deployment tools in parallel (originally
> tripleo-image-elements and puppet).  Since I think it has become
> clear that
> it's impractical to maintain two different technologies to do
> essentially
> the same thing I'm not sure there's a need for it now.  It's also worth
> noting that kolla-kubernetes basically died because there wasn't enough
> people to maintain both deployment methods, so we're not the only
> ones who
> have found that to be true.  If/when we move to kubernetes I would
> anticipate it going like the initial containers work did -
> development for a
> couple of cycles, then a switch to the new thing and deprecation of
> the old
> thing, then removal of support for the old thing.
>
> That being said, because of the fact that the service yamls are
> essentially an API for TripleO because they're referenced in user

 this ^^

> resource registries, I'm not sure it's worth the churn to move
> everything
> either.  I think that's going to be an issue either way though, it's
> just a
> question of the scope.  _Something_ is going to move around no
> matter how we
> reorganize so it's a problem that needs to be addressed anyway.

 [tl;dr] I can foresee reorganizing that API becomes a nightmare for
 maintainers doing backports for queens (and the LTS downstream
 release based
 on it). Now imagine kubernetes support comes within those next a few
 years,
 before we can let the old API just go...

 I have an example [0] to share all that pain brought by a simple move of
 'API defaults' from environments/services-docker to
 environments/services
 plus environments/services-baremetal. Each time a file changes
 contents by
 its old location, like here [1], I had to run a lot of sanity checks to
 rebase it properly. Like checking for the updated paths in resource
 registries are still valid or had to/been moved as well, then picking
 the
 source of truth for diverged old vs changes locations - all that to
 loose
 nothing important in progress.

 So I'd say please let's do *not* change services' paths/namespaces in
 t-h-t
 "API" w/o real need to do that, when there is no more alternatives
 left to
 that.

>>> Ok so it's time to dig this thread back up. I'm currently looking at
>>> the chrony support which will require a new service[0][1]. Rather than
>>> add it under puppet, we'll likely want to leverage ansible. So I guess
>>> the question is where do we put services going forward?  Additionally
>>> as we look to truly removing the baremetal deployment options and
>>> puppet service deployment, it seems like we need to consolidate under
>>> a single structure.  Given that we don't want to force too much churn,
>>> does this mean that we should align to the docker/services/*.yaml
>>> structure or should we be proposing a new structure that we can try to
>>> align on.
>>>
>>> There is outstanding tech-debt around the nested stacks and references
>>> within these services when we 

Re: [openstack-dev] [stestr?][tox?][infra?] Unexpected success isn't a failure

2018-08-03 Thread Matthew Treinish
On Tue, Jul 10, 2018 at 03:16:14PM -0400, Matthew Treinish wrote:
> On Tue, Jul 10, 2018 at 10:16:37AM +0100, Chris Dent wrote:
> > On Mon, 9 Jul 2018, Matthew Treinish wrote:
> > 
> > > It's definitely  a bug, and likely a bug in stestr (or one of the lower 
> > > level
> > > packages like testtools or python-subunit), because that's what's 
> > > generating
> > > the return code. Tox just looks at the return code from the commands to 
> > > figure
> > > out if things were successful or not. I'm a bit surprised by this though I
> > > thought we covered the unxsuccess and xfail cases because I would have 
> > > expected
> > > cdent to file a bug if it didn't. Looking at the stestr tests we don't 
> > > have
> > > coverage for the unxsuccess case so I can see how this slipped through.
> > 
> > This was reported on testrepository some years ago and a bit of
> > analysis was done: https://bugs.launchpad.net/testrepository/+bug/1429196
> > 
> 
> This actually helps a lot, because I was seeing the same issue when I tried
> writing a quick patch to address this. When I manually poked the TestResult
> object it didn't have anything in the unxsuccess list. So instead of relying
> on that I wrote this patch:
> 
> https://github.com/mtreinish/stestr/pull/188
> 
> which uses the output filter's internal function for counting results to
> find unxsuccess tests. It's still not perfect though because if someone
> runs with the --no-subunit-trace flag it still doesn't work (because that
> call path never gets run) but it's at least a starting point. I've
> marked it as WIP for now, but I'm thinking we could merge it as is and
> leave the --no-subunit-trace and unxsuccess as a known issues for now,
> since xfail and unxsuccess are pretty uncommon in practice. (gabbi is the
> only thing I've seen really use it)
> 
> 
> 
> > So yeah, I did file a bug but it fell off the radar during those
> > dark times.
> > 
> 

Just following up here, after digging some more and getting a detailed
bug filed by electrofelix [1] I was able to throw together a different patch
that should solve this in a better way:

https://github.com/mtreinish/stestr/pull/190

Once that lands I can push a bugfix release to get it out there so people
can actually use the fix.

-Matt Treinish


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] How to debug no valid host failures with placement

2018-08-03 Thread Chris Friesen

On 08/02/2018 06:27 PM, Jay Pipes wrote:

On 08/02/2018 06:18 PM, Michael Glasgow wrote:



More generally, any time a service fails to deliver a resource which it is
primarily designed to deliver, it seems to me at this stage that should
probably be taken a bit more seriously than just "check the log file, maybe
there's something in there?"  From the user's perspective, if nova fails to
produce an instance, or cinder fails to produce a volume, or neutron fails to
build a subnet, that's kind of a big deal, right?

In such cases, would it be possible to generate a detailed exception object
which contains all the necessary info to ascertain why that specific failure
occurred?


It's not an exception. It's normal course of events. NoValidHosts means there
were no compute nodes that met the requested resource amounts.


I'm of two minds here.

On the one hand, you have the case where the end user has accidentally requested 
some combination of things that isn't normally available, and they need to be 
able to ask the provider what they did wrong.  I agree that this case is not 
really an exception, those resources were never available in the first place.


On the other hand, suppose the customer issues a valid request and it works, and 
then issues the same request again and it fails, leading to a violation of that 
customer's SLA.  In this case I would suggest that it could be considered an 
exception since the system is not delivering the service that it was intended to 
deliver.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [placement] placement update 18-31

2018-08-03 Thread Chris Dent


HTML: https://anticdent.org/placement-update-18-31.html

This is placement update 18-31, a weekly update of ongoing development 
related to the [OpenStack](https://www.openstack.org/) [placement 
service](https://developer.openstack.org/api-ref/placement/).


# Most Important

We are a week past feature freeze for the Rocky cycle, so finding
and fixing bugs through testing and watching launchpad remains the
big deal. Progress is also being made on making sure the Reshaper
stack (see below) and using consumer generations in the report
client are ready as soon as Stein opens.

# What's Changed

A fair few bug fixes and refactorings have merged in the past week,
thanks to everyone chipping in. The functional differences you might
see include:

* Writing allocations is retried server side up to ten times.
* Placement functional tests are using some of their own fixtures
  for output, log, and warning capture. This may lead to different
  output when tests fail. We should fix issues as they come up.
* Stats handling in the resource tracker is now per-node, meaning it
  is both more correct and more efficient.
* Resource provider generation conflict handling in the report
  client is much improved.
* When using force_hosts or force_nodes, limit is not used when
  doing GET /allocation_candidates.
* You can no longer use unexpected fields when writing allocations.
* The install guide has been updated to include instructions about
  the placement database.


# Bugs

* Placement related [bugs not yet in progress](https://goo.gl/TgiPXb):
   16, +2 from last week.
* [In progress placement bugs](https://goo.gl/vzGGDQ) 12, -1 on last
   week.

# Main Themes

## Documentation

Now that we are feature frozen we better document all the stuff. And
more than likely we'll find some bugs while doing that documenting.

Matt pointed out in response to last week's pupdate that the two
bullets that had been listed here are no longer valid because we
punted on most of the functionality (fully working shared and nested
providers) that needed the docs.

However, that doesn't mean we're in the clear. A good review of
existing docs is warranted.

## Consumer Generations

These are in place on the placement side. There's pending work on
the client side, and a semantic fix on the server side, but neither
are going to merge this cycle.

* 
   return 404 when no consumer found in allocs

* 
   Use placement 1.28 in scheduler report client
   (1.28 is consumer gens, which we hope to have ready for immediate
   Stein merge)

## Reshape Provider Trees

Work has restarted on framing in the use of the reshaper from the
compute manager. It won't merge for Rocky but we want it ready as
soon as Stein opens.

It's all at: 

## Extraction

A lot of test changes were made to prepare for the extraction of
placement. Most of the remaining "uses of nova" in placement are
things that will need to wait to post-extraction, but it is useful
and informative to look at imports as there are some things
remaining.

On the [PTG etherpad](https://etherpad.openstack.org/p/nova-ptg-stein)
I've proposed that we consider stopping forward feature progress on
Placement in Stein so that:

* We can give nova some time to catch up and find bugs in existing
  placement features.
* We can do the extraction and large backlog of refactoring work
  that we'd like to do.

That is at a list item of 'What does it take to declare placement
"done"?'

# Other

Going to start this list with the 5 that remain from the 11 (nice
work!) that were listed last week. After that will be anything else
I can find.

* 
Add unit test for non-placement resize

* 
Use placement.inventory.inuse in report client

* 
   Delete allocations when it is re-allocated
   (This is addressing a TODO in the report client)

* 
   Remove Ocata comments which expires now

* 
   Ignore some updates from virt driver

* 

  Neutron work related to minimum bandwidth handling with placement

* 
  Resource provider examples (in osc-placement)

* 
  Get resource provider by uuid or name (in osc-placement)

* 
  Provide a useful message in the case of 500-error (in
  osc-placement)

* 
  Add image link in README.rst (in osc-placement)

* 
  Random names for [osc-placement] functional tests

* 
  Fix nits in resource_provider.py

[openstack-dev] [keystone] Keystone Team Update - Week of 30 July 2018

2018-08-03 Thread Lance Bragstad
# Keystone Team Update - Week of 30 July 2018

## News

This week was relatively quiet, but we're working towards RC1 as our
next deadline.

## Recently Merged Changes

Search query: https://bit.ly/2IACk3F

We merged 20 changes this week.

Mainly changes to continue moving APIs to flask and we landed a huge
token provider API refactor.

## Changes that need Attention

Search query: https://bit.ly/2wv7QLK

There are 43 changes that are passing CI, not in merge conflict, have no
negative reviews and aren't proposed by bots.

Reminder that we're in soft string freeze and past the 3rd milestone so
prioritizing bug fixes is beneficial.

## Bugs

This week we opened 4 new bugs, closed 1, and fixed 3.

The main concern with
fixing https://bugs.launchpad.net/keystone/+bug/1778945 was that it would
impact downstream providers, hence the release note. Otherwise it's
cleaned up a ton of technical debt (I appreciate the reviews here).

## Milestone Outlook

This upcoming week is going to be RC1, which we will plan to cut by
Friday unless critical bugs emerge. We do have a list of bugs to target
to RC, but none of them are blockers. If it comes down to it, they can
likely be pushed to Stein. If you notice anything that comes up as a
release blocker, please let me know.

https://bit.ly/2MeXN0L
https://releases.openstack.org/rocky/schedule.html

## Help with this newsletter

Help contribute to this newsletter by editing the
etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter
Dashboard generated using gerrit-dash-creator
and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum] supported OS images and magnum spawn failures for Swarm and Kubernetes

2018-08-03 Thread Tobias Urdin

Hello,

I'm testing around with Magnum and have so far only had issues.
I've tried deploying Docker Swarm (on Fedora Atomic 27, Fedora Atomic 
28) and Kubernetes (on Fedora Atomic 27) and haven't been able to get it 
working.


Running Queens, is there any information about supported images? Is 
Magnum still maintained to support Fedora Atomic?
What is in charge of populating the certificates inside the instances?
This seems to be the root of all the issues. I'm not using Barbican but
the x509keypair driver; is that the reason?

Perhaps I missed some documentation saying that x509keypair does not
support what I'm trying to do?


I've seen the following issues:

Docker:
* Master does not start and listen on TCP because of certificate issues
dockerd-current[1909]: Could not load X509 key pair (cert: 
"/etc/docker/server.crt", key: "/etc/docker/server.key")


* Node does not start with:
Dependency failed for Docker Application Container Engine.
docker.service: Job docker.service/start failed with result 'dependency'.

Kubernetes:
* Master etcd does not start because /run/etcd does not exist
** When that is created it fails to start because of certificate
2018-08-03 12:41:16.554257 C | etcdmain: open 
/etc/etcd/certs/server.crt: no such file or directory


* Master kube-apiserver does not start because of certificate
unable to load server certificate: open 
/etc/kubernetes/certs/server.crt: no such file or directory


* Master heat script just sleeps forever waiting for port 8080 to become 
available (kube-apiserver) so it can never kubectl apply the final steps.


* Node does not even start and times out when Heat deploys it, probably 
because master never finishes


Any help is appreciated perhaps I've missed something crucial, I've not 
tested Kubernetes on CoreOS yet.


Best regards
Tobias
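For what it's worth, a minimal sketch of the relevant magnum.conf section, assuming the non-Barbican setup described above (option name taken from the magnum configuration reference; double-check it against your Queens install):

```
[certificates]
# Store cluster CA/server certificates in the magnum database instead
# of Barbican; a "local" (files on the conductor host) backend also exists.
cert_manager_type = x509keypair
```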

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Guests not getting metadata in a Cellsv2 deploy

2018-08-03 Thread Liam Young
fwiw this appears to be due to a bug in nova. I've raised
https://bugs.launchpad.net/nova/+bug/1785235 and proposed a fix
https://review.openstack.org/588520

On Thu, Aug 2, 2018 at 5:47 PM Liam Young  wrote:

> Hi,
>
> I have a fresh pike deployment and the guests are not getting metadata. To
> investigate it further it would really help me to understand what the
> metadata flow is supposed to look like.
>
> In my deployment the guest receives a 404 when hitting
> http://169.254.169.254/latest/meta-data. I have added some logging to
> expose the messages passing via amqp and I see the nova-api-metadata
> service making a call to the super-conductor asking for an InstanceMapping.
> The super-conductor sends a reply detailing which cell the instance is in
> and the urls for both mysql and rabbit. The nova-api-metadata service then
> sends a second message to the superconductor this time asking for
> an Instance obj. The super-conductor fails to find the instance and returns
> a failure with a "InstanceNotFound: Instance  could not be found"
> message, the  nova-api-metadata service then sends a 404 to the original
> requester.
>
> I think the super-conductor is looking in the wrong database for the
> instance information. I believe it is looking in cell0 when it should
> actually be connecting to an entirely different instance of mysql which is
> associated with the cell that the instance is in.
>
> Should the super-conductor even be trying to retrieve the instance
> information or should the nova-api-metadata service actually be messaging
> the conductor in the compute cell?
>
> Any pointers gratefully received!
> Thanks
> Liam
>
>
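The cells v2 lookup described in this thread can be sketched in miniature. This is illustrative pseudocode, not nova's real objects, but it shows why targeting the wrong cell database yields InstanceNotFound:

```python
class InstanceNotFound(Exception):
    pass


# Toy stand-ins for the API database's instance_mappings table and the
# per-cell databases (cell0 holds only instances that failed to schedule).
INSTANCE_MAPPINGS = {"abc-123": "cell1"}
CELL_DBS = {
    "cell0": {},
    "cell1": {"abc-123": {"hostname": "vm1"}},
}


def get_instance(uuid):
    cell = INSTANCE_MAPPINGS[uuid]  # step 1: the API DB says which cell
    db = CELL_DBS[cell]             # step 2: query *that* cell's database
    try:
        return db[uuid]
    except KeyError:
        raise InstanceNotFound(uuid)


# Looking in the mapped cell succeeds; querying cell0 instead (the
# misbehaviour reported above) would find nothing and raise.
print(get_instance("abc-123")["hostname"])  # vm1
```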
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Clearing out old gerrit reviews

2018-08-03 Thread Adriano Petrich
Same.

On 3 August 2018 at 13:15, Renat Akhmerov  wrote:

> Dougal, the policy looks good for me. I gave it the second +2 but didn’t
> approve yet so that others could also review (e.g. Adriano and Vitalii).
>
> Thanks
>
> Renat Akhmerov
> @Nokia
> On 3 Aug 2018, 16:46 +0700, Dougal Matthews , wrote:
>
> On 9 July 2018 at 16:13, Dougal Matthews  wrote:
>
>> Hey folks,
>>
>> I'd like to propose that we start abandoning old Gerrit reviews.
>>
>> This report shows how stale and out of date some of the reviews are:
>> http://stackalytics.com/report/reviews/mistral-group/open
>>
>> I would like to initially abandon anything without any activity for a
>> year, but we might want to consider a shorter limit - maybe 6 months.
>> Reviews can be restored, so the risk is low.
>>
>> What do you think? Any objections or counter suggestions?
>>
>> If I don't hear any complaints, I'll go ahead with this next week (or
>> maybe the following week).
>>
>
> That time line was ambitious. I didn't get started :-)
>
> However, I did decide it would be best to formalise this plan somewhere.
> So I quickly wrote up the plan in a Mistral policy spec. If we can agree
> there and merge it, then I'll go ahead and start the cleanup.
>
> https://review.openstack.org/#/c/588492/
>
>
>
>>
>> Cheers,
>> Dougal
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Clearing out old gerrit reviews

2018-08-03 Thread Renat Akhmerov
Dougal, the policy looks good to me. I gave it the second +2 but didn't 
approve yet so that others could also review (e.g. Adriano and Vitalii).

Thanks

Renat Akhmerov
@Nokia
On 3 Aug 2018, 16:46 +0700, Dougal Matthews , wrote:
> > On 9 July 2018 at 16:13, Dougal Matthews  wrote:
> > > Hey folks,
> > >
> > > I'd like to propose that we start abandoning old Gerrit reviews.
> > >
> > > This report shows how stale and out of date some of the reviews are:
> > > http://stackalytics.com/report/reviews/mistral-group/open
> > >
> > > I would like to initially abandon anything without any activity for a 
> > > year, but we might want to consider a shorter limit - maybe 6 months. 
> > > Reviews can be restored, so the risk is low.
> > >
> > > What do you think? Any objections or counter suggestions?
> > >
> > > If I don't hear any complaints, I'll go ahead with this next week (or 
> > > maybe the following week).
> >
> > That time line was ambitious. I didn't get started :-)
> >
> > However, I did decide it would be best to formalise this plan somewhere. So 
> > I quickly wrote up the plan in a Mistral policy spec. If we can agree there 
> > and merge it, then I'll go ahead and start the cleanup.
> >
> > https://review.openstack.org/#/c/588492/
> >
> >
> > >
> > > Cheers,
> > > Dougal
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Removing Inactive Cores

2018-08-03 Thread Renat Akhmerov
Lingxian, you are welcome back as an active contributor any time you wish! :) I 
want to thank you for all the contributions and achievements you made for our 
project!

Renat Akhmerov
@Nokia
On 3 Aug 2018, 17:14 +0700, Lingxian Kong , wrote:
> +1 for me, i am still watching mistral :-)
>
> Cheers,
> Lingxian Kong
>
>
> > On Fri, Aug 3, 2018 at 9:58 PM Dougal Matthews  wrote:
> > > Hey,
> > >
> > > As we are approaching the end of Rocky I am doing some house keeping.
> > >
> > > The people below have been removed from the Mistral core team due to 
> > > reviewing inactivity in the last 180 days[1]. I would like to thank them 
> > > for their contributions and they are welcome to re-join the Mistral core 
> > > team if they become active in the future.
> > >
> > > - Lingxian Kong
> > > - Winson Chan
> > >
> > > [1] http://stackalytics.com/report/contribution/mistral-group/180
> > >
> > > Thanks,
> > > Dougal
> > > __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Blazar] PTL non candidacy

2018-08-03 Thread Pierre Riteau
Hi Masahito,

Thank you very much for leading the Blazar project successfully! We
wouldn't have accomplished so much without your dedication.

Pierre

On 31 July 2018 at 11:58, Masahito MUROI  wrote:
> Hi Blazar folks,
>
> I just want to announce that I'm not running for PTL for the Stein cycle. I
> have held this position since the Ocata cycle, when we revived the
> project.  We've done lots of successful activities in the last 4
> cycles.
>
> I think it's time to change the position to someone else to move the Blazar
> project further forward. I'll still be around the project and try to make
> the Blazar project great.
>
> Thanks for lots of your supports.
>
> best regards,
> Masahito
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposing Lukas Bezdicka core on TripleO

2018-08-03 Thread Sergii Golovatiuk
+1

On Thu, Aug 2, 2018 at 7:45 AM, Marios Andreou  wrote:
> +1 !
>
>
>
> On Wed, Aug 1, 2018 at 2:31 PM, Giulio Fidente  wrote:
>>
>> Hi,
>>
>> I would like to propose Lukas Bezdicka core on TripleO.
>>
>> Lukas did a lot of work in our tripleoclient, tripleo-common and
>> tripleo-heat-templates repos to make FFU possible.
>>
>> FFU, which is meant to permit upgrades from Newton to Queens, requires
>> in depth understanding of many TripleO components (for example Heat,
>> Mistral and the TripleO client) but also of specific TripleO features
>> which were added during the course of the three releases (for example
>> config-download and upgrade tasks). I believe his FFU work to have been
>> very challenging.
>>
>> Given his broad understanding, more recently Lukas started helping doing
>> reviews in other areas.
>>
>> I am so sure he'll be a great addition to our group that I am not even
>> looking for comments, just votes :D
>> --
>> Giulio Fidente
>> GPG KEY: 08D733BA
>>



-- 
Best Regards,
Sergii Golovatiuk



Re: [openstack-dev] [mistral] Removing Inactive Cores

2018-08-03 Thread Lingxian Kong
+1 for me, I am still watching Mistral :-)

Cheers,
Lingxian Kong


On Fri, Aug 3, 2018 at 9:58 PM Dougal Matthews  wrote:

> Hey,
>
> As we are approaching the end of Rocky I am doing some house keeping.
>
> The people below have been removed from the Mistral core team due to
> reviewing inactivity in the last 180 days[1]. I would like to thank them
> for their contributions and they are welcome to re-join the Mistral core
> team if they become active in the future.
>
> - Lingxian Kong
> - Winson Chan
>
> [1] http://stackalytics.com/report/contribution/mistral-group/180
>
> Thanks,
> Dougal


[openstack-dev] [mistral] Removing Inactive Cores

2018-08-03 Thread Dougal Matthews
Hey,

As we are approaching the end of Rocky I am doing some house keeping.

The people below have been removed from the Mistral core team due to
reviewing inactivity in the last 180 days[1]. I would like to thank them
for their contributions and they are welcome to re-join the Mistral core
team if they become active in the future.

- Lingxian Kong
- Winson Chan

[1] http://stackalytics.com/report/contribution/mistral-group/180

Thanks,
Dougal


Re: [openstack-dev] [mistral] Clearing out old gerrit reviews

2018-08-03 Thread Dougal Matthews
On 9 July 2018 at 16:13, Dougal Matthews  wrote:

> Hey folks,
>
> I'd like to propose that we start abandoning old Gerrit reviews.
>
> This report shows how stale and out of date some of the reviews are:
> http://stackalytics.com/report/reviews/mistral-group/open
>
> I would like to initially abandon anything without any activity for a
> year, but we might want to consider a shorter limit - maybe 6 months.
> Reviews can be restored, so the risk is low.
>
> What do you think? Any objections or counter suggestions?
>
> If I don't hear any complaints, I'll go ahead with this next week (or
> maybe the following week).
>

That timeline was ambitious. I didn't get started :-)

However, I did decide it would be best to formalise this plan somewhere. So
I quickly wrote up the plan in a Mistral policy spec. If we can agree there
and merge it, then I'll go ahead and start the cleanup.

https://review.openstack.org/#/c/588492/



>
> Cheers,
> Dougal
>
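For anyone curious how such a cleanup could be scripted: Gerrit's REST API can list open changes with no activity in a given period via the `age:` query operator. A minimal sketch (the project name and the one-year threshold are illustrative, matching the proposal above):

```python
import json

GERRIT_BASE = "https://review.openstack.org"  # Gerrit instance from this thread

def stale_changes_url(project, age="1y"):
    """Build a Gerrit REST query URL for open changes not updated in `age`."""
    query = "project:{} status:open age:{}".format(project, age)
    return "{}/changes/?q={}".format(GERRIT_BASE, query.replace(" ", "+"))

def parse_gerrit_response(body):
    """Gerrit prefixes JSON bodies with ")]}'" to defeat XSSI; strip it."""
    return json.loads(body.split("\n", 1)[1])

# Example: the URL a cleanup script would fetch before abandoning changes
url = stale_changes_url("openstack/mistral")
```

A script built on this would still call Gerrit's abandon endpoint per change; since abandoned reviews can be restored, the risk stays low, as noted above.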


Re: [openstack-dev] [i18n] Edge and Containers whitepapers ready for translation

2018-08-03 Thread Frank Kloeker

Hi Jimmy,

thanks for the announcement. Great stuff! It looks really great and it's 
easy to navigate. A special thanks goes to Sebastian for designing the 
pages. One small remark: have you tried text-align: justify? I think it 
would be a little bit more readable, like a science paper (the German 
word is: Ordnung).
I put the projects on the frontpage of the translation platform again, 
so we'll get more translations shortly.


kind regards

Frank

Am 2018-08-02 21:07, schrieb Jimmy McArthur:

The Edge and Containers translations are now live.  As new
translations become available, we will add them to the page.

https://www.openstack.org/containers/
https://www.openstack.org/edge-computing/

Note that the Chinese translation has not been added to Zanata at this
time, so I've left the PDF download up on that page.

Thanks everyone and please let me know if you have questions or 
concerns!


Cheers!
Jimmy

Jimmy McArthur wrote:

Frank,

We expect to have these papers up this afternoon. I'll update this 
thread when we do.


Thanks!
Jimmy

Frank Kloeker wrote:

Hi Sebastian,

Okay, it's translated now. In the Edge whitepaper there is a problem with 
XML parsing of the term AT; I don't know how to escape it. Maybe you will 
see the warning during import too.


kind regards

Frank
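As a side note, this class of import warning usually comes from an unescaped XML special character in a source or translated string: XML requires bare `&`, `<` and `>` to be written as entities. A quick sanity check in Python (the sample strings are illustrative, not the actual term from the whitepaper):

```python
from xml.sax.saxutils import escape

# XML parsers reject a bare "&", "<" or ">" inside text content;
# escape() rewrites them as entities so the string survives import.
print(escape("research & development"))   # research &amp; development
print(escape("a < b"))                    # a &lt; b
```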

Am 2018-07-30 20:09, schrieb Sebastian Marcet:

Hi Frank,
I was double-checking the pot file and realized that the original pot missed
some parts of the original paper (subsections of the paper); apologies for
that. I just re-uploaded an updated pot file with the missing subsections.

regards

On Mon, Jul 30, 2018 at 2:20 PM, Frank Kloeker  
wrote:



Hi Jimmy,

from the GUI I'll get this link:


https://translate.openstack.org/rest/file/translation/edge-computing/pot-translation/de/po?docId=cloud-edge-computing-beyond-the-data-center

[1]

The paper version is only in the container whitepaper:



https://translate.openstack.org/rest/file/translation/leveraging-containers-openstack/paper/de/po?docId=leveraging-containers-and-openstack

[2]

In general, there is no group named "papers".

kind regards

Frank
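As a quick illustration of how those download links are put together (the path shape is copied from the working URLs above; treat this as a sketch, not official Zanata API documentation):

```python
def zanata_po_url(base, project, version, locale, doc_id):
    """Build a Zanata REST URL for a translated po file.

    The /rest/file/translation/... path shape follows the links in this
    thread; the parameter names here are illustrative.
    """
    return "{}/rest/file/translation/{}/{}/{}/po?docId={}".format(
        base, project, version, locale, doc_id)

# Reproduces the edge-computing link quoted above
url = zanata_po_url("https://translate.openstack.org",
                    "edge-computing", "pot-translation", "de",
                    "cloud-edge-computing-beyond-the-data-center")
```

The 404s below suggest the project/version segments being requested ("papers/papers") simply don't exist on this instance, which matches Frank's note that there is no such group.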

Am 2018-07-30 17:06, schrieb Jimmy McArthur:
Frank,

We're getting a 404 when looking for the pot file on the Zanata 
API:



https://translate.openstack.org/rest/file/translation/papers/papers/de/po?docId=edge-computing

[3]

As a result, we can't pull the po files.  Any idea what might be
happening?

Seeing the same thing with both papers...

Thank you,
Jimmy

Frank Kloeker wrote:
Hi Jimmy,

Korean and German version are now done on the new format. Can you
check publishing?

thx

Frank

Am 2018-07-19 16:47, schrieb Jimmy McArthur:
Hi all -

Follow up on the Edge paper specifically:


https://translate.openstack.org/iteration/view/edge-computing/pot-translation/documents?dswid=-3192

[4] This is now available. As I mentioned on IRC this morning, it
should
be VERY close to the PDF.  Probably just needs a quick review.

Let me know if I can assist with anything.

Thank you to i18n team for all of your help!!!

Cheers,
Jimmy

Jimmy McArthur wrote:
Ian raises some great points :) I'll try to address below...

Ian Y. Choi wrote:
Hello,

Looking at the overall translation source strings for the container
whitepaper, I infer that the new edge computing whitepaper
source strings will include HTML markup tags.
One of the things I discussed with Ian and Frank in Vancouver is
the expense of recreating PDFs with new translations.  It's
prohibitively expensive for the Foundation as it requires design
resources which we just don't have.  As a result, we created the
Containers whitepaper in HTML, so that it could be easily updated
w/o working with outside design contractors.  I indicated that we
would also be moving the Edge paper to HTML so that we could prevent
that additional design resource cost.
On the other hand, the source strings of the edge computing whitepaper
which the I18n team previously translated do not include HTML markup
tags, since those source strings are plain text.
The version that Akihiro put together was based on the Edge PDF,
which we unfortunately didn't have the resources to implement in the
same format.

I really appreciate Akihiro's work on RST-based support on
publishing translated edge computing whitepapers, since
translators do not have to re-translate all the strings.
I would like to second this. It took a lot of initiative to work on
the RST-based translation.  At the moment, it's just not usable for
the reasons mentioned above.
On the other hand, it seems that the I18n team needs to investigate
translating similar strings in the HTML-based edge computing whitepaper
source, which would discourage translators.
Can you expand on this? I'm not entirely clear on why the HTML
based translation is more difficult.

That's my point of view on translating edge computing whitepaper.

For translating container whitepaper, I want to further ask the
followings since *I18n-based tools*
would mean for translators that translators can test and 

Re: [openstack-dev] [kolla] ptl non candidacy

2018-08-03 Thread Goutham Pratapa
Hi Jeffrey,

Thank you for your works as a PTL in OpenStack-kolla.

You were always friendly, helpful, and easy to approach.

Thank you for all the help and support.

Thanks
Goutham Pratapa.

On Fri, Aug 3, 2018 at 1:10 PM, Ha Quang, Duong 
wrote:

> Hi Jeffrey,
>
> Thank you for your work as PTL in the Rocky cycle and as release liaison
> since many cycles ago (when I joined the Kolla community, you were already
> release liaison).
>
> Hope that we still see you around then.
>
> Regards,
> Duong
>
>
> > From: Jeffrey Zhang [mailto:zhang.lei@gmail.com]
> > Sent: Wednesday, July 25, 2018 10:48 AM
> > To: OpenStack Development Mailing List  openstack.org>
> > Subject: [openstack-dev] [kolla] ptl non candidacy
> >
> > Hi all,
> >
> > I just want to say I am not running for PTL for the Stein cycle. I have
> > been involved in the Kolla project for almost 3 years. And recently my
> > work has changed a little, too, so I may not have much time in the
> > community in the future. Kolla is a great project and the community is
> > also awesome. I would encourage everyone in the community to consider
> > running.
>
> > Thanks for your support :D.
> > --
> > Regards,
> > Jeffrey Zhang
> > Blog: http://xcodest.me



-- 
Cheers !!!
Goutham Pratapa


Re: [openstack-dev] [kolla] ptl non candidacy

2018-08-03 Thread Ha Quang, Duong
Hi Jeffrey,

Thank you for your work as PTL in the Rocky cycle and as release liaison
since many cycles ago (when I joined the Kolla community, you were already
release liaison).

Hope that we still see you around then.

Regards,
Duong 


> From: Jeffrey Zhang [mailto:zhang.lei@gmail.com] 
> Sent: Wednesday, July 25, 2018 10:48 AM
> To: OpenStack Development Mailing List 
> Subject: [openstack-dev] [kolla] ptl non candidacy
> 
> Hi all,
> 
> I just want to say I am not running for PTL for the Stein cycle. I have been 
> involved in the Kolla project for almost 3 years. And recently my work has 
> changed a little, too, so I may not have much time in the community in the 
> future. Kolla is a great project and the community is also awesome. I would 
> encourage everyone in the community to consider running.

> Thanks for your support :D.
> -- 
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me