[openstack-dev] [First Contact] SIG PTG Summary

2018-09-19 Thread Kendall Nelson
Hello Everyone!

The first half of the week was particularly busy and I know a lot of people
wanted to come to the First Contact SIG room but could not, or couldn’t stay
for the whole time. So! For those of you who couldn’t make it, here is a
beautiful summary :)

And the etherpad if you want that too[1].

State of the Union

==

We started off with a state of the union on all of the outreach/new
contributor groups to get everyone involved on the same page, so we can
better disseminate the info to newcomers and potential contributors.

Outreachy

--

The winter round of applications for mentees opens on the 19th and remains
open until October 30th. Two Outreachy internships are sponsored for this
round[2]. Near the end of this round of internships (they end in March), the
next round will kick off sponsorships and applications for projects in
February.

Google Summer of Code

-

We applied and were not accepted last year. We would like to urge the
community to apply again. There’s no info yet on when applications will open
for organisations to apply for interns.

OpenStack Upstream Institute

----------------------------

We continue to offer this before each summit and at a variety of OpenStack
Day and OpenInfra Day events. Over the last year we have held the training
in Vancouver, Seoul, Krakow, Sao Paulo, and Rio de Janeiro. In October, the
training will be held at OpenStack Day Nordics in Stockholm[3] and
preceding the summit in Berlin[4]. A modified version will also be held at
the Open Source Day at the upcoming Grace Hopper Conference in Houston[5].

Cohort Mentoring Program

---

Formerly the Long-term Mentoring Program organized by the Women of
OpenStack, the mentoring program has changed hands and gotten a facelift.
The program is now cohort style (groups of mentees and mentors focusing on
a single goal, like passing the COA or getting your first patch merged)
rather than the loosely timeboxed 1:1 model. It’s also now organized and
mediated by the Diversity Working Group. If you are interested in getting
involved, more details are here[6].


Organization Guide

===

Back in Sydney, we started discussing creating a guide for organizations to
educate them on what their contributors need to be effective and successful
members of the community. There is a patch out for review right now[7] that
we want to get merged as soon as possible, so that we can publicize it in
Berlin and start introducing it when new companies join the Foundation. We
concluded that we needed to add more rationale for the requirements, and we
delegated that out to ricolin, jungleboyj, and spotz to help mattoliverau
with content. I volunteered to get the guide onto the agenda of the soonest
board meeting possible once the patch merges.

Operator Inclusion/ Ops Feedback

==

Unfortunately, many operators were in the Operators room (the two rooms got
scheduled to overlap), but we did have a few representatives join us.
Basically, we concluded that the operator section of the Contributor Guide
is wholly unattractive as it’s a daunting outline of a bunch of things
whose importance to operators isn’t immediately obvious. It needs to be
broken up, and the ‘Allows you to’ subsections of each part of the outline
need to be moved up to the top level so that operators can more easily see
and understand why each section is important. There are a few other
cosmetic things that also need to be resolved; more details are in the
etherpad from Tuesday’s discussions, lines 49-62[1].

The operators are also currently trying to get their docs up and running
again after having been unsupported, partially migrated to wikis, and then
moved back to a repository. Once these are a little more fleshed out and
settled, we will link to them from the Contributor Guide.

I also attended the Operators’ room on Monday and put out a call for a
single point of contact, like a project liaison, so that any operators we
see asking for help or resources can be directed to that person. No one
stepped up during the meeting, but it’s still something we see as being
important, and we will keep pushing to get one or two names to direct new
operators to.

Forum Proposals

=

Submissions are now open! We have an etherpad from the brainstorming
period[8], but basically we want a forum session that focuses on the
operator section of the Contributor Guide, and jimmymacarthur volunteered
to write it up and submit it.

The only other topic we discussed around Berlin was a Meet & Greet-style
room, which we will request during the summit’s call for BoFs. That call
recently went out, and I will put in the request by the end of the week.

Translation of the Contributor Guide

===

Basically, we want it translated- code & 

[openstack-dev] [heat] We need more help for actions and review. And some PTG update for Heat

2018-09-19 Thread Rico Lin
Dear all

As we reach Stein and start to discuss what we should do in the next cycle,
I would like to raise my voice about what kind of help we need and what
targets we are planning.
BTW, don't hesitate to contact us whatever your English level; we really
don't mind what English skill you have.

*First of all, we need more developers and reviewers. I would very much
like to give the Heat core reviewer title to anyone who provides a fair
quality of reviews. So please help us review patches, and let me know if
you would like to be a part of the team but have no clue how to get
started.*

Second, we need more help to complete action items. Here is a list of
actions based on what we discussed at the PTG [1]. I mark some of them with
(*) if they look like an easy contribution:

   - (*) Move interop tempest tests to a separate repo
   - Move python3 functional job to python3.6
   - (*) Implement upgrade check
   - (*) Copy templates from Cue project into the heat-templates repo
   - (*) Add Stein template versions
   - (*) Improve documentation or add documents for:
  - (*) Heat event notification list
 - nice to have our own document, with a link to [2]
 - the default Heat service doesn't enable notifications, so this
 should be mentioned, with a link to the Notify page
  - (*) Autoscaling doc
  - (*) AutoHealing doc
  - (*) heat agent & heat container agent
  - (*) external resource
  - (*) Upgrade guideline
  - (*) Move documents from the wiki into in-repo docs
   - (*) Fix the live properties (observe reality) feature and make sure
   all resources work
   - Remove any legacy patterns from .zuul.yaml
   - Improve autoscaling and self-healing
   - Create a Tempest test for the self-healing scenario (around Heat
   integration)
   - (*) Examine all resource types and help update any that are not in
   sync with the physical resource

If you would like to learn more about any of the above tasks, just reach
out to me or other core members; we're more than happy to give you the
background and guidance for any of them. Also, you are welcome to join our
meeting and raise topics about any tasks.
We actually have more tasks that need to be done (not listed here because
they are already being implemented or still under planning), so if you
don't see an interesting task above, reach out to me and let me know which
specific area you're interested in. You might also want to go through [1]
or talk to other team members to see whether more comments have been added
before you start working on any task.

Now here are some targets that we have started to discuss or that are works
in progress:

   - Multi-cloud support
   - Within [5], we propose the ability to do multi-cloud orchestration.
  The follow-on discussion is how we can provide customized SSL options
  for multi-cloud or multi-region orchestration without violating any
  security concerns. What we plan to do now (after discussing with the
  Security SIG and the Barbican team) is to support only a cacert for
  SSL, which is less sensitive: use a template file to store that cacert
  and give it to the keystone session to provide SSL for connections. If
  that sounds like a good idea to all, without much concern, I will
  implement it ASAP.
   - Autoscaling and self-healing improvement
  - This is a big, complex task and touches multiple projects. We have a
  fair number of users using the autoscaling feature, but not many using
  self-healing for now, so we will focus on each feature and on the
  integration of the two separately.
  - First, Heat can already orchestrate autoscaling, but we need to
  improve its stability. We are still going through our code base to see
  how we can modularize the current implementation and improve from
  there, and will share more information with everyone. We are also
  starting to discuss autoscaling integration [3]; hopefully we can find
  a better solution and combine forces from Heat and Senlin as a
  long-term target. Please give your feedback if you also care about
  this target.
  - For self-healing, we propose some co-work on cross-project gating in
  the Self-healing SIG. We have not produced a tempest test yet, but we
  assume we can start to set up a job and discuss how we can help
  projects adopt it. We also had discussions with the Octavia team ([7]
  and [8]) and the Monasca team about adopting support for event
  alarms/notifications, which we plan to put into action. If you also
  think these are important features, please contribute development
  resources so we can get them done in this cycle.
  - To integrate the two scenarios, I will try to add more tasks to [6]
  and eliminate as many as we can. We also plan to document these
  scenarios, so everyone can play with autoscaling + self-healing
  easily.
   - Glance resource update
  - We deprecate image resource in 

[openstack-dev] [keystone] noop role in openstack

2018-09-19 Thread Adrian Turjak
For Adam's benefit continuing this a bit in email:

regarding the noop role:

http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-09-20.log.html#t2018-09-20T04:13:43

The first benefit of such a role (in the given policy scenario) is that
you can now give a user explicit scope on a project (but they can't do
anything) and then use that role for Swift ACLs with full knowledge they
can't do anything other than auth, scope to the project, and then
whatever the ACLs let them do. An example use case being: "a user that
can ONLY talk to a specific container and NOTHING else in OpenStack or
Swift" which is really useful if you want to use a single project for a
lot of websites, or backups, or etc.
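As an illustration of that use case, the container ACL could be set with
the standard python-swiftclient CLI (the container name and the IDs below
are placeholders, not taken from the mail):

```shell
# Grant the noop-role user read/write on ONE container; project-wide
# policy still lets them do nothing else.
swift post websites \
    --read-acl  "PROJECT_ID:USER_ID" \
    --write-acl "PROJECT_ID:USER_ID"
```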

Or in my MFA case, a role I can use when wanting a user to still be able
to auth and setup their MFA, but not actually touch any resources until
they have MFA setup at which point you give them back their real member
role.

It all relies on leaving no policy rules 'empty' unless those rules (and
their API) really are safe for a noop role. And by 'empty' I don't
literally mean empty; I mean rules that accept "any role on a project",
because those are painful to work with.

With the default policies in Nova (and most other projects), you can't
actually make proper use of Swift ACLs, because having any role on a
project gives you access to all the resources. Like say:
https://github.com/openstack/nova/blob/master/nova/policies/base.py#L31

^ that rule implies that if you are scoped to the project, regardless of
role, you can do anything to the resources. That doesn't work for anything
role-specific. Such rules would need to be:
"is_admin:True or (role:member and project_id:%(project_id)s)"
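To make the contrast concrete, here is a minimal, self-contained sketch
(plain Python, not oslo.policy; all names are illustrative) of the two rule
styles applied to a user holding only a noop role:

```python
# Illustrative only: shows why a rule that never inspects the role
# defeats role-based restrictions such as a "noop" role.

def rule_any_role(creds, target):
    # Mirrors the admin-or-owner style rule: admin, or merely being
    # scoped to the project -- the role is never inspected.
    return creds["is_admin"] or creds["project_id"] == target["project_id"]

def rule_member_role(creds, target):
    # The stricter form proposed in the mail:
    # "is_admin:True or (role:member and project_id:%(project_id)s)"
    return creds["is_admin"] or (
        "member" in creds["roles"]
        and creds["project_id"] == target["project_id"]
    )

noop_user = {"is_admin": False, "roles": ["noop"], "project_id": "p1"}
target = {"project_id": "p1"}

print(rule_any_role(noop_user, target))     # True: noop user can touch resources
print(rule_member_role(noop_user, target))  # False: noop role really is a no-op
```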

If we stop with this assumption that "any role" on a project works,
suddenly policy becomes more powerful and the roles are actually useful
beyond admin vs not admin. System scope will help, but then we'll still
only have system scope, admin on a project, and not admin on a project,
which still makes the role mostly pointless.

We as a community need to stop with this assumption (that "any role" on
a project works), because it hurts us in regards to actually useful
RBAC. Yes, deployers can edit the policy to avoid the any-role-on-a-project
issue (we have), but that's a huge amount of work, which we could instead
do together once and fix upstream.

Part of that work is actually happening, with the default roles that
Keystone is defining, and with system scope. We can then start updating all
the project default policies to actually require those roles explicitly,
but that effort needs everyone on board...


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: Denver Ops Meetup post-mortem

2018-09-19 Thread Kendall Nelson
Hello!

On Tue, Sep 18, 2018 at 12:36 PM Chris Morgan  wrote:

>
>
> -- Forwarded message -
> From: Chris Morgan 
> Date: Tue, Sep 18, 2018 at 2:13 PM
> Subject: Denver Ops Meetup post-mortem
> To: OpenStack Operators 
>
>
>  Hello All,
>   Last week we had a successful Ops Meetup embedded in the OpenStack
> Project Team Gathering in Denver.
>
> Despite generally being a useful gathering, there were definitely lessons
> learned and things to work on, so I thought it would be useful to share a
> post-mortem. I encourage everyone to share their thoughts on this as well.
>
> What went well:
>
> - some of the sessions were great and a lot of progress was made
> - overall attendance in the ops room was good
> - more developers were able to join the discussions
> - facilities were generally fine
> - some operators leveraged being at PTG to have useful involvement in
> other sessions/discussions such as Keystone, User Committee, Self-Healing
> SIG, not to mention the usual "hallway conversations", and similarly some
> project devs were able to bring pressing questions directly to operators.
>
> What didn't go so well:
>
> - Merging into upgrade SIG didn't go particularly well
> - fewer ops attended (in particular there were fewer from outside the US)
> - Some of the proposed sessions were not well vetted
> - some ops who did attend stated that the event identity was diluted and
> it was less attractive
> - we tried to adjust the day 2 schedule to include late submissions,
> however it was probably too late in some cases
>
> I don't think it's so important to drill down into all the whys and
> wherefores of how we fell down here except to say that the ops meetups team
> is a small bunch of volunteers all with day jobs (presumably just like
> everyone else on this mailing list). The usual, basically.
>
> Much more important : what will be done to improve things going forward:
>
> - The User Committee has offered to get involved with the technical
> content. In particular to bring forward topics from other relevant events
> into the ops meetup planning process, and then take output from ops meetups
> forward to subsequent events. We (ops meetup team) have welcomed this.
>
> - The Ops Meetups Team will endeavor to start topic selection earlier and
> have a more critical approach. Having a longer list of possible sessions
> (when starting with material from earlier events) should make it at least
> possible to devise a better agenda. Agenda quality drives attendance to
> some extent and so can ensure a virtuous circle.
>
> - We need to work out whether we're doing fixed schedule events (similar
> to previous mid-cycle Ops Meetups) or fully flexible PTG-style events, but
> grafting one onto the other ad-hoc clearly is a terrible idea. This needs
> more discussion.
>
> - The Ops Meetups Team continues to explore strange new worlds, or at
> least get in touch with more and more OpenStack operators to find out what
> the meetups team and these events could do for them and hence drive the
> process better. One specific work item here is to help the (widely
> disparate) operator community with technical issues such as getting setup
> with the openstack git/gerrit and IRC. The latter is the preferred way for
> the community to meet, but is particularly difficult now with the
> registered nickname requirement. We will add help documentation on how to
> get over this hurdle.
>

After you get onto Freenode IRC you can register your nickname with a
single command, and then you should be able to join any of the channels.
The command you need is: '/msg nickserv register $PASSWORD $EMAIL_ADDRESS'.
You can find more instructions here about setting up IRC[1].

If you get stuck or have any questions, please let me know! I am happy to
help with the setup of IRC or gerrit or anything else that might be a
barrier.


> - YOUR SUGGESTION HERE
>
> Chris
>
> --
> Chris Morgan 
>
>
> --
> Chris Morgan 


-Kendall Nelson (diablo_rojo)

[1] https://docs.openstack.org/contributors/common/irc.html#


Re: [openstack-dev] [nova] When can/should we change additionalProperties=False in GET /servers(/detail)?

2018-09-19 Thread Ghanshyam Mann



  On Wed, 19 Sep 2018 02:26:30 +0900 Matt Riedemann  
wrote  
 > On 9/17/2018 9:41 PM, Ghanshyam Mann wrote:
 > >    On Tue, 18 Sep 2018 09:33:30 +0900 Alex Xu  wrote 
 > > 
 > >   > That only means after 599276 we only have servers API and 
 > > os-instance-action API stopped accepting the undefined query parameter.
 > >   > What I'm thinking about is checking all the APIs, add json-query-param 
 > > checking with additionalProperties=True if the API don't have yet. And 
 > > using another microversion set additionalProperties to False, then the 
 > > whole Nova API become consistent.
 > > 
 > > I too vote for doing it for all the other APIs together. Restricting 
 > > unknown query or request params is very useful for API consistency. Item#1 
 > > in this etherpad: https://etherpad.openstack.org/p/nova-api-cleanup
 > > 
 > > If you would like, I can propose a quick spec for that; if the response 
 > > to doing it all together is positive, we skip doing it in 599276, 
 > > otherwise we do it for GET servers in 599276.
 > > 
 > > -gmann
 > 
 > I don't care too much about changing all of the other 
 > additionalProperties=False in a single microversion given we're already 
 > kind of inconsistent with this in a few APIs. Consistency is ideal, but 
 > I thought we'd be lumping in other cleanups from the etherpad into the 
 > same microversion/spec which will likely slow it down during spec 
 > review. For example, I'd really like to get rid of the weird server 
 > response field prefixes like "OS-EXT-SRV-ATTR:". Would we put those into 
 > the same mass cleanup microversion / spec or split them into individual 
 > microversions? I'd prefer not to see an explosion of microversions for 
 > cleaning up oddities in the API, but I could see how doing them all in a 
 > single microversion could be complicated.

Sounds good to me. I also do not feel like increasing microversions for
every cleanup; I would like to see all of the worthy cleanups in a single
microversion. I have pushed a spec for further discussion/debate:
https://review.openstack.org/#/c/603969/
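For illustration, the behavioral change that additionalProperties=False
brings can be sketched as follows (a hand-rolled stand-in, not Nova's
actual jsonschema-based validation; parameter names are made up):

```python
# Stand-in for jsonschema's additionalProperties handling of query params.
def validate_query(params, allowed, additional_properties):
    unknown = set(params) - set(allowed)
    if unknown and not additional_properties:
        # additionalProperties=False: reject the request (HTTP 400 in the API)
        raise ValueError("unknown query parameters: %s" % sorted(unknown))
    # additionalProperties=True: unknown params are silently ignored
    return True

allowed = {"limit", "marker", "sort_key"}

# Pre-microversion behavior: unknown "foo" is ignored
validate_query({"limit": "10", "foo": "bar"}, allowed, additional_properties=True)

# Post-microversion behavior: the same request is rejected
try:
    validate_query({"limit": "10", "foo": "bar"}, allowed, additional_properties=False)
except ValueError as exc:
    print(exc)  # unknown query parameters: ['foo']
```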

-gmann
 > -- 
 > 
 > Thanks,
 > 
 > Matt
 > 
 > 





Re: [openstack-dev] [docs][i18n][ptg] Stein PTG Summary

2018-09-19 Thread Ian Y. Choi
Thanks a lot for the nice summary, especially on the Docs part!
I would like to add a summary with more context from the I18n perspective.

Note: I mainly participated in the Docs/I18n discussions on Monday &
Tuesday only (I was not available Wed - Fri due to conflicts with other
work in my country). My summary may differ from what the current I18n PTL
would write had he participated in the Stein PTG, but I would like to
summarize as an I18n ex-PTL (Ocata, Pike) and as one of the active
participants in the Docs/I18n discussions.

The Documentation & I18n teams have held collaborative discussions since
the Pike PTG. Following the Queens & Rocky cycles, I am very happy that the
two teams again collaborated tightly at the Stein PTG, sharing issues in
horizontal discussion.

More details for I18n issues are available at the bottom part ("i18n
Topics") in:
https://etherpad.openstack.org/p/docs-i18n-ptg-stein

PROJECT DOCUMENTATION TRANSLATION SUPPORT

This year, the I18n team actively started to support project documentation
translation [1], and there has been progress on defining documentation
translation targets, generatepot infra jobs, and translation sync from
project repositories to Zanata (for translation sources) and from Zanata to
project repositories (for translated strings).
[2] and [3] are parts of the work the I18n team did in the previous cycle;
the final part will be how to support publication of translated
documentation in alignment with the Documentation team, since the PDF
support implementation is also related to how PDF files are published for
project repositories.

Although there were very nice discussions during the last Vancouver Summit
[4], a more generic idea on the infra side of how to better support
translated documentation, PDF builds, and translation will be needed after
some changes to the Project Testing Interface, which is used for project
documentation builds [5].

[6] is a nice summary from Doug (really appreciated!) of the direction and
plans for PDF and translation builds using the openstack-tox-docs job [7].
The I18n team would like to continue working with the Documentation and
Infrastructure teams on the actual implementation during the Stein cycle.

USER SURVEY, TRANSLATING WHITEPAPERS, AND RECOGNITION ON TRANSLATORS

Through nice collaboration between the Foundation and the I18n team, the
I18n team started to translate the OpenStack user survey [8] (after an
initial discussion at the Pike PTG), the edge computing whitepaper [9], and
the container whitepaper [10] into multiple languages, with many language
coordinators and translators.

This translation effort is somewhat different from the Active Technical
Contributor (ATC) recognition that translators also receive for OpenStack
project translation and technical documentation translation [11]. Some
translators shared that they would be interested in translating technical
documents but not the OpenStack user survey and other non-technical
documents.

I thought that this might be due to the different governance (Foundation-led
documents and official projects under the Technical Committee are
different), and that Active User Contributor (AUC) [12] recognition might be
a good idea. Although I could not discuss the details with User Committee
members during the PTG, the Foundation agreed that AUC recognition for such
translators would be a good idea, and Melvin, one of the User Committee
members, agreed with the idea during a very short conversation. In my
opinion, it will take some time for more discussion and agreement on the
detailed criteria (e.g., what number of translated words would align well
with the current AUC recognition criteria) among translators, the User
Committee, and the Foundation, but let's try to move forward on this :)

Also, documenting the detailed steps and activities for the user survey in
future years, and more on the whitepapers, will be important, so the I18n
team will better document how team members carry out these steps, as in
[13].

Some translators also shared the concern that there is no I18n entry in the
OpenStack project navigator and map. This is also related to recognition of
what translators contribute. The Foundation explained that this might be
due to the intended purpose of each artifact (e.g., the map was designed to
show OpenStack components and how those components interact from a
technical perspective), and shared that it would do its best to aggregate
the scattered definitions (e.g., [14], [15], and elsewhere) for
consistency, and to find a nice place for Docs, I18n, Congress, etc. on the
Software/Project Navigator.

TRANSLATING CONTRIBUTOR GUIDE WITH FIRST CONTACT SIG

The detailed summary is in [16]. To summarize:
 - I shared the I18n process with First Contact SIG members. Participants
acknowledged that setting a *translation period* would be needed, but that
starting with the initial translation process would be a good idea since
the guide is not translated yet.
 - Participants think that user groups would be interested in translating
the contributor guide.
 - I will set up translation sync and publication for the contributor
guide, and Kendall Nelson would kindly
   help 

Re: [openstack-dev] [Openstack-sigs] [Openstack-operators] [tc]Global Reachout Proposal

2018-09-19 Thread Anita Kuno

On 2018-09-18 08:40 AM, Jeremy Stanley wrote:

On 2018-09-18 11:26:57 +0900 (+0900), Ghanshyam Mann wrote:
[...]

I can understand that IRC cannot be used in China which is very
painful and mostly it is used weChat.

[...]

I have yet to hear anyone provide first-hand confirmation that
access to Freenode's IRC servers is explicitly blocked by the
mainland Chinese government. There has been a lot of speculation
that the usual draconian corporate firewall policies (surprise, the
rest of the World gets to struggle with those too, it's not just a
problem in China) are blocking a variety of messaging protocols from
workplace networks and the people who encounter this can't tell the
difference because they're already accustomed to much of their other
communications being blocked at the border. I too have heard from
someone who's heard from someone that "IRC can't be used in China"
but the concrete reasons why continue to be missing from these
discussions.



I'll reply to this email arbitrarily in order to comply with Zhipeng 
Huang's wishes that the conversation concerned with understanding the 
actual obstacles to communication takes place on the mailing list. I do 
hope I am posting to the correct thread.


In response to part of your comment on the patch at 
https://review.openstack.org/#/c/602697/ which you posted about 5 hours 
ago you said "@Anita you are absolutely right it is only me stuck my 
head out speaks itself the problem I stated in the patch. Many of the 
community tools that we are comfortable with are not that accessible to 
a broader ecosystem. And please assured that I meant I refer the patch 
to the Chinese community, as Leong also did on the ML, to try to bring 
them over to join the convo." and I would like to reply.


I would like to say that I am honoured by your generosity. Thank you. 
Now, when the Chinese community consumes the patch, as well as the 
conversation in the comments, please encourage folks to ask for 
clarification if any descriptions or phrases don't make sense to them. 
One of the best ways of ensuring clear communication is to start off 
slowly and take the time to ask what the other side means. It can seem 
tedious and a waste of time, but I have found it to be very educational 
and helpful in understanding how the other person perceives the 
situation. It also helps me to understand how I am creating obstacles in 
ways that I talk.


Taking time to clarify helps me to adjust how I am speaking so that my 
meaning is more likely to be understood by the group to which I am 
trying to offer my perspective. I do appreciate that many people are 
trying to avoid embarrassment, but I have never found any way to 
understand people in a culture that is not the one I grew up in, other 
than embarrassing myself and working through it. Usually I find the 
group I am wanting to understand is more than willing to rescue me from 
my embarrassment and support me in my learning. In a strange way, the 
embarrassment is kind of helpful in order to create understanding 
between myself and those people I am trying to understand.


Thank you, Anita



Re: [openstack-dev] [docs] Nominating Ian Y. Choi for openstack-doc-core

2018-09-19 Thread Chan Chason
+1

> On Sep 20, 2018, at 2:50 AM, Petr Kovar  wrote:
> 
> Hi all,
> 
> Based on our PTG discussion, I'd like to nominate Ian Y. Choi for
> membership in the openstack-doc-core team. I think Ian doesn't need an
> introduction, he's been around for a while, recently being deeply involved
> in infra work to get us robust support for project team docs translation and
> PDF builds. 
> 
> Having Ian on the core team will also strengthen our integration with
> the i18n community.
> 
> Please let the ML know should you have any objections.
> 
> Thanks,
> pk
> 



Re: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper"

2018-09-19 Thread Matthew Treinish
On Thu, Sep 20, 2018 at 11:11:12AM +0900, Ghanshyam Mann wrote:
>   On Wed, 19 Sep 2018 23:29:46 +0900 Monty Taylor  
> wrote  
>  > On 09/19/2018 09:23 AM, Monty Taylor wrote:
>  > > On 09/19/2018 08:25 AM, Chris Dent wrote:
>  > >>
>  > >> I have a patch in progress to add some simple integration tests to
>  > >> placement:
>  > >>
>  > >>  https://review.openstack.org/#/c/601614/
>  > >>
>  > >> They use https://github.com/cdent/gabbi-tempest . The idea is that
>  > >> the method for adding more tests is to simply add more yaml in
>  > >> gate/gabbits, without needing to worry about adding to or think
>  > >> about tempest.
>  > >>
>  > >> What I have at that patch works; there are two yaml files, one of
>  > >> which goes through the process of confirming the existence of a
>  > >> resource provider and inventory, booting a server, seeing a change
>  > >> in allocations, resizing the server, seeing a change in allocations.
>  > >>
>  > >> But this is kludgy in a variety of ways and I'm hoping to get some
>  > >> help or pointers to the right way. I'm posting here instead of
>  > >> asking in IRC as I assume other people confront these same
>  > >> confusions. The issues:
>  > >>
>  > >> * The associated playbooks are cargo-culted from stuff labelled
>  > >>"legacy" that I was able to find in nova's jobs. I get the
>  > >>impression that these are more verbose and duplicative than they
>  > >>need to be and are not aligned with modern zuul v3 coolness.
>  > > 
>  > > Yes. Your life will be much better if you do not make more legacy jobs. 
>  > > They are brittle and hard to work with.
>  > > 
>  > > New jobs should either use the devstack base job, the devstack-tempest 
>  > > base job or the devstack-tox-functional base job - depending on what 
>  > > things are intended.
> 
> +1. All the base jobs from Tempest and Devstack (except grenade, which is in 
> progress) are available to use as bases for legacy jobs. Using 
> devstack-tempest in your patch is the right thing. In addition, you need to 
> set tox_envlist to all-plugins to make tempest_test_regex work. I 
> commented on the review. 

No, all-plugins is incorrect and should never be used. It's only there for
legacy support; it is deprecated, and I thought we pushed a patch indicating
that (but I can't find it). It tells tox to create a venv with system
site-packages enabled, and that almost always causes more problems than it
fixes. Specifying the plugin with TEMPEST_PLUGINS will make sure the plugin
is installed in tempest's venv, and if you need to run a tox job without a
preset selection regex (so you can specify your own) you should use the
"all" job (not all-plugins).

-Matt Treinish
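For anyone wiring this up, a sketch of what this advice adds up to in a
project's .zuul.yaml might look like the following. The job name, test regex,
and plugin path are purely illustrative, and the variable names should be
checked against the current devstack-tempest base job:

```yaml
# Hypothetical child job of devstack-tempest; all names here are illustrative.
- job:
    name: placement-gabbi-tempest
    parent: devstack-tempest
    vars:
      tox_envlist: all            # the "all" env, not the deprecated all-plugins
      tempest_test_regex: gabbi   # select only the gabbi-driven tests
      devstack_localrc:
        # Installs the plugin into tempest's own venv via devstack's lib/tempest.
        TEMPEST_PLUGINS: /opt/stack/gabbi-tempest
```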

> 
>  > > 
>  > > You might want to check out:
>  > > 
>  > > https://docs.openstack.org/devstack/latest/zuul_ci_jobs_migration.html
>  > > 
>  > > also, cmurphy has been working on updating some of keystone's legacy 
>  > > jobs recently:
>  > > 
>  > > https://review.openstack.org/602452
>  > > 
>  > > which might also be a source for copying from.
>  > > 
>  > >> * It takes an age for the underlying devstack to build, I can
>  > >>presumably save some time by installing fewer services, and making
>  > >>it obvious how to add more when more are required. What's the
>  > >>canonical way to do this? Mess with {enable,disable}_service, cook
>  > >>the ENABLED_SERVICES var, do something with required_projects?
>  > > 
>  > > http://git.openstack.org/cgit/openstack/openstacksdk/tree/.zuul.yaml#n190
>  > > 
>  > > Has an example of disabling services, of adding a devstack plugin, and 
>  > > of adding some lines to localrc.
>  > > 
>  > > 
>  > > http://git.openstack.org/cgit/openstack/openstacksdk/tree/.zuul.yaml#n117
>  > > 
>  > > Has some more complex config bits in it.
>  > > 
>  > > In your case, I believe you want to have parent: devstack-tempest 
>  > > instead of parent: devstack-tox-functional
>  > > 
>  > > 
>  > >> * This patch, and the one that follows it [1] dynamically install
>  > >>stuff from pypi in the post test hooks, simply because that was
>  > >>the quick and dirty way to get those libs in the environment.
>  > >>What's the clean and proper way? gabbi-tempest itself needs to be
>  > >>in the tempest virtualenv.
>  > > 
>  > > This I don't have an answer for. I'm guessing this is something one 
>  > > could do with a tempest plugin?
>  > 
>  > K. This:
>  > 
>  > 
> http://git.openstack.org/cgit/openstack/neutron-tempest-plugin/tree/.zuul.yaml#n184
> 
> Yeah, you can install it via the TEMPEST_PLUGINS var. All plugins specified in 
> the TEMPEST_PLUGINS var will be installed into the tempest venv[1]. You can 
> specify gabbi-tempest the same way. 
> 
> [1] 
> https://github.com/openstack-dev/devstack/blob/6f4b7fc99c4029d25a924bcad968089d89e9d296/lib/tempest#L663
> 
> -gmann
> 
>  > 
>  > Has an example of a job using a tempest plugin.
>  > 
>  > >> * The post.yaml playbook which gathers up 

Re: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper"

2018-09-19 Thread Ghanshyam Mann
  On Wed, 19 Sep 2018 23:29:46 +0900, Monty Taylor wrote:
 > On 09/19/2018 09:23 AM, Monty Taylor wrote:
 > > On 09/19/2018 08:25 AM, Chris Dent wrote:
 > >>
 > >> I have a patch in progress to add some simple integration tests to
 > >> placement:
 > >>
 > >>  https://review.openstack.org/#/c/601614/
 > >>
 > >> They use https://github.com/cdent/gabbi-tempest . The idea is that
 > >> the method for adding more tests is to simply add more yaml in
 > >> gate/gabbits, without needing to worry about adding to or think
 > >> about tempest.
 > >>
 > >> What I have at that patch works; there are two yaml files, one of
 > >> which goes through the process of confirming the existence of a
 > >> resource provider and inventory, booting a server, seeing a change
 > >> in allocations, resizing the server, seeing a change in allocations.
 > >>
 > >> But this is kludgy in a variety of ways and I'm hoping to get some
 > >> help or pointers to the right way. I'm posting here instead of
 > >> asking in IRC as I assume other people confront these same
 > >> confusions. The issues:
 > >>
 > >> * The associated playbooks are cargo-culted from stuff labelled
 > >>"legacy" that I was able to find in nova's jobs. I get the
 > >>impression that these are more verbose and duplicative than they
 > >>need to be and are not aligned with modern zuul v3 coolness.
 > > 
 > > Yes. Your life will be much better if you do not make more legacy jobs. 
 > > They are brittle and hard to work with.
 > > 
 > > New jobs should either use the devstack base job, the devstack-tempest 
 > > base job or the devstack-tox-functional base job - depending on what 
 > > things are intended.

+1. All the base jobs from Tempest and Devstack (except grenade, which is in 
progress) are available to use as a base for migrating legacy jobs. Using 
devstack-tempest in your patch is the right thing. In addition, you need to set 
tox_envlist to all-plugins to make tempest_test_regex work. I commented on the review. 

 > > 
 > > You might want to check out:
 > > 
 > > https://docs.openstack.org/devstack/latest/zuul_ci_jobs_migration.html
 > > 
 > > also, cmurphy has been working on updating some of keystone's legacy 
 > > jobs recently:
 > > 
 > > https://review.openstack.org/602452
 > > 
 > > which might also be a source for copying from.
 > > 
 > >> * It takes an age for the underlying devstack to build, I can
 > >>presumably save some time by installing fewer services, and making
 > >>it obvious how to add more when more are required. What's the
 > >>canonical way to do this? Mess with {enable,disable}_service, cook
 > >>the ENABLED_SERVICES var, do something with required_projects?
 > > 
 > > http://git.openstack.org/cgit/openstack/openstacksdk/tree/.zuul.yaml#n190
 > > 
 > > Has an example of disabling services, of adding a devstack plugin, and 
 > > of adding some lines to localrc.
 > > 
 > > 
 > > http://git.openstack.org/cgit/openstack/openstacksdk/tree/.zuul.yaml#n117
 > > 
 > > Has some more complex config bits in it.
 > > 
 > > In your case, I believe you want to have parent: devstack-tempest 
 > > instead of parent: devstack-tox-functional
 > > 
 > > 
 > >> * This patch, and the one that follows it [1] dynamically install
 > >>stuff from pypi in the post test hooks, simply because that was
 > >>the quick and dirty way to get those libs in the environment.
 > >>What's the clean and proper way? gabbi-tempest itself needs to be
 > >>in the tempest virtualenv.
 > > 
 > > This I don't have an answer for. I'm guessing this is something one 
 > > could do with a tempest plugin?
 > 
 > K. This:
 > 
 > http://git.openstack.org/cgit/openstack/neutron-tempest-plugin/tree/.zuul.yaml#n184

Yeah, you can install it via the TEMPEST_PLUGINS var. All plugins specified in 
the TEMPEST_PLUGINS var will be installed into the tempest venv[1]. You can 
specify gabbi-tempest the same way. 

[1] 
https://github.com/openstack-dev/devstack/blob/6f4b7fc99c4029d25a924bcad968089d89e9d296/lib/tempest#L663

-gmann

 > 
 > Has an example of a job using a tempest plugin.
 > 
 > >> * The post.yaml playbook which gathers up logs seems like a common
 > >>thing, so I would hope could be DRYed up a bit. What's the best
 > >>way to that?
 > > 
 > > Yup. Legacy devstack-gate based jobs are pretty terrible.
 > > 
 > > You can delete the entire post.yaml if you move to the new devstack base 
 > > job.
 > > 
 > > The base devstack job has a much better mechanism for gathering logs.
 > > 
 > >> Thanks very much for any input.
 > >>
 > >> [1] perf logging of a loaded placement: 
 > >> https://review.openstack.org/#/c/602484/
 > >>
 > >>
 > >>
 > >> __
 > >>  
 > >>
 > >> OpenStack Development Mailing List (not for usage questions)
 > >> Unsubscribe: 
 > >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > >> 
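Pulling together the service-trimming advice above, a hypothetical job
following the openstacksdk .zuul.yaml pattern Monty links to might look like
this. The job name, service keys, and localrc setting are illustrative only;
check the devstack base job's default devstack_services for the real names:

```yaml
# Hypothetical example of disabling unneeded services in a Zuul v3 devstack job.
- job:
    name: placement-minimal-devstack
    parent: devstack-tempest
    vars:
      devstack_services:
        horizon: false       # each key toggles one devstack service
        s-account: false     # the swift services, disabled individually
        s-container: false
        s-object: false
        s-proxy: false
      devstack_localrc:
        # Extra local.conf settings go here as key/value pairs.
        FORCE_CONFIG_DRIVE: true
```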

Re: [openstack-dev] Fwd: Denver Ops Meetup post-mortem

2018-09-19 Thread Jimmy McArthur
Thanks for the thorough write-up as well as the detailed feedback.  I'm 
including some of my notes from the Ops Meetup Feedback session just a 
bit below, as well as some comments inline.


One of the critical things that would help both the Ops and Dev 
community is to have a holistic sense of what the Ops Meetup goals are.


 * Were the goals well defined ahead of the event?
 * Were they achieved and/or how can the larger OpenStack community
   help them achieve them?

From our discussion at the Feedback session, this isn't something that 
has been tracked in the past.  Having actionable, measurable goals 
coming out of the Ops Meetup could go a long way towards helping the 
projects realize them.  Per our discussion, being able to present this 
list to the User Committee would be a good step forward for each event.


I wasn't able to attend the entire time, but a couple of interesting notes:

 * The knowledge of deployment tools seemed pretty fragmented and it
   seemed like there was a desire for more clear and comprehensive
   documentation comparing the different deployment options, as well as
   documentation about how to get started with a POC.
 * Bare Metal in the Datacenter: It was clear that we need more Ironic
   101 content and education, including how to get started, system
   requirements, etc. We can dig up presentations from previous Summits
   and also talked to TheJulia about potentially hosting a community
   meeting or producing another video leading up to the Berlin Summit.
 * Here are the notes from the sessions in case anyone on the ops list
   is interested:
   https://etherpad.openstack.org/p/ops-meetup-ptg-denver-2018

It looks like there were some action items documented at the bottom of 
this etherpad: https://etherpad.openstack.org/p/ops-denver-2018-further-work


Ops Meetup Feedback Takeways from Feedback Session not covered below 
(mostly from https://etherpad.openstack.org/p/uc-stein-ptg)

Chris Morgan wrote:

--SNIP --

What went well

- some of the sessions were great and a lot of progress was made
- overall attendance in the ops room was good
We had to add 5 tables to accommodate the additional attendees. It was a 
great crowd!

- more developers were able to join the discussions
Given that this is something that wouldn't happen at a normal Ops 
Meetup, is there a way we could help facilitate this in the future 
while still meeting the Ops community's needs?

- facilities were generally fine
- some operators leveraged being at PTG to have useful involvement in 
other sessions/discussions such as Keystone, User Committee, 
Self-Healing SIG, not to mention the usual "hallway conversations", 
and similarly some project devs were able to bring pressing questions 
directly to operators.


What didn't go so well:

- Merging into upgrade SIG didn't go particularly well
This is a tough one b/c of the fluidity of the PTG. Agreed that one can 
end up missing a good chunk of the discussion.  OTOH, the flexibility of 
the event is what allows great discussions to take place.  In the 
future, I think better coordination w/ specific project teams + updating 
the PTGBot could help make sure the schedules are in sync.

- fewer ops attended (in particular there were fewer from outside the US)
Do you have demographics on the Ops Meetups in Japan or NY?  Curious to 
know how those compare to what we saw in Denver.  Is more promotion 
needed, or do these events just end up being more 
continent/regionally focused?

- Some of the proposed sessions were not well vetted
Are there any suggestions on how to improve this moving forward?  
Perhaps a CFP style submission process, with a small vetting group, 
could help this situation?  My understanding was the Tokyo event, 
co-located with OpenStack Days, didn't suffer this problem.
- some ops who did attend said the event's identity was diluted, making 
it less attractive
I'd love some more info on this. Please have these people reach out to 
let me know how we can fix this in the future.  Even if we decide not to 
hold another Ops Meetup at a PTG, this is relevant to how we run events.
- we tried to adjust the day 2 schedule to include late submissions, 
however it was probably too late in some cases


I don't think it's so important to drill down into all the whys and 
wherefores of how we fell down here except to say that the ops meetups 
team is a small bunch of volunteers all with day jobs (presumably just 
like everyone else on this mailing list). The usual, basically.


Much more important: what will be done to improve things going forward:

- The User Committee has offered to get involved with the technical 
content. In particular to bring forward topics from other relevant 
events into the ops meetup planning process, and then take output from 
ops meetups forward to subsequent events. We (ops meetup team) have 
welcomed this.
This is super critical IMO.  One of the things we discussed at the Ops 
Meetup Feedback session (co-located 

[openstack-dev] [cinder] Proposed Changes to the Core Team ...

2018-09-19 Thread Jay S Bryant

All,

In the last year we have had some changes to Core team participation.  
This was a topic of discussion at the PTG in Denver last week.  Based on 
that discussion I have reached out to John Griffith and Winston D (Huang 
Zhiteng) and asked if they felt they could continue to be a part of the 
Core Team.  Both agreed that it was time to relinquish their titles.


So, I am proposing to remove John Griffith and Winston D from Cinder 
Core.  If I hear no concerns with this plan in the next week I will 
remove them.


It is hard to remove people who have been so instrumental in the early 
days of Cinder.  Your past contributions are greatly appreciated and the 
team would be happy to have you back if circumstances ever change.


Sincerely,
Jay Bryant

(jungleboyj)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Openstack-operators][horizon] Dashboard memory leaks

2018-09-19 Thread Xingchao
Hi All,

Recently, we found that the server hosting the Horizon dashboard hit OOM
several times, caused by the Horizon services. After restarting the dashboard,
memory usage goes up very quickly if we access the /project/network_topology/
path.

*How to reproduce*

Log into the dashboard, go to the 'Network Topology' tab, and leave it
there (it auto-refreshes every 10s by default); then monitor the memory
changes on the host.

*Versions and Components*

Dashboard: Stable/Pike
Server: uWSGI 1.9.17-1
OS: Ubuntu 14.04 trusty
Python: 2.7.6

As the memoized code has changed little since Pike, you should also be able
to reproduce this on the Queens/Rocky releases.

*The investigation*

The root cause of the memory leak is the decorator
memoized (horizon/utils/memoized.py), which is used to cache function calls
in Horizon.

After disabling it, the memory growth is under control.

The following compares the memory change (measured with guppy) for each
request to /project/network_topology:

 - original (no code change): 684kb

 - garbage collection run manually: 185kb

 - memoized cache disabled: 10kb

As we know, memoized uses weakrefs to cache objects. A weak reference to an
object is not enough to keep the object alive: when the only remaining
references to a referent are weak references, garbage collection is free to
destroy the referent and reuse its memory for something else.
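That weak-reference behavior can be seen directly in a few lines (a minimal
illustration, unrelated to Horizon's code):

```python
import gc
import weakref

class Payload(object):
    """Stand-in for an object a cache might hold."""

obj = Payload()
ref = weakref.ref(obj)
assert ref() is obj   # resolvable while a strong reference exists

del obj               # drop the last strong reference
gc.collect()          # CPython frees on refcount; collect() for good measure
assert ref() is None  # a weak reference alone does not keep the referent alive
```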

In memory, we could see lots of weakref objects; the following is an
example:

Partition of a set of 394 objects. Total size = 37824 bytes.
 Index  Count   %   Size   %  Cumulative    %  Kind (class / dict of class)
     0    197  50  18912  50       18912   50  _cffi_backend.CDataGCP
     1    197  50  18912  50       37824  100  weakref.KeyedRef

But the rest of the objects are not. The following shows the change in memory
objects per /project/network_topology access, with garbage collection run
manually:

Partition of a set of 1017 objects. Total size = 183680 bytes.
 Index  Count   %   Size   %  Cumulative    %  Referrers by Kind (class / dict of class)
     0    419  41  58320  32       58320   32  dict (no owner)
     1    100  10  23416  13       81736   44  list
     2    135  13  15184   8       96920   53  
     3      2   0   6704   4      103624   56  urllib3.connection.VerifiedHTTPSConnection
     4      2   0   6704   4      110328   60  urllib3.connectionpool.HTTPSConnectionPool
     5      1   0   3352   2      113680   62  novaclient.v2.client.Client
     6      2   0   2096   1      115776   63  OpenSSL.SSL.Connection
     7      2   0   2096   1      117872   64  OpenSSL.SSL.Context
     8      2   0   2096   1      119968   65  Queue.LifoQueue
     9     12   1   2096   1      122064   66  dict of urllib3.connectionpool.HTTPSConnectionPool

Most of them are dicts. The following shows the dicts sorted by class; as
you can see, most of them are not weakref objects:

Partition of a set of 419 objects. Total size = 58320 bytes.
 Index  Count   %   Size   %  Cumulative    %  Class
     0    362  86  50712  87       50712   87  unicode
     1     27   6   3736   6       54448   93  list
     2      5   1   2168   4       56616   97  dict
     3     22   5   1448   2       58064  100  str
     4      2   0    192   0       58256  100  weakref.KeyedRef
     5      1   0     64   0       58320  100  keystoneauth1.discover.Discover

*The issue*

So the problem is that memoized does not work the way we expect: it allocates
memory to cache objects, but some of them can never be released.
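The difference between a cache that leaks and one that lets the collector
reclaim entries can be sketched like this (a generic illustration with
hypothetical names, not Horizon's actual memoized implementation):

```python
import gc
import weakref

class Result(object):
    """Stand-in for a cached function result."""

strong_cache = {}                           # a plain dict holds strong references
weak_cache = weakref.WeakValueDictionary()  # entries vanish with their referents

a, b = Result(), Result()
strong_cache['k'] = a
weak_cache['k'] = b

del a, b        # drop the callers' references
gc.collect()

assert 'k' in strong_cache          # still cached: this is the leak pattern
assert weak_cache.get('k') is None  # reclaimed once no strong refs remain
```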
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [docs][i18n][ptg] Stein PTG Summary

2018-09-19 Thread Petr Kovar
Hi all,

Just wanted to share a summary of docs- and i18n-related meetings
and discussions we had in Denver last week during the Stein Project
Teams Gathering.

The overall schedule for all our sessions with additional comments and
meeting minutes can be found here:

https://etherpad.openstack.org/p/docs-i18n-ptg-stein

Our obligatory team picture (with quite a few members missing) can be
found here (courtesy of Foundation folks):

https://pmkovar.fedorapeople.org/DSC_4422.JPG

To summarize what I found most important:

OPS DOCS

We met with the Ops community to discuss the future of Ops docs. The plan
is for the Ops group to take ownership of the operations-guide (done),
ha-guide (in progress), and the arch-design guide (to do).

These three documents are being moved from openstack-manuals to their own
repos, owned by the newly formed Operations Documentation SIG.

See also
https://etherpad.openstack.org/p/ops-meetup-ptg-denver-2018-operations-guide
for more notes.

DOCS SITE & DESIGN

We discussed improving the site navigation, guide summaries (particularly
install-guide), adding a new index page for project team contrib guides, and
more. We met with the Foundation staff to discuss the possibility of getting
assistance with site design work.

We are also looking into accepting contributions from the Strategic Focus
Areas folks to make parts of the docs toolchain like openstackdocstheme more
easily reusable outside of the official OpenStack infrastructure.

We got feedback on front page template for project team docs, with Ironic
being the pilot for us.

We got input on restructuring and reworking specs site to make it easier
for users to understand that specs are not feature descriptions nor project
docs, and to make it more consistent in how the project teams publish their
specs. This will need to be further discussed with the folks owning the
specs site infra.

Support status badges showing at the top of docs.o.o pages may not work well
for projects following the cycle-with-intermediary release model, such as
Swift. We need to rethink how we configure and present the badges. 

There are also some UX bugs present for the badges
(https://bugs.launchpad.net/openstack-doc-tools/+bug/1788389).

TRANSLATIONS

We met with the infra team to discuss progress on translating project team
docs and, related to that, PDF builds.

With the Foundation staff, we discussed translating Edge and Container
whitepapers and related material.

REFERENCE, REST API DOCS AND RELEASE NOTES

With the QA team, we discussed the scope and purpose of the
/doc/source/reference documentation area in project docs. Because the
scope of /reference might be unclear and used inconsistently by project
teams, the suggestion is to continue with the original migration plan and
migrate REST API and possibly Release Notes under /doc/source, as described
in https://docs.openstack.org/doc-contrib-guide/project-guides.html. 

CONTRIBUTOR GUIDE

The OpenStack Contributor Guide was discussed in a separate session, see
https://etherpad.openstack.org/p/FC_SIG_ptg_stein for notes.

THAT'S IT?

Please add to the list if I missed anything important, particularly for
i18n.

Thank you to everybody who attended the sessions, and a special thanks goes
to all the PTG organizers!

Cheers,
pk

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-sigs] [tc]Global Reachout Proposal

2018-09-19 Thread Zhipeng Huang
A quick sidenote for anyone using riot.im who is not super familiar with it,
like I was ...

Its global search does not index all the openstack channels, so you
have to join them manually via commands, as I recently discovered:
1. add @appservice-irc:matrix.org as a friend
2. type in the console: !join chat.freenode.net #openstack-xxx

Registration is wackier, but I found the steps on Google anyway:
1. add @appservice-irc:matrix.org as a friend
2. type in the console: !storepass chat.freenode.net PASSWORD
3. add NickServ (IRC) as a friend
4. type in the console: identify NICK PASSWORD

voilà 

On Thu, Sep 20, 2018 at 1:34 AM Melvin Hillsman 
wrote:

> Regarding some web clients that are potentially useful
>
> https://webchat.freenode.net/
>   - Zane mentioned this already and I can say I tried/used it some time
> ago until I opted for CLI/alternatives
> https://riot.im (iOS and Android apps available along with online client)
>   - i find it a bit sluggish at times, others have not, either way it is a
> decent alternative
> https://thelounge.chat/
>   - have not tried it yet but looks promising especially self-hosted option
> https://irccloud.com
>   - what I currently use, I do believe it can be blocked, i am looking
> into riot and thelounge tbh
>
>
> On Wed, Sep 19, 2018 at 12:18 PM Zane Bitter  wrote:
>
>> On 18/09/18 9:10 PM, Jaesuk Ahn wrote:
>> > On Wed, Sep 19, 2018 at 5:30 AM Zane Bitter wrote:
>>
>> Resotring the whole quote here because I accidentally sent the original
>> to the -sigs list only and not the -dev list.
>>
>> >> As others have mentioned, I think this is diving into solutions when
>> we haven't defined the problems. I know you mentioned it briefly in the PTG
>> session, but that context never made it to the review or the mailing list.
>> >>
>> >> So AIUI the issue you're trying to solve here is that the TC members
>> seem distant and inaccessible to Chinese contributors because we're not on
>> the same social networks they are?
>> >>
>> >> Perhaps there are others too?
>> >>
>> >> Obvious questions to ask from there would be:
>> >>
>> >> - Whether this is the most important issue facing contributors from
>> the APAC region
>> >>
>> >> - To what extent the proposed solution is expected to help
>> >
>> >
>> > I do agree with Zane on the above point.
>>
>> For the record, I didn't express an opinion. I'm just pointing out what
>> the questions are.
>>
>> > As one of OpenStack participants from Asia region, I will put my
>> > personal opinion.
>> > IRC and the ML have been the unified, standard way of communication in
>> > the OpenStack community, and that has been a good way to encourage "open
>> > communication" through a unified method wherever you are from, or whatever
>> > background you have. If the whole community starts recognizing some other
>> > tool (say WeChat) as a recommended alternative communication method
>> > because there are many people there, then ironically, it might be a way to
>> > break the "diversity" and "openness" we want to embrace.
>> >
>> > Using whatever social media (or tools) in a specific region due to any
>> > reason is not a problem. Anyone is free to use anything. Only thing we
>> > need to make sure is, if you want to communicate officially with the
>> > whole community, there is a very well defined and unified way to do it.
>> > This is currently IRC and the ML. Some Korean devs have difficulties
>> > using IRC. However, there is no perfect tool out there in this world, and
>> > we accept the reasons why the community selected IRC as the official tool.
>> >
>> > But, that being said, There are some things I am facing with IRC from
>> > here in Korea
>> >
>> > As a person from Asia, I do have some pain points. Because of the time
>> > difference, I often have to search the archives, since most of the
>> > conversations happen while I am sleeping. IRC is not a good tool for
>> > searching the backlog. Although there are message archives you can dig
>> > through, it is still hard. This is a problem. I would love to see a
>> > technical solution that lets me efficiently and easily go through the
>> > IRC backlog, like most modern chat tools.
>> >
>> > Secondly, IRC is not popular even in the dev community here in Korea.
>> > In addition, in order to use IRC properly, you need to do extra work,
>> > such as setting up a bouncer server. I had to search Google to
>> > figure out how to use it.
>>
>> I think part of the disconnect here is that people have different ideas
>> about what IRC (and chat in general) is for.
>>
>> For me it's a way to conduct synchronous conversations. These tend to go
>> badly on the mailing list (really long threads of 1 sentence per
>> message) or on code review (have to keep refreshing), so it's good that
>> we have another tool to do this. I answer a lot of user questions,
>> clarify comments on patches, and obviously join team meetings in IRC.
>>
>> The key part is 

Re: [openstack-dev] [all] Zuul job backlog

2018-09-19 Thread Matt Riedemann

On 9/19/2018 2:45 PM, Matt Riedemann wrote:

Another one we need to make a decision on is:

https://bugs.launchpad.net/tempest/+bug/1783405

Which I'm suggesting we need to mark more slow tests with the actual 
"slow" tag in Tempest so they move to only be run in the tempest-slow 
job. gmann and I talked about this last week over IRC but I forgot to 
update the bug report with details. I think rather than increase the 
timeout of the tempest-full job we should be marking more slow tests as 
slow. Increasing timeouts gives some short-term relief but eventually we 
just have to look at these issues again, and a tempest run shouldn't 
take over 2 hours (remember when it used to take ~45 minutes?).


https://review.openstack.org/#/c/603900/

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] Nominating Ian Y. Choi for openstack-doc-core

2018-09-19 Thread Doug Hellmann
Excerpts from Petr Kovar's message of 2018-09-19 11:50:22 -0700:
> Hi all,
> 
> Based on our PTG discussion, I'd like to nominate Ian Y. Choi for
> membership in the openstack-doc-core team. I think Ian doesn't need an
> introduction, he's been around for a while, recently being deeply involved
> in infra work to get us robust support for project team docs translation and
> PDF builds. 
> 
> Having Ian on the core team will also strengthen our integration with
> the i18n community.
> 
> Please let the ML know should you have any objections.
> 
> Thanks,
> pk
> 

+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Zuul job backlog

2018-09-19 Thread Matt Riedemann

On 9/19/2018 2:11 PM, Clark Boylan wrote:

Unfortunately, right now our classification rate is very poor (only 15%), which 
makes it difficult to know what exactly is causing these failures. Mriedem and 
I have quickly scanned the unclassified list, and it appears there is a db 
migration testing issue causing these tests to timeout across several projects. 
Mriedem is working to get this classified and tracked which should help, but we 
will also need to fix the bug. On top of that it appears that Glance has flaky 
functional tests (both python2 and python3) which are causing resets and should 
be looked into.

If you'd like to help, let mriedem or myself know and we'll gladly work with 
you to get elasticsearch queries added to elastic-recheck. We are likely less 
help when it comes to fixing functional tests in Glance, but I'm happy to point 
people in the right direction for that as much as I can. If you can take a few 
minutes to do this before/after you issue a recheck it does help quite a bit.


Things have gotten bad enough that I've started proposing changes to 
skip particularly high failure rate tests that are not otherwise getting 
attention to help triage and fix the bugs. For example:


https://review.openstack.org/#/c/602649/

https://review.openstack.org/#/c/602656/

Generally this is a last resort since it means we're losing test 
coverage, but when we hit a critical mass of random failures it becomes 
extremely difficult to merge code.


Another one we need to make a decision on is:

https://bugs.launchpad.net/tempest/+bug/1783405

Which I'm suggesting we need to mark more slow tests with the actual 
"slow" tag in Tempest so they move to only be run in the tempest-slow 
job. gmann and I talked about this last week over IRC but I forgot to 
update the bug report with details. I think rather than increase the 
timeout of the tempest-full job we should be marking more slow tests as 
slow. Increasing timeouts gives some short-term relief but eventually we 
just have to look at these issues again, and a tempest run shouldn't 
take over 2 hours (remember when it used to take ~45 minutes?).


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs] Nominating Ian Y. Choi for openstack-doc-core

2018-09-19 Thread Michael Johnson
Also not a docs core, but fully support this nomination!

Michael

On Wed, Sep 19, 2018 at 12:25 PM Jay S Bryant  wrote:
>
>
>
> On 9/19/2018 1:50 PM, Petr Kovar wrote:
> > Hi all,
> >
> > Based on our PTG discussion, I'd like to nominate Ian Y. Choi for
> > membership in the openstack-doc-core team. I think Ian doesn't need an
> > introduction, he's been around for a while, recently being deeply involved
> > in infra work to get us robust support for project team docs translation and
> > PDF builds.
> >
> > Having Ian on the core team will also strengthen our integration with
> > the i18n community.
> >
> > Please let the ML know should you have any objections.
> Petr,
>
> Not a doc Core but wanted to add my support.  Agree he would be a great
> addition.  Appreciate all he does for i18n, docs and OpenStack!
>
> Jay
>
> > Thanks,
> > pk
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] Core status

2018-09-19 Thread Nate Johnston
On Wed, Sep 19, 2018 at 06:19:44PM, Gary Kotton wrote:

> I have recently transitioned to a new role where I will be working on other 
> parts of OpenStack. Sadly I do not have the necessary cycles to maintain my 
> core responsibilities in the neutron community. Nonetheless I will continue 
> to be involved.

Thanks for everything you've done over the years, Gary.  I know I
learned a lot from your reviews back when I was a wee baby Neutron
developer.  Best of luck on what's next!

Nate



Re: [openstack-dev] [docs] Nominating Ian Y. Choi for openstack-doc-core

2018-09-19 Thread Jay S Bryant



On 9/19/2018 1:50 PM, Petr Kovar wrote:

Hi all,

Based on our PTG discussion, I'd like to nominate Ian Y. Choi for
membership in the openstack-doc-core team. I think Ian doesn't need an
introduction, he's been around for a while, recently being deeply involved
in infra work to get us robust support for project team docs translation and
PDF builds.

Having Ian on the core team will also strengthen our integration with
the i18n community.

Please let the ML know should you have any objections.

Petr,

Not a doc Core but wanted to add my support.  Agree he would be a great 
addition.  Appreciate all he does for i18n, docs and OpenStack!


Jay


Thanks,
pk



[openstack-dev] [all] Zuul job backlog

2018-09-19 Thread Clark Boylan
Hello everyone,

You may have noticed there is a large Zuul job backlog and changes are not 
getting CI reports as quickly as you might expect. There are several factors 
interacting with each other to make this the case. The short version is that 
one of our clouds is performing upgrades and has been removed from service, and 
we have a large number of gate failures which cause things to reset and start 
over. We have fewer resources than normal and are using them inefficiently. 
Zuul is operating as expected.

Continue reading if you'd like to understand the technical details and find out 
how you can help make this better.

Zuul gates related projects in shared queues. Changes enter these queues and 
are ordered in a speculative future state that Zuul assumes will pass because 
multiple humans have reviewed the changes and said they are good (also they had 
to pass check testing first). Problems arise when tests fail forcing Zuul to 
evict changes from the speculative future state, build a new state, then start 
jobs over again for this new future.

Typically this doesn't happen often and we merge many changes at a time, 
quickly pushing code into our repos. Unfortunately, the results are painful 
when we fail often as we end up rebuilding future states and restarting jobs 
often. Currently we have the gate and release jobs set to the highest priority 
as well so they run jobs before other queues. This means the gate can starve 
other work if it is flaky. We've configured things this way because the gate is 
not supposed to be flaky since we've reviewed things and already passed check 
testing. One of the tools we have in place to make this less painful is that each 
gate queue operates on a window that grows and shrinks, similar to TCP 
slow start. As changes merge we increase the size of the window, and when they 
fail to merge we decrease it. This reduces the size of the future state that 
must be rebuilt and retested on failure when things are persistently flaky.
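As an illustration of that window mechanism, here is a toy Python model. The starting size, increment, and floor are invented for the example and are not Zuul's actual tunables:

```python
# Toy model of a gate queue's sliding window: grow additively on merge,
# shrink multiplicatively on failure (AIMD, as in TCP slow start /
# congestion avoidance). The constants are illustrative only.

class GateWindow:
    def __init__(self, floor=2, start=20):
        self.floor = floor   # window never shrinks below this
        self.size = start    # how many changes run jobs concurrently

    def on_merge(self):
        # Success: widen the window so more changes test in parallel.
        self.size += 1

    def on_failure(self):
        # Failure: halve the window so a flaky gate rebuilds and
        # retests a smaller speculative future state.
        self.size = max(self.floor, self.size // 2)

w = GateWindow()
for outcome in ["merge", "merge", "fail", "merge", "fail", "fail"]:
    w.on_merge() if outcome == "merge" else w.on_failure()
print(w.size)  # prints 3
```

Persistent flakiness therefore collapses throughput quickly, which is why fixing the underlying bugs matters more than rechecking.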

The best way to make this better is to fix the bugs in our software, whether 
that is in the CI system itself or the software being tested. The first step in 
doing that is to identify and track the bugs that we are dealing with. We have 
a tool called elastic-recheck that does this using indexed logs from the jobs. 
The idea there is to go through the list of unclassified failures [0] and 
fingerprint them so that we can track them [1]. With that data available we can 
then prioritize fixing the bugs that have the biggest impact.

Unfortunately, right now our classification rate is very poor (only 15%), which 
makes it difficult to know what exactly is causing these failures. Mriedem and 
I have quickly scanned the unclassified list, and it appears there is a db 
migration testing issue causing these tests to timeout across several projects. 
Mriedem is working to get this classified and tracked which should help, but we 
will also need to fix the bug. On top of that it appears that Glance has flaky 
functional tests (both python2 and python3) which are causing resets and should 
be looked into.

If you'd like to help, let mriedem or myself know and we'll gladly work with 
you to get elasticsearch queries added to elastic-recheck. We are likely less 
help when it comes to fixing functional tests in Glance, but I'm happy to point 
people in the right direction for that as much as I can. If you can take a few 
minutes to do this before/after you issue a recheck it does help quite a bit.

One general thing I've found would be helpful is if projects can clean up the 
deprecation warnings in their log outputs. The persistent "WARNING you used the 
old name for a thing" messages make the logs large and much harder to read to 
find the actual failures.
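One low-tech way for a project to flush those warnings out is to escalate them to errors in the test suite so they fail loudly instead of filling logs. This is a generic Python sketch, not something any particular project mandates; the exempted module name is an assumption:

```python
# Escalate DeprecationWarnings to errors in tests, while allowing
# specific ones (e.g. from a third-party library you can't fix yet).

import warnings

def configure_warnings():
    # Turn DeprecationWarnings into exceptions...
    warnings.simplefilter("error", DeprecationWarning)
    # ...except those raised from a hypothetical third-party module.
    warnings.filterwarnings(
        "default", category=DeprecationWarning, module="thirdparty_lib"
    )

configure_warnings()
try:
    warnings.warn("you used the old name for a thing", DeprecationWarning)
    raised = False
except DeprecationWarning:
    raised = True
print(raised)  # prints True
```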

As a final note this is largely targeted at the OpenStack Integrated gate 
(Nova, Glance, Cinder, Keystone, Swift, Neutron) since that appears to be 
particularly flaky at the moment. The Zuul behavior applies to other gate 
pipelines (OSA, Tripleo, Airship, etc) as does elastic-recheck and related 
tooling. If you find your particular pipeline is flaky I'm more than happy to 
help in that context as well.

[0] http://status.openstack.org/elastic-recheck/data/integrated_gate.html
[1] http://status.openstack.org/elastic-recheck/gate.html

Thank you,
Clark



Re: [openstack-dev] [docs] Nominating Ian Y. Choi for openstack-doc-core

2018-09-19 Thread Andreas Jaeger

On 2018-09-19 20:50, Petr Kovar wrote:

Hi all,

Based on our PTG discussion, I'd like to nominate Ian Y. Choi for
membership in the openstack-doc-core team. I think Ian doesn't need an
introduction, he's been around for a while, recently being deeply involved
in infra work to get us robust support for project team docs translation and
PDF builds.

>

Having Ian on the core team will also strengthen our integration with
the i18n community.

Please let the ML know should you have any objections.


The opposite ;), heartily agree with adding him,

Andreas
--
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126




[openstack-dev] [docs] Nominating Ian Y. Choi for openstack-doc-core

2018-09-19 Thread Petr Kovar
Hi all,

Based on our PTG discussion, I'd like to nominate Ian Y. Choi for
membership in the openstack-doc-core team. I think Ian doesn't need an
introduction, he's been around for a while, recently being deeply involved
in infra work to get us robust support for project team docs translation and
PDF builds. 

Having Ian on the core team will also strengthen our integration with
the i18n community.

Please let the ML know should you have any objections.

Thanks,
pk



[openstack-dev] [Openstack-operators][neutron][fwaas] Removing FWaaS V1 in Stein

2018-09-19 Thread German Eichberger
All,



With the Stein release we will remove support for FWaaS V1 [1]. It has been 
marked deprecated since Liberty (2015) and was an experimental API. It is 
being replaced with FWaaS V2 [2] which has been available since the Newton 
release.


What is Neutron FWaaS?

Firewall-as-a-Service is a neutron project which provides router (L3) and port 
(L2) firewalls to protect networks and VMs. [3]


What is Neutron FWaaS V1?

FWaaS V1 was the first implementation of Firewall-as-a-Service and focused on 
the router port. This implementation has been ported to FWaaS V2.


What is FWaaS V2?

FWaaS V2 extends Firewall-as-a-Service to any neutron port - thus offering the 
same functionality as Security Groups but with a richer API (e.g. deny/reject 
traffic).


Why is FWaaS V1 being removed?

FWaaS V1 has been deprecated since 2015 and with FWaaS V2 being released for 
several cycles it is time to remove FWaaS V1.


How do I migrate?

Existing firewall policies and rules need to be recreated with FWaaS V2. At 
this point we don’t offer an automated migration tool.
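Since there is no automated migration tool, recreating rules means reading each V1 rule and creating an equivalent V2 one. A rough sketch of the field mapping is below; the attribute names are taken from the api-ref links that follow, but verify them against your deployment before relying on this:

```python
# Sketch: build a V2 firewall rule POST body
# (POST /v2.0/fwaas/firewall_rules) from an existing V1 rule dict.
# Field names follow the FWaaS api-ref; treat this as illustrative.

V1_TO_V2_COPY = (
    "name", "description", "protocol", "action", "ip_version",
    "source_ip_address", "destination_ip_address",
    "source_port", "destination_port", "enabled", "shared",
)

def v1_rule_to_v2_payload(v1_rule):
    """Copy the fields that carry over unchanged between V1 and V2."""
    rule = {k: v1_rule[k] for k in V1_TO_V2_COPY if k in v1_rule}
    return {"firewall_rule": rule}

old = {"name": "allow-ssh", "protocol": "tcp", "action": "allow",
       "destination_port": "22", "enabled": True, "ip_version": 4}
print(v1_rule_to_v2_payload(old))
```

V2 policies and firewall groups then reference the new rule IDs; note that V2 also accepts "reject" as an action, which V1 did not.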


[1] 
https://developer.openstack.org/api-ref/network/v2/#fwaas-v1-0-deprecated-fw-firewalls-firewall-policies-firewall-rules


[2] 
https://developer.openstack.org/api-ref/network/v2/#fwaas-v2-0-current-fwaas-firewall-groups-firewall-policies-firewall-rules


[3] https://www.youtube.com/watch?v=9Wkym4BeM4M




Re: [openstack-dev] [User-committee] [tc] Joint UC/TC Meeting

2018-09-19 Thread Doug Hellmann
Excerpts from Chris Dent's message of 2018-09-19 12:31:26 +0100:
> On Tue, 18 Sep 2018, Doug Hellmann wrote:
> 
> > [Redirecting this from the openstack-tc list to the -dev list.]
> > Excerpts from Melvin Hillsman's message of 2018-09-18 17:43:57 -0500:
> >> UC is proposing a joint UC/TC meeting at the end of the month say starting
> >> after Berlin to work more closely together. The last Monday of the month at
> >> 1pm US Central time is current proposal, throwing it out here now for
> >> feedback/discussion, so that would make the first one Monday, November
> >> 26th, 2018.
> 
> I agree that the UC and TC should work more closely together. If the
> best way to do that is to have a meeting then great, let's do it.
> Were you thinking IRC or something else?
> 
> But we probably need to resolve our ambivalence towards meetings. On
> Sunday at the PTG we discussed maybe going back to having a TC
> meeting but didn't really decide (at least as far as I recall) and

My notes say that the chair needs to raise this after the election is
over, so that the newly elected TC members can have input into the
decision.

> didn't discuss in too much depth the reasons why we killed meetings
> in the first place. How would this meeting be different?

I definitely see the usefulness of more regular communication between
the two groups. As Chris points out, we've been trying to avoid
requiring formal synchronous meetings as much as possible, I'd like
to start by listing some of the topics we might discuss. Not all
of them will need to wait for a meeting, and some may be better
suited to an in-person meeting at the forum in Berlin. If we come
up with a list of topics that do make sense for an online discussion,
we can work out how best to handle that.

Doug



[openstack-dev] [neutron] Core status

2018-09-19 Thread Gary Kotton
Hi,
I have recently transitioned to a new role where I will be working on other 
parts of OpenStack. Sadly I do not have the necessary cycles to maintain my 
core responsibilities in the neutron community. Nonetheless I will continue to 
be involved.
Thanks
Gary


[openstack-dev] [senlin] Senlin Monthly(ish) Newsletter Sep 2018

2018-09-19 Thread Duc Truong
HTML: 
https://dkt26111.wordpress.com/2018/09/19/senlin-monthlyish-newsletter-september-2018/

This is the inaugural Senlin monthly(ish) newsletter.  The goal of
the newsletter is to highlight happenings in the Senlin project.  If
you have any feedback or questions regarding the contents, please feel
free to reach out to me or anyone in the #senlin IRC channel.


News


* Senlin weekly meeting time was changed at the beginning of the
  Stein cycle to 5:30 UTC every Friday. Feel free to drop in.
* Two new core members were added to the Senlin project.  Welcome
  jucross and eandersson.
* One new stable reviewer was added for Senlin stable maintenance.
  Welcome chenyb4.
* Autoscaling forum is being proposed for the Berlin Summit
  
(http://lists.openstack.org/pipermail/openstack-dev/2018-September/134770.html).
  Add your comments/feedback to this etherpad:
  https://etherpad.openstack.org/p/autoscaling-integration-and-feedback


Blueprint Status


* Fail fast locked resource
  - https://blueprints.launchpad.net/senlin/+spec/fail-fast-locked-resource
  - Spec was approved and implementation is WIP.

* Multiple detection modes
  - https://blueprints.launchpad.net/senlin/+spec/multiple-detection-modes
  - Spec approval is pending (https://review.openstack.org/#/c/601471/).

* Fail-fast on cooldown for scaling operations
  - Waiting for blueprint/spec submission.

* OpenStackSDK support senlin function test
  - Waiting for blueprint submission.

* Senlin add support use limit return
  - Waiting for blueprint submission.

* Add zun driver in senlin, use zun manager container
  - Waiting for blueprint submission.


Community Goal Status
-

* Python 3
  - All patches by Python 3 goal champions for zuul migration,
  documentation and unit test changes have been merged.

* Upgrade Checkers
  - No work has started on this. If you'd like to help out with this
  task, please let me know.


Recently Merged Changes
---

* Bug# 174 was fixed (https://review.openstack.org/#/c/594643/)
* Improvements to node poll URL mode in health policy
  (https://review.openstack.org/#/c/588674/)



Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-09-19 Thread Lance Bragstad
johnsom (from octavia) had a good idea, which was to use the service types
that are defined already [0].

I like this for three reasons, specifically. First, it's already a known
convention for services that we can just reuse. Second, it includes a
spacing convention (e.g. load-balancer vs load_balancer). Third,
it's relatively short since it doesn't include "os" or "api".

So long as there isn't any objection to that, we can start figuring out how
we want to do the method and resource parts. I pulled some policies into a
place where I could try and query them for specific patterns and existing
usage [1]. With the representation that I have (nova, neutron, glance,
cinder, keystone, mistral, and octavia):

- *create* is favored over post (105 occurrences to 7)
- *list* is favored over get_all (74 occurrences to 28)
- *update* is favored over put/patch (91 occurrences to 10)

From this perspective, using the HTTP method might be slightly redundant
for projects using the DocumentedRuleDefault object from oslo.policy since
it contains the URL and method for invoking the policy. It also might
differ depending on the service implementing the API (some might use put
instead of patch to update a resource). Conversely, using the HTTP method
in the policy name itself doesn't require use of DocumentedRuleDefault,
although its usage is still recommended.

Thoughts on using create, list, update, and delete as opposed to post, get,
put, patch, and delete in the naming convention?

[0] https://service-types.openstack.org/service-types.json
[1]
https://gist.github.com/lbragstad/5000b46f27342589701371c88262c35b#file-policy-names-yaml
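Put together, the convention under discussion could be sketched like this; the verb mapping is illustrative, not an agreed standard:

```python
# Sketch of the proposed policy name convention:
# <service-type>:<resource>:<verb>, using create/list/update/delete
# instead of raw HTTP methods. The mapping table is illustrative.

HTTP_TO_VERB = {
    ("POST", False): "create",
    ("GET", True): "list",      # GET on a collection
    ("GET", False): "get",      # GET on a single resource
    ("PUT", False): "update",
    ("PATCH", False): "update",
    ("DELETE", False): "delete",
}

def policy_name(service_type, resource, method, collection=False):
    """Build a policy name from the official service type, the API
    resource, and the HTTP method being invoked."""
    verb = HTTP_TO_VERB[(method.upper(), collection)]
    return f"{service_type}:{resource}:{verb}"

print(policy_name("load-balancer", "loadbalancer", "POST"))
# prints load-balancer:loadbalancer:create
```

Mapping PUT and PATCH to the same verb is the point made above: the name stays stable even when services differ on which method updates a resource.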

On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad  wrote:

> If we consider dropping "os", should we entertain dropping "api", too? Do
> we have a good reason to keep "api"?
>
> I wouldn't be opposed to simple service types (e.g "compute" or
> "loadbalancer").
>
> On Sat, Sep 15, 2018 at 9:01 AM Morgan Fainberg 
> wrote:
>
>> I am generally opposed to needlessly prefixing things with "os".
>>
>> I would advocate to drop it.
>>
>>
>> On Fri, Sep 14, 2018, 20:17 Lance Bragstad  wrote:
>>
>>> Ok - yeah, I'm not sure what the history behind that is either...
>>>
>>> I'm mainly curious if that's something we can/should keep or if we are
>>> opposed to dropping 'os' and 'api' from the convention (e.g.
>>> load-balancer:loadbalancer:post as opposed to
>>> os_load-balancer_api:loadbalancer:post) and just sticking with the
>>> service-type?
>>>
>>> On Fri, Sep 14, 2018 at 2:16 PM Michael Johnson 
>>> wrote:
>>>
 I don't know for sure, but I assume it is short for "OpenStack" and
 prefixing OpenStack policies vs. third party plugin policies for
 documentation purposes.

 I am guilty of borrowing this from existing code examples[0].

 [0]
 http://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/policy-in-code.html

 Michael
 On Fri, Sep 14, 2018 at 8:46 AM Lance Bragstad 
 wrote:
 >
 >
 >
 > On Thu, Sep 13, 2018 at 5:46 PM Michael Johnson 
 wrote:
 >>
 >> In Octavia I selected[0] "os_load-balancer_api:loadbalancer:post"
>> which maps to the "os_<service>_api:<resource>:<method>" format.
 >
 >
 > Thanks for explaining the justification, Michael.
 >
 > I'm curious if anyone has context on the "os-" part of the format?
 I've seen that pattern in a couple different projects. Does anyone know
 about its origin? Was it something we converted to our policy names because
 of API names/paths?
 >
 >>
 >>
 >> I selected it as it uses the service-type[1], references the API
 >> resource, and then the method. So it maps well to the API
 reference[2]
 >> for the service.
 >>
 >> [0]
 https://docs.openstack.org/octavia/latest/configuration/policy.html
 >> [1] https://service-types.openstack.org/
 >> [2]
 https://developer.openstack.org/api-ref/load-balancer/v2/index.html#create-a-load-balancer
 >>
 >> Michael
 >> On Wed, Sep 12, 2018 at 12:52 PM Tim Bell  wrote:
 >> >
 >> > So +1
 >> >
 >> >
 >> >
 >> > Tim
 >> >
 >> >
 >> >
 >> > From: Lance Bragstad 
 >> > Reply-To: "OpenStack Development Mailing List (not for usage
 questions)" 
 >> > Date: Wednesday, 12 September 2018 at 20:43
 >> > To: "OpenStack Development Mailing List (not for usage questions)"
 , OpenStack Operators <
 openstack-operat...@lists.openstack.org>
 >> > Subject: [openstack-dev] [all] Consistent policy names
 >> >
 >> >
 >> >
 >> > The topic of having consistent policy names has popped up a few
 times this week. Ultimately, if we are to move forward with this, we'll
 need a convention. To help with that a little bit I started an etherpad [0]
 that includes links to policy references, basic conventions *within* that
 service, and some examples of each. I got through quite a few projects 

Re: [openstack-dev] [Neutron] Removing external_bridge_name config option

2018-09-19 Thread Akihiro Motoki
Hi,

I would like to share some information to help the migration from
external_network_bridge.

The background of the deprecation is described in
https://bugs.launchpad.net/neutron/+bug/1491668

I also shared a slide to explain the detail.
https://www.slideshare.net/ritchey98/neutron-brex-is-now-deprecated-what-is-modern-way
Neutron: br-ex is now deprecated! what is modern way?

I hope these help you to push away the usage of external_network_bridge.

Thanks,
Akihiro Motoki (IRC: amotoki)
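For reference, the usual replacement is to leave the L3 agent's bridge setting empty and map a physical network to the bridge on the OVS agent instead. A minimal sketch follows; the bridge and physnet names are examples, see the slides linked above for details:

```ini
# l3_agent.ini - the deprecated option; remove it (or leave it empty):
[DEFAULT]
# external_network_bridge = br-ex    <- delete this line

# openvswitch_agent.ini - map a physical network to the bridge instead:
[ovs]
bridge_mappings = physnet1:br-ex
```

The external network is then created against that physical network (e.g. provider:physical_network=physnet1), which lets neutron wire the router's gateway port through br-ex like any other provider network.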

2018年9月19日(水) 23:02 Slawomir Kaplonski :
>
> Hi,
>
> Some time ago I proposed patch [1] to remove config option 
> „external_network_bridge”.
> This option was deprecated for removal in Ocata, so I think it's time to get 
> rid of it finally.
>
> There are quite a few projects which still use this option [2]. I will try to 
> propose patches for those projects to remove it from there as well, but if You 
> are the maintainer of such a project, it would be great if You could remove it. If 
> You do, please use the same topic as in [1] - it will make it easier for me to 
> track which projects have already removed it.
> Thx a lot in advance for any help :)
>
> [1] https://review.openstack.org/#/c/567369
> [2] 
> http://codesearch.openstack.org/?q=external_network_bridge&i=nope&files=&repos=
>
> —
> Slawek Kaplonski
> Senior software engineer
> Red Hat
>
>


Re: [openstack-dev] [Openstack-sigs] [tc]Global Reachout Proposal

2018-09-19 Thread Melvin Hillsman
Regarding some web clients that are potentially useful

https://webchat.freenode.net/
  - Zane mentioned this already and I can say I tried/used it some time ago
until I opted for CLI/alternatives
https://riot.im (iOS and Android apps available along with online client)
  - I find it a bit sluggish at times, others have not; either way it is a
decent alternative
https://thelounge.chat/
  - have not tried it yet but looks promising especially self-hosted option
https://irccloud.com
  - what I currently use; I do believe it can be blocked, so I am looking into
riot and thelounge tbh


On Wed, Sep 19, 2018 at 12:18 PM Zane Bitter  wrote:

> On 18/09/18 9:10 PM, Jaesuk Ahn wrote:
> > On Wed, Sep 19, 2018 at 5:30 AM Zane Bitter wrote:
>
> Restoring the whole quote here because I accidentally sent the original
> to the -sigs list only and not the -dev list.
>
> >> As others have mentioned, I think this is diving into solutions when we
> haven't defined the problems. I know you mentioned it briefly in the PTG
> session, but that context never made it to the review or the mailing list.
> >>
> >> So AIUI the issue you're trying to solve here is that the TC members
> seem distant and inaccessible to Chinese contributors because we're not on
> the same social networks they are?
> >>
> >> Perhaps there are others too?
> >>
> >> Obvious questions to ask from there would be:
> >>
> >> - Whether this is the most important issue facing contributors from the
> APAC region
> >>
> >> - To what extent the proposed solution is expected to help
> >
> >
> > I do agree with Zane on the above point.
>
> For the record, I didn't express an opinion. I'm just pointing out what
> the questions are.
>
> > As one of OpenStack participants from Asia region, I will put my
> > personal opinion.
> > IRC and the ML have been a unified and standard way of communication in
> > the OpenStack Community, and that has been a good way to encourage "open
> > communication" on a unified method wherever you are from, or whatever
> > background you have. If the whole community starts recognizing some other
> > tools (say WeChat) as recommended alternative communication methods
> > because there are many people there, ironically, it might be a way to
> > break "diversity" and "openness" we want to embrace.
> >
> > Using whatever social media (or tools) in a specific region due to any
> > reason is not a problem. Anyone is free to use anything. Only thing we
> > need to make sure is, if you want to communicate officially with the
> > whole community, there is a very well defined and unified way to do it.
> > This is currently IRC and the ML. Some Korean devs have difficulties using
> > IRC. However, there is no perfect tool out there in this world, and
> > we accept the reasons why the community selected IRC as the official tool.
> >
> > But, that being said, There are some things I am facing with IRC from
> > here in Korea
> >
> > As a person from Asia, I do have some pain points. Because of time
> > differences, often, I have to do archive searching since most of
> > conversations happened while I am sleeping. IRC is not a good tool to
> > perform "search backlog". Although there is message archive you can dig,
> > it is still hard. This is a problem. I do love to see any technical
> > solution for me to efficiently and easily go through irc backlog, like
> > most of modern chat tools.
> >
> > Secondly, IRC is not popular even in the dev community here in Korea.
> > In addition, in order to properly use irc, you need to do extra work,
> > something like setting up bouncing server. I had to do google search to
> > figure out how to use it.
>
> I think part of the disconnect here is that people have different ideas
> about what IRC (and chat in general) is for.
>
> For me it's a way to conduct synchronous conversations. These tend to go
> badly on the mailing list (really long threads of 1 sentence per
> message) or on code review (have to keep refreshing), so it's good that
> we have another tool to do this. I answer a lot of user questions,
> clarify comments on patches, and obviously join team meetings in IRC.
>
> The key part is 'synchronous' though. If I'm not there, the conversation
> is not going to be synchronous. I don't run a bouncer, although I
> generally leave my computer running when I'm not working so you'll often
> (but not always) be able to ping me, and I'll usually look back to see
> if it was something important. Otherwise it's 50-50 whether I'll even
> bother to read scrollback, and certainly not for more than a couple of
> channels.
>
> Other people, however, have a completely different perspective: they
> want a place where they are guaranteed to be reachable at any time (even
> if they don't see it until later) and the entire record is always right
> there. I think Slack was built for those kinds of people. You would have
> to drag me kicking and screaming into Slack even if it weren't
> proprietary software.

Re: [openstack-dev] [Openstack-sigs] [tc]Global Reachout Proposal

2018-09-19 Thread Zane Bitter

On 18/09/18 9:10 PM, Jaesuk Ahn wrote:
On Wed, Sep 19, 2018 at 5:30 AM Zane Bitter wrote:


Restoring the whole quote here because I accidentally sent the original 
to the -sigs list only and not the -dev list.



As others have mentioned, I think this is diving into solutions when we haven't 
defined the problems. I know you mentioned it briefly in the PTG session, but 
that context never made it to the review or the mailing list.

So AIUI the issue you're trying to solve here is that the TC members seem 
distant and inaccessible to Chinese contributors because we're not on the same 
social networks they are?

Perhaps there are others too?

Obvious questions to ask from there would be:

- Whether this is the most important issue facing contributors from the APAC 
region

- To what extent the proposed solution is expected to help



I do agree with Zane on the above point.


For the record, I didn't express an opinion. I'm just pointing out what 
the questions are.


As one of OpenStack participants from Asia region, I will put my 
personal opinion.
IRC and the ML have been a unified and standard way of communication in 
the OpenStack Community, and that has been a good way to encourage "open 
communication" on a unified method wherever you are from, or whatever 
background you have. If the whole community starts recognizing some other 
tools (say WeChat) as recommended alternative communication methods 
because there are many people there, ironically, it might be a way to 
break "diversity" and "openness" we want to embrace.


Using whatever social media (or tools) in a specific region due to any 
reason is not a problem. Anyone is free to use anything. Only thing we 
need to make sure is, if you want to communicate officially with the 
whole community, there is a very well defined and unified way to do it. 
This is currently IRC and the ML. Some Korean devs have difficulties using 
IRC. However, there is no perfect tool out there in this world, and 
we accept the reasons why the community selected IRC as the official tool.


But, that being said, There are some things I am facing with IRC from 
here in Korea


As a person from Asia, I do have some pain points. Because of time 
differences, often, I have to do archive searching since most of 
conversations happened while I am sleeping. IRC is not a good tool to 
perform "search backlog". Although there is message archive you can dig, 
it is still hard. This is a problem. I do love to see any technical 
solution for me to efficiently and easily go through irc backlog, like 
most of modern chat tools.


Secondly, IRC is not popular even in the dev community here in Korea. 
In addition, in order to properly use irc, you need to do extra work, 
something like setting up bouncing server. I had to do google search to 
figure out how to use it.


I think part of the disconnect here is that people have different ideas 
about what IRC (and chat in general) is for.


For me it's a way to conduct synchronous conversations. These tend to go 
badly on the mailing list (really long threads of 1 sentence per 
message) or on code review (have to keep refreshing), so it's good that 
we have another tool to do this. I answer a lot of user questions, 
clarify comments on patches, and obviously join team meetings in IRC.


The key part is 'synchronous' though. If I'm not there, the conversation 
is not going to be synchronous. I don't run a bouncer, although I 
generally leave my computer running when I'm not working so you'll often 
(but not always) be able to ping me, and I'll usually look back to see 
if it was something important. Otherwise it's 50-50 whether I'll even 
bother to read scrollback, and certainly not for more than a couple of 
channels.


Other people, however, have a completely different perspective: they 
want a place where they are guaranteed to be reachable at any time (even 
if they don't see it until later) and the entire record is always right 
there. I think Slack was built for those kinds of people. You would have 
to drag me kicking and screaming into Slack even if it weren't 
proprietary software.


I don't know where WeChat falls on that spectrum. But maybe part of the 
issue is that we're creating too high an expectation of what it means to 
participate in the community (e.g. if you're not going to set up a 
bouncer and be reachable 24/7 then you might as well not get involved at 
all - this is 100% untrue). I've seen several assertions, including in 
the review, that any decisions must be documented on the mailing list or 
IRC, and I'm not sure I agree. IMHO, any decisions should be documented 
on the mailing list, period.


I'd love to see more participation on the mailing list. Since it is 
asynchronous already it's somewhat friendlier to those in APAC time 
zones (although there are still issues, real or perceived, with 
decisions being reached before anyone on that side of the world has a 
chance to weigh in), and a 

Re: [openstack-dev] Nominating Tetsuro Nakamura for placement-core

2018-09-19 Thread Matt Riedemann

On 9/19/2018 10:25 AM, Chris Dent wrote:



I'd like to nominate Tetsuro Nakamura for membership in the
placement-core team. Throughout placement's development Tetsuro has
provided quality reviews; done the hard work of creating rigorous
functional tests, making them fail, and fixing them; and implemented
some of the complex functionality required at the persistence layer.
He's aware of and respects the overarching goals of placement and has
demonstrated pragmatism when balancing those goals against the
requirements of nova, blazar and other projects.

Please follow up with a +1/-1 to express your preference. No need to
be an existing placement core, everyone with an interest is welcome.


Soft +1 from me given I mostly have to defer to those who work more 
closely with Tetsuro. I agree he's a solid contributor, works hard, 
finds issues, fixes them before being asked, etc. That's awesome. 
Reminds me a lot of gibi when we nominated him.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Super fun unshelve image_ref bugs

2018-09-19 Thread Matt Riedemann

On 12/1/2017 2:47 PM, Matt Riedemann wrote:
Andrew Laski also mentioned in IRC that we didn't replace the original 
instance.image_ref with the shelved image id because the shelve 
operation should be transparent to the end user, they have the same 
image (not really), same volumes, same IPs, etc once they unshelve. And 
he mentioned that if you rebuild, for example, you'd then rebuild to the 
original image instead of the shelved snapshot image.


I'm not sure how much I agree with that rebuild argument. I understand 
it, but I'm not sure I agree with it. I think it's much easier to just 
track things for what they are, which means saying if you create a guest 
from a given image id, then track that in the instances table, don't lie 
about it being something else.


Dredging this back up since it will affect cross-cell resize which will 
rely on shelve/unshelve.


I had a thought recently (and noted in 
https://bugs.launchpad.net/nova/+bug/1732428) that the RequestSpec 
points at the original image used to create the server, or the image 
from the last rebuild (if the server was rebuilt with a new image). 
What if we used that during rebuilds rather than the instance.image_ref?


Then unshelve could leave the instance.image_ref pointing at the shelve 
snapshot image (since that's what is actually backing the server at the 
time of unshelve and should fix the resize qcow2 bug linked above) but 
rebuild could still rebuild from the original (or last rebuild) image 
rather than the shelve snapshot image?


The only hiccup I'm aware of is we then still need to *not* delete the 
snapshot image on unshelve that the instance is pointing at, which means 
shelve snapshot images could pile up over time, especially with 
cross-cell resize. Is that a problem? If so, could we have a periodic 
that cleans up the old snapshot images based on some configured value?


--

Thanks,

Matt



Re: [openstack-dev] [ironic][edge] Notes from the PTG

2018-09-19 Thread Jay Pipes

On 09/19/2018 11:03 AM, Jim Rollenhagen wrote:
On Wed, Sep 19, 2018 at 8:49 AM, Jim Rollenhagen wrote:


Tuesday: edge


Since cdent asked in IRC, when we talk about edge and far edge, we 
defined these roughly like this:

https://usercontent.irccloud-cdn.com/file/NunkkS2y/edge_architecture1.JPG


Far out, man.

-jay



Re: [openstack-dev] Nominating Tetsuro Nakamura for placement-core

2018-09-19 Thread Jay Pipes

On 09/19/2018 11:25 AM, Chris Dent wrote:

I'd like to nominate Tetsuro Nakamura for membership in the
placement-core team. Throughout placement's development Tetsuro has
provided quality reviews; done the hard work of creating rigorous
functional tests, making them fail, and fixing them; and implemented
some of the complex functionality required at the persistence layer.
He's aware of and respects the overarching goals of placement and has
demonstrated pragmatism when balancing those goals against the
requirements of nova, blazar and other projects.

Please follow up with a +1/-1 to express your preference. No need to
be an existing placement core, everyone with an interest is welcome.


+1



Re: [openstack-dev] [cinder] Berlin Forum Proposals

2018-09-19 Thread Gorka Eguileor
On 19/09, Jay S Bryant wrote:
> Gorka,
>
> Oh man!  Sorry for the duplication.  I will update the link on the Forum
> page if you are able to move your content over.  I think it will confuse
> people less if we use the page I most recently sent out.  Does that make
> sense?
>
Hi Jay,

Yup, it makes sense.

I moved the contents and updated the wiki to point to your etherpad.

> Thanks for catching this mistake!
>

It was my mistake for not mentioning the existing etherpad during the
PTG... XD

Cheers,
Gorka.


> Jay
>
>
> On 9/19/2018 4:42 AM, Gorka Eguileor wrote:
> > On 18/09, Jay S Bryant wrote:
> > > Team,
> > >
> > > I have created an etherpad for our Forum Topic Planning:
> > > https://etherpad.openstack.org/p/cinder-berlin-forum-proposals
> > >
> > > Please add your ideas to the etherpad.  Thank you!
> > >
> > > Jay
> > >
> > Hi Jay,
> >
> > After our last IRC meeting, a couple of weeks ago, I created an etherpad
> > [1] and added it to the Forum wiki [2] (though I failed to mention it).
> >
> > I had added a possible topic to this etherpad [1], but I can move it to
> > yours and update the wiki if you like.
> >
> > Cheers,
> > Gorka.
> >
> >
> > [1]: https://etherpad.openstack.org/p/cinder-forum-stein
> > [2]: https://wiki.openstack.org/wiki/Forum/Berlin2018
>



Re: [openstack-dev] Nominating Tetsuro Nakamura for placement-core

2018-09-19 Thread Eric Fried
+1

On 09/19/2018 10:25 AM, Chris Dent wrote:
> 
> 
> I'd like to nominate Tetsuro Nakamura for membership in the
> placement-core team. Throughout placement's development Tetsuro has
> provided quality reviews; done the hard work of creating rigorous
> functional tests, making them fail, and fixing them; and implemented
> some of the complex functionality required at the persistence layer.
> He's aware of and respects the overarching goals of placement and has
> demonstrated pragmatism when balancing those goals against the
> requirements of nova, blazar and other projects.
> 
> Please follow up with a +1/-1 to express your preference. No need to
> be an existing placement core, everyone with an interest is welcome.
> 
> Thanks.
> 
> 
> 
> 



Re: [openstack-dev] Nominating Tetsuro Nakamura for placement-core

2018-09-19 Thread Ed Leafe
On Sep 19, 2018, at 10:25 AM, Chris Dent  wrote:
> 
> I'd like to nominate Tetsuro Nakamura for membership in the
> placement-core team. 

I’m not a core, but if I were, I’d +1 that.


-- Ed Leafe








Re: [openstack-dev] [cinder] Berlin Forum Proposals

2018-09-19 Thread Jay S Bryant

Gorka,

Oh man!  Sorry for the duplication.  I will update the link on the Forum 
page if you are able to move your content over.  I think it will confuse 
people less if we use the page I most recently sent out.  Does that make 
sense?


Thanks for catching this mistake!

Jay


On 9/19/2018 4:42 AM, Gorka Eguileor wrote:

On 18/09, Jay S Bryant wrote:

Team,

I have created an etherpad for our Forum Topic Planning:
https://etherpad.openstack.org/p/cinder-berlin-forum-proposals

Please add your ideas to the etherpad.  Thank you!

Jay


Hi Jay,

After our last IRC meeting, a couple of weeks ago, I created an etherpad
[1] and added it to the Forum wiki [2] (though I failed to mention it).

I had added a possible topic to this etherpad [1], but I can move it to
yours and update the wiki if you like.

Cheers,
Gorka.


[1]: https://etherpad.openstack.org/p/cinder-forum-stein
[2]: https://wiki.openstack.org/wiki/Forum/Berlin2018





[openstack-dev] Nominating Tetsuro Nakamura for placement-core

2018-09-19 Thread Chris Dent



I'd like to nominate Tetsuro Nakamura for membership in the
placement-core team. Throughout placement's development Tetsuro has
provided quality reviews; done the hard work of creating rigorous
functional tests, making them fail, and fixing them; and implemented
some of the complex functionality required at the persistence layer.
He's aware of and respects the overarching goals of placement and has
demonstrated pragmatism when balancing those goals against the
requirements of nova, blazar and other projects.

Please follow up with a +1/-1 to express your preference. No need to
be an existing placement core, everyone with an interest is welcome.

Thanks.

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [ironic][edge] Notes from the PTG

2018-09-19 Thread Jim Rollenhagen
On Wed, Sep 19, 2018 at 8:49 AM, Jim Rollenhagen 
wrote:
>
> Tuesday: edge
>

Since cdent asked in IRC, when we talk about edge and far edge, we defined
these roughly like this:
https://usercontent.irccloud-cdn.com/file/NunkkS2y/edge_architecture1.JPG

// jim


Re: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper"

2018-09-19 Thread Monty Taylor

On 09/19/2018 09:37 AM, Colleen Murphy wrote:

On Wed, Sep 19, 2018, at 4:23 PM, Monty Taylor wrote:

On 09/19/2018 08:25 AM, Chris Dent wrote:





also, cmurphy has been working on updating some of keystone's legacy
jobs recently:

https://review.openstack.org/602452

which might also be a source for copying from.



Disclaimer before anyone blindly copies: https://bit.ly/2vq26SR


Bah. Blindly copy all the things!!!



[openstack-dev] [ironic][edge] Notes from the PTG

2018-09-19 Thread Jim Rollenhagen
I wrote up some notes from my perspective at the PTG for some internal
teams and figured I may as well share them here. They're primarily from the
ironic and edge WG rooms. Fairly raw, very long, but hopefully useful to
someone. Enjoy.

Tuesday: edge

Edge WG (IMHO) has historically just talked about use cases, hand-waved a
bit, and jumped to requiring an autonomous control plane per edge site -
thus spending all of their time talking about how they will make glance and
keystone sync data between control planes.

penick described roughly what we do with keystone/athenz and how that can
be used in a federated keystone deployment to provide autonomy for any
control plane, but also a single view via a global keystone.

penick and I both kept pushing for people to define a real architecture,
and we ended up with 10-15 people huddled around an easel for most of the
afternoon. Of note:

- Windriver (and others?) refuse to budge on the many control plane thing
- This means that they will need some orchestration tooling up top in
the main DC / client machines to even come close to reasonably managing all
of these sites
- They will probably need some syncing tooling
- glance->glance isn’t a thing, no matter how many people say it is.
- Glance PTL recommends syncing metadata outside of glance process, and
a global(ly distributed?) glance backend.
- We also defined the single pane of glass architecture that Oath plans to
deploy
- Okay with losing connectivity from central control plane to single
edge site
- Each edge site is a cell
- Each far edge site is just compute nodes
- Still may want to consider image distribution to edge sites so we
don’t have to go back to main DC?
- Keystone can be distributed the same as first architecture
- Nova folks may start investigating putting API hosts at the cell
level to get the best of both worlds - if there’s a network partition, can
still talk to cell API to manage things
- Need to think about removing the need for rabbitmq between edge and
far edge
- Kafka was suggested in the edge room for oslo.messaging in general
- Etcd watchers may be another option for an o.msg driver
- Other options are more invasive into nova - they involve
changing how nova-compute talks to conductor (etcd, etc.) or even putting
REST APIs in nova-compute (and nova-conductor?)
- Neutron is going to work on an OVS “superagent” - superagent does
the RPC handling, talks some other way to child agents. Intended to scale
to thousands of children. Primary use case is smart nics but seems like a
win for the edge case as well.

penick took an action item to draw up the architecture diagrams in a
digestible format.

Wednesday: ironic things

Started with a retrospective. See
https://etherpad.openstack.org/p/ironic-stein-ptg-retrospective for the
notes - there wasn’t many surprising things here. We did discuss trying to
target some quick wins for the beginning of the cycle, so that we didn’t
have all of our features trying to land at the end. Using wsgi with the
ironic-api was mentioned as a potential regression, but we agreed it’s a
config/documentation issue. I took an action to make a task to document
this better.

Next we quickly reviewed our vision doc, and people didn’t have much to say
about it.

Metalsmith: it’s a thing, it’s being included into the ironic project.
Dmitry is open to optionally supporting placement. Multiple instances will
be a feature in the future. Otherwise mostly feature complete, goal is to
keep it simple.

Networking-ansible: Red Hat is building tooling that integrates with upstream
ansible modules for networking gear. Kind of an alternative to n-g-s. Not
really much on plans here; RH just wanted to introduce it to the community.
Some discussion about it possibly replacing n-g-s later, but no hard plans.

Deploy steps/templates: we talked about what the next steps are, and what
an MVP looks like. Deploy templates are triggered by the traits that nodes
are scheduled against, and can add steps before or after (or in between?)
the default deploy steps. We agreed that we should add a RAID deploy step,
with standing questions for how arguments are passed to that deploy step,
and what the defaults look like. Myself and mgoddard took an action item to
open an RFE for this. We also agreed that we should start thinking about
how the current (only) deploy step should be split into multiple steps.
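As a strawman for that RFE, a trait-matched deploy template carrying a RAID step might be shaped something like the sketch below. Every field name and value here is illustrative, not from any agreed design - pinning down the actual schema (and how args and defaults are passed) is exactly what the RFE needs to decide.

```yaml
# hypothetical deploy template, matched when a node is scheduled
# against the CUSTOM_RAID1 trait
name: CUSTOM_RAID1
steps:
  - interface: raid
    step: apply_configuration
    args:
      logical_disks:
        - size_gb: MAX
          raid_level: "1"
    # priority would slot this step before/after the default deploy steps
    priority: 85
```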

Graphical console: we discussed what the next steps are for this work. We
agreed that we should document the interface and what is returned (a URL),
and also start working on a redfish driver for graphical consoles. We also
noted that we can test in the gate with qemu, but we only need to test that
a correct URL is returned, not that the console actually works (because we
don’t really care that qemu’s console works).

Python 3: we talked about the changes to our jobs that are needed. We
agreed to use the base name of the jobs 

Re: [openstack-dev] [nova] When can/should we change additionalProperties=False in GET /servers(/detail)?

2018-09-19 Thread Matt Riedemann

On 9/18/2018 12:26 PM, Matt Riedemann wrote:

On 9/17/2018 9:41 PM, Ghanshyam Mann wrote:
On Tue, 18 Sep 2018 09:33:30 +0900, Alex Xu wrote:
  > That only means that after 599276, only the servers API and 
os-instance-action API stop accepting undefined query parameters.
  > What I'm thinking about is checking all the APIs, adding 
json-query-param checking with additionalProperties=True for any API 
that doesn't have it yet, and then using another microversion to set 
additionalProperties to False, so the whole Nova API becomes consistent.


I too vote for doing it for all the other APIs together. Restricting 
unknown query or request params is very useful for API consistency. 
Item #1 in this etherpad: https://etherpad.openstack.org/p/nova-api-cleanup


If you would like, I can propose a quick spec for that; if the 
response is positive we can do it all together and skip doing it in 
599276, otherwise we do it just for GET /servers in 599276.


-gmann


I don't care too much about changing all of the other 
additionalProperties=False in a single microversion given we're already 
kind of inconsistent with this in a few APIs. Consistency is ideal, but 
I thought we'd be lumping in other cleanups from the etherpad into the 
same microversion/spec which will likely slow it down during spec 
review. For example, I'd really like to get rid of the weird server 
response field prefixes like "OS-EXT-SRV-ATTR:". Would we put those into 
the same mass cleanup microversion / spec or split them into individual 
microversions? I'd prefer not to see an explosion of microversions for 
cleaning up oddities in the API, but I could see how doing them all in a 
single microversion could be complicated.


Just an update on https://review.openstack.org/#/c/599276/ - the change 
is approved. We left additionalProperties=True in the GET 
/servers(/detail) APIs for consistency with 2.5 and 2.26, and for 
expediency in just getting the otherwise pretty simple change approved.


--

Thanks,

Matt



Re: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper"

2018-09-19 Thread Colleen Murphy
On Wed, Sep 19, 2018, at 4:23 PM, Monty Taylor wrote:
> On 09/19/2018 08:25 AM, Chris Dent wrote:
> > 

> also, cmurphy has been working on updating some of keystone's legacy 
> jobs recently:
> 
> https://review.openstack.org/602452
> 
> which might also be a source for copying from.
> 

Disclaimer before anyone blindly copies: https://bit.ly/2vq26SR



Re: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper"

2018-09-19 Thread Monty Taylor

On 09/19/2018 09:23 AM, Monty Taylor wrote:

On 09/19/2018 08:25 AM, Chris Dent wrote:


I have a patch in progress to add some simple integration tests to
placement:

 https://review.openstack.org/#/c/601614/

They use https://github.com/cdent/gabbi-tempest . The idea is that
the method for adding more tests is to simply add more yaml in
gate/gabbits, without needing to worry about adding to or think
about tempest.

What I have at that patch works; there are two yaml files, one of
which goes through the process of confirming the existence of a
resource provider and inventory, booting a server, seeing a change
in allocations, resizing the server, seeing a change in allocations.

But this is kludgy in a variety of ways and I'm hoping to get some
help or pointers to the right way. I'm posting here instead of
asking in IRC as I assume other people confront these same
confusions. The issues:

* The associated playbooks are cargo-culted from stuff labelled
   "legacy" that I was able to find in nova's jobs. I get the
   impression that these are more verbose and duplicative than they
   need to be and are not aligned with modern zuul v3 coolness.


Yes. Your life will be much better if you do not make more legacy jobs. 
They are brittle and hard to work with.


New jobs should either use the devstack base job, the devstack-tempest 
base job or the devstack-tox-functional base job - depending on what 
things are intended.


You might want to check out:

https://docs.openstack.org/devstack/latest/zuul_ci_jobs_migration.html

also, cmurphy has been working on updating some of keystone's legacy 
jobs recently:


https://review.openstack.org/602452

which might also be a source for copying from.


* It takes an age for the underlying devstack to build, I can
   presumably save some time by installing fewer services, and making
   it obvious how to add more when more are required. What's the
   canonical way to do this? Mess with {enable,disable}_service, cook
   the ENABLED_SERVICES var, do something with required_projects?


http://git.openstack.org/cgit/openstack/openstacksdk/tree/.zuul.yaml#n190

Has an example of disabling services, of adding a devstack plugin, and 
of adding some lines to localrc.



http://git.openstack.org/cgit/openstack/openstacksdk/tree/.zuul.yaml#n117

Has some more complex config bits in it.

In your case, I believe you want to have parent: devstack-tempest 
instead of parent: devstack-tox-functional




* This patch, and the one that follows it [1] dynamically install
   stuff from pypi in the post test hooks, simply because that was
   the quick and dirty way to get those libs in the environment.
   What's the clean and proper way? gabbi-tempest itself needs to be
   in the tempest virtualenv.


This I don't have an answer for. I'm guessing this is something one 
could do with a tempest plugin?


K. This:

http://git.openstack.org/cgit/openstack/neutron-tempest-plugin/tree/.zuul.yaml#n184

Has an example of a job using a tempest plugin.


* The post.yaml playbook which gathers up logs seems like a common
   thing, so I would hope could be DRYed up a bit. What's the best
   way to that?


Yup. Legacy devstack-gate based jobs are pretty terrible.

You can delete the entire post.yaml if you move to the new devstack base 
job.


The base devstack job has a much better mechanism for gathering logs.


Thanks very much for any input.

[1] perf logging of a loaded placement: 
https://review.openstack.org/#/c/602484/










Re: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper"

2018-09-19 Thread Monty Taylor

On 09/19/2018 08:25 AM, Chris Dent wrote:


I have a patch in progress to add some simple integration tests to
placement:

     https://review.openstack.org/#/c/601614/

They use https://github.com/cdent/gabbi-tempest . The idea is that
the method for adding more tests is to simply add more yaml in
gate/gabbits, without needing to worry about adding to or think
about tempest.

What I have at that patch works; there are two yaml files, one of
which goes through the process of confirming the existence of a
resource provider and inventory, booting a server, seeing a change
in allocations, resizing the server, seeing a change in allocations.

But this is kludgy in a variety of ways and I'm hoping to get some
help or pointers to the right way. I'm posting here instead of
asking in IRC as I assume other people confront these same
confusions. The issues:

* The associated playbooks are cargo-culted from stuff labelled
   "legacy" that I was able to find in nova's jobs. I get the
   impression that these are more verbose and duplicative than they
   need to be and are not aligned with modern zuul v3 coolness.


Yes. Your life will be much better if you do not make more legacy jobs. 
They are brittle and hard to work with.


New jobs should either use the devstack base job, the devstack-tempest 
base job or the devstack-tox-functional base job - depending on what 
things are intended.


You might want to check out:

https://docs.openstack.org/devstack/latest/zuul_ci_jobs_migration.html

also, cmurphy has been working on updating some of keystone's legacy 
jobs recently:


https://review.openstack.org/602452

which might also be a source for copying from.


* It takes an age for the underlying devstack to build, I can
   presumably save some time by installing fewer services, and making
   it obvious how to add more when more are required. What's the
   canonical way to do this? Mess with {enable,disable}_service, cook
   the ENABLED_SERVICES var, do something with required_projects?


http://git.openstack.org/cgit/openstack/openstacksdk/tree/.zuul.yaml#n190

Has an example of disabling services, of adding a devstack plugin, and 
of adding some lines to localrc.



http://git.openstack.org/cgit/openstack/openstacksdk/tree/.zuul.yaml#n117

Has some more complex config bits in it.

In your case, I believe you want to have parent: devstack-tempest 
instead of parent: devstack-tox-functional
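Pulling those pointers together, a minimal new-style job for this case might look roughly like the sketch below. The job name, the services toggled off, and the way the plugin is wired in are all illustrative assumptions, not taken from the actual patch; check the devstack-tempest job's documented variables before relying on any of them.

```yaml
# .zuul.yaml - hypothetical sketch of a zuul v3 job built on devstack-tempest
- job:
    name: placement-gabbi-tempest
    parent: devstack-tempest
    description: Run gabbi-based integration tests against placement.
    required-projects:
      - openstack/nova
    vars:
      devstack_services:
        # trim services the placement tests don't need
        horizon: false
        swift: false
      devstack_localrc:
        # gabbi-tempest still needs to land in the tempest venv;
        # TEMPEST_PLUGINS is one devstack hook for that
        TEMPEST_PLUGINS: /opt/stack/gabbi-tempest
```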




* This patch, and the one that follows it [1] dynamically install
   stuff from pypi in the post test hooks, simply because that was
   the quick and dirty way to get those libs in the environment.
   What's the clean and proper way? gabbi-tempest itself needs to be
   in the tempest virtualenv.


This I don't have an answer for. I'm guessing this is something one 
could do with a tempest plugin?



* The post.yaml playbook which gathers up logs seems like a common
   thing, so I would hope could be DRYed up a bit. What's the best
   way to that?


Yup. Legacy devstack-gate based jobs are pretty terrible.

You can delete the entire post.yaml if you move to the new devstack base 
job.


The base devstack job has a much better mechanism for gathering logs.


Thanks very much for any input.

[1] perf logging of a loaded placement: 
https://review.openstack.org/#/c/602484/






[openstack-dev] [chef] fog-openstack 0.3

2018-09-19 Thread Samuel Cassiba
Ohai!

fog-openstack 0.3 has been released upstream, but it also seems to be
a breaking release by way of naming convention.

At this time, it is advised to pin your client cookbook at '<0.3.0'.
Changes to compensate for this change are being delivered to git and
Supermarket, but the most immediate workaround is to pin.

Once things are working with fog-openstack 0.3, ChefDK will pick the
new version up in a later release.

Thank you for your attention.

-scas



[openstack-dev] [Neutron] Removing external_bridge_name config option

2018-09-19 Thread Slawomir Kaplonski
Hi,

Some time ago I proposed a patch [1] to remove the config option 
"external_network_bridge".
This option was deprecated for removal in Ocata, so I think it's finally 
time to get rid of it.

There are quite a few projects which still use this option [2]. I will try to 
propose patches to remove it from those projects as well, but if you are the 
maintainer of such a project, it would be great if you could remove it 
yourself. If you do, please use the same Gerrit topic as in [1] - that will 
make it easier for me to track which projects have already removed it.
Thanks a lot in advance for any help :)

[1] https://review.openstack.org/#/c/567369
[2] 
http://codesearch.openstack.org/?q=external_network_bridge&i=nope&files=&repos=

— 
Slawek Kaplonski
Senior software engineer
Red Hat




[openstack-dev] [cyborg] Weekly meeting canceled for this week

2018-09-19 Thread Li Liu
Hi team,

The Cyborg weekly meeting today is canceled as most of the folks in China
are still fighting with jet lag from Denver.

-- 
Thank you

Regards

Li


[openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper"

2018-09-19 Thread Chris Dent


I have a patch in progress to add some simple integration tests to
placement:

https://review.openstack.org/#/c/601614/

They use https://github.com/cdent/gabbi-tempest . The idea is that
the method for adding more tests is to simply add more yaml in
gate/gabbits, without needing to worry about adding to or think
about tempest.
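For readers unfamiliar with gabbi, a file in gate/gabbits is just a YAML sequence of HTTP request/response assertions. A minimal sketch follows; the endpoint is real placement API, but the assertion values are illustrative and not taken from the actual patch:

```yaml
# gate/gabbits/resource-providers.yaml - hypothetical example
tests:
  - name: list resource providers
    GET: /resource_providers
    status: 200
    response_json_paths:
      # on a single-node devstack, expect one compute node provider
      $.resource_providers.`len`: 1
```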

What I have at that patch works; there are two yaml files, one of
which goes through the process of confirming the existence of a
resource provider and inventory, booting a server, seeing a change
in allocations, resizing the server, seeing a change in allocations.

But this is kludgy in a variety of ways and I'm hoping to get some
help or pointers to the right way. I'm posting here instead of
asking in IRC as I assume other people confront these same
confusions. The issues:

* The associated playbooks are cargo-culted from stuff labelled
  "legacy" that I was able to find in nova's jobs. I get the
  impression that these are more verbose and duplicative than they
  need to be and are not aligned with modern zuul v3 coolness.

* It takes an age for the underlying devstack to build, I can
  presumably save some time by installing fewer services, and making
  it obvious how to add more when more are required. What's the
  canonical way to do this? Mess with {enable,disable}_service, cook
  the ENABLED_SERVICES var, do something with required_projects?

* This patch, and the one that follows it [1] dynamically install
  stuff from pypi in the post test hooks, simply because that was
  the quick and dirty way to get those libs in the environment.
  What's the clean and proper way? gabbi-tempest itself needs to be
  in the tempest virtualenv.

* The post.yaml playbook which gathers up logs seems like a common
  thing, so I would hope it could be DRYed up a bit. What's the best
  way to do that?
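
On the zuul v3 side, my understanding (hedged -- I haven't verified this
against the current devstack/tempest job definitions) is that a job
inheriting from the devstack-tempest base job can trim services via the
devstack_services variable and pull the plugin in via required-projects plus
tempest_plugins, instead of legacy playbooks and ad hoc pip installs in post
hooks. A sketch, with illustrative names:

```yaml
# Hypothetical .zuul.yaml entry; job and project names are assumptions.
- job:
    name: placement-gabbi-tempest
    parent: devstack-tempest
    required-projects:
      # makes the plugin source available on the test node
      - name: github.com/cdent/gabbi-tempest
    vars:
      # installed into tempest's virtualenv by the job, avoiding
      # pip installs in post-test hooks
      tempest_plugins:
        - gabbi-tempest
      devstack_services:
        # disable services the placement tests never touch
        horizon: false
        c-api: false
        c-sch: false
        c-vol: false
        c-bak: false
```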

Thanks very much for any input.

[1] perf logging of a loaded placement: https://review.openstack.org/#/c/602484/

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] Forum Topic Submission Period

2018-09-19 Thread Jimmy McArthur



Sylvain Bauza wrote:



On Wed, Sep 19, 2018 at 00:41, Jimmy McArthur wrote:

SNIP



> Same as I do :-) Unrelated point: for the first time in all the
> Summits I know, I wasn't able to find out who the track chairs for a
> specific track were. Ideally, I'd love to reach them in order to know
> what they disliked in my proposal.

They were listed on an Etherpad that was linked under Presentation 
Selection Process in the CFP navigation. That has since been overwritten 
w/ Forum Selection Process, so let me try to dig that up.  We publish 
the Track Chairs every year.



SNIP


> I have another question: do you know why we can't propose a Forum
> session with multiple speakers? Is this a bug or expected behaviour?
> In general, there is only one moderator for a Forum session, but in
> the past, I clearly remember some sessions that had multiple
> moderators (for various reasons).

Correct. Forum sessions aren't meant to have speakers like a normal 
presentation. They are all set up parliamentary style w/ one or more 
moderators. However, the moderator can manage the room any way they'd 
like.  If you want to promote the people that will be in the room, this 
can be added to the abstract.


> -Sylvain


Cheers,
Jimmy






[openstack-dev] [Searchlight] vPTG tomorrow

2018-09-19 Thread Trinh Nguyen
Hi team,

As we agreed on team's channel, tomorrow we will have our vPTG for Stein.
Please see below for details:

   - Time: 12:00-14:00 UTC, 20 September
   - Meeting channel: https://hangouts.google.com/group/7PeiryADgQvyweoF3
   - Etherpad: https://etherpad.openstack.org/p/searchlight-stein-ptg

Bests,

Trinh Nguyen | Founder & Chief Architect
E: dangtrin...@gmail.com | W: www.edlab.xyz


Re: [openstack-dev] [User-committee] [tc] Joint UC/TC Meeting

2018-09-19 Thread Chris Dent

On Tue, 18 Sep 2018, Doug Hellmann wrote:


[Redirecting this from the openstack-tc list to the -dev list.]
Excerpts from Melvin Hillsman's message of 2018-09-18 17:43:57 -0500:

UC is proposing a joint UC/TC meeting at the end of each month, say starting
after Berlin, to work more closely together. The last Monday of the month at
1pm US Central time is the current proposal; throwing it out here now for
feedback/discussion. That would make the first one Monday, November
26th, 2018.


I agree that the UC and TC should work more closely together. If the
best way to do that is to have a meeting then great, let's do it.
Were you thinking IRC or something else?

But we probably need to resolve our ambivalence towards meetings. On
Sunday at the PTG we discussed maybe going back to having a TC
meeting but didn't really decide (at least as far as I recall) and
didn't discuss in too much depth the reasons why we killed meetings
in the first place. How would this meeting be different?

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [Openstack-sigs] Are we ready to put stable/ocata into extended maintenance mode?

2018-09-19 Thread Dmitry Tantsur

On 9/18/18 9:27 PM, Matt Riedemann wrote:
The release page says Ocata is planned to go into extended maintenance mode on 
Aug 27 [1]. There really isn't much to this except it means we don't do releases 
for Ocata anymore [2]. There is a caveat that project teams that do not wish to 
maintain stable/ocata after this point can immediately end of life the branch 
for their project [3]. We can still run CI using tags, e.g. if keystone goes 
ocata-eol, devstack on stable/ocata can still continue to install from 
stable/ocata for nova and the ocata-eol tag for keystone. Having said that, if 
there is no undue burden on the project team keeping the lights on for 
stable/ocata, I would recommend not tagging the stable/ocata branch end of life 
at this point.


So, questions that need answering are:

1. Should we cut a final release for projects with stable/ocata branches before 
going into extended maintenance mode? I tend to think "yes" to flush the queue 
of backports. In fact, [3] doesn't mention it, but the resolution said we'd tag 
the branch [4] to indicate it has entered the EM phase.


Some ironic projects have outstanding changes; I guess we should release them.



2. Are there any projects that would want to skip EM and go directly to EOL (yes 
this feels like a Monopoly question)?


[1] https://releases.openstack.org/
[2] 
https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases 

[3] 
https://docs.openstack.org/project-team-guide/stable-branches.html#extended-maintenance 

[4] 
https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html#end-of-life 








Re: [openstack-dev] GUI for Swift object storage

2018-09-19 Thread M Ranga Swami Reddy
Hi Tim - Thanks for sharing the details.

Thanks
Swami
On Tue, Sep 18, 2018 at 10:05 PM Tim Burke  wrote:
>
> Hate to nitpick, but Cyberduck is licensed GPLv3 -- you can browse the source 
> (and confirm the license) at https://trac.cyberduck.io/browser/trunk and 
> https://trac.cyberduck.io/ indicates the source is available via git or svn. 
> They do nag you to donate, though.
>
> Swift explorer is Apache 2, available at 
> https://github.com/roikku/swift-explorer. I don't know anything about 
> Gladinet.
>
> Tim
>
> On Sep 17, 2018, at 7:31 PM, M Ranga Swami Reddy  wrote:
>
> All the GUI tools seem to be non-open-source... you need to pay for ones like Cyberduck etc.
> Looking for an open source GUI for Swift API access.
>
> On Tue, 18 Sep 2018, 06:41 John Dickinson,  wrote:
>>
>> That's a great question.
>>
>> A quick google search shows a few like Swift Explorer, Cyberduck, and 
>> Gladinet. But since Swift supports the S3 API (check with your cluster 
>> operator to see if this is enabled, or examine the results of a GET /info 
>> request), you can use any available S3 GUI client as well (as long as you 
>> can configure the endpoints you connect to).
>>
>> --John
>>
>> On 17 Sep 2018, at 16:48, M Ranga Swami Reddy wrote:
>>
>> Hi - is there any GUI (open source) available for Swift objects storage?
>>
>> Thanks
>> Swa
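
John's GET /info tip can be turned into a quick check. The sketch below
assumes a hypothetical endpoint URL, and the detection logic -- looking for
an "s3api" key (the in-tree middleware) or the older "swift3" name -- is an
assumption about how the middleware advertises itself in the /info response:

```python
import json
from urllib.request import urlopen

def s3_enabled(info):
    """Given the parsed JSON body of a Swift GET /info response,
    report whether an S3 compatibility middleware is advertised."""
    # "s3api" is the in-tree middleware name; "swift3" was the older
    # out-of-tree name (assumption: either key may appear)
    return "s3api" in info or "swift3" in info

def check_cluster(endpoint):
    # endpoint is hypothetical, e.g. "https://swift.example.com"
    with urlopen(endpoint + "/info") as resp:
        return s3_enabled(json.load(resp))
```

If the check comes back false, ask the cluster operator whether the
middleware is simply disabled rather than absent.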



Re: [openstack-dev] GUI for Swift object storage

2018-09-19 Thread M Ranga Swami Reddy
Hi Clay - Thanks for sharing the details.
On Tue, Sep 18, 2018 at 7:09 PM Clay Gerrard  wrote:
>
> I don't know about a good open source cross-platform GUI client, but the 
> SwiftStack one is slick and doesn't seem to be behind a paywall (yet?)
>
> https://www.swiftstack.com/downloads
>
> There's probably some proprietary integration that won't make sense - but it 
> should work with any Swift end-point.  Let me know how it goes!
>
> -Clay
>
> N.B. IANAL, so you should probably double check the license/terms if you're 
> planning on doing anything more sophisticated than personal use.
>
> On Mon, Sep 17, 2018 at 9:31 PM M Ranga Swami Reddy  
> wrote:
>>
>> All the GUI tools seem to be non-open-source... you need to pay for ones like Cyberduck etc.
>> Looking for an open source GUI for Swift API access.
>>
>> On Tue, 18 Sep 2018, 06:41 John Dickinson,  wrote:
>>>
>>> That's a great question.
>>>
>>> A quick google search shows a few like Swift Explorer, Cyberduck, and 
>>> Gladinet. But since Swift supports the S3 API (check with your cluster 
>>> operator to see if this is enabled, or examine the results of a GET /info 
>>> request), you can use any available S3 GUI client as well (as long as you 
>>> can configure the endpoints you connect to).
>>>
>>> --John
>>>
>>> On 17 Sep 2018, at 16:48, M Ranga Swami Reddy wrote:
>>>
>>> Hi - is there any GUI (open source) available for Swift objects storage?
>>>
>>> Thanks
>>> Swa



Re: [openstack-dev] [ptg][cinder] Stein PTG Summary Page Ready ...

2018-09-19 Thread Gorka Eguileor
On 18/09, Jay S Bryant wrote:
> Team,
>
> I have put together the following page with a summary of all our discussions
> at the PTG: https://wiki.openstack.org/wiki/CinderSteinPTGSummary
>
> Please review the contents and let me know if anything needs to be changed.
>
> Jay
>
>

Hi Jay,

Thank you for the great summary, it looks great.

After reading it, I can't think of anything that's missing.

Cheers,
Gorka.



Re: [openstack-dev] [cinder] Berlin Forum Proposals

2018-09-19 Thread Gorka Eguileor
On 18/09, Jay S Bryant wrote:
> Team,
>
> I have created an etherpad for our Forum Topic Planning:
> https://etherpad.openstack.org/p/cinder-berlin-forum-proposals
>
> Please add your ideas to the etherpad.  Thank you!
>
> Jay
>

Hi Jay,

After our last IRC meeting, a couple of weeks ago, I created an etherpad
[1] and added it to the Forum wiki [2] (though I failed to mention it).

I had added a possible topic to this etherpad [1], but I can move it to
yours and update the wiki if you like.

Cheers,
Gorka.


[1]: https://etherpad.openstack.org/p/cinder-forum-stein
[2]: https://wiki.openstack.org/wiki/Forum/Berlin2018



Re: [openstack-dev] Forum Topic Submission Period

2018-09-19 Thread Sylvain Bauza
On Wed, Sep 19, 2018 at 00:41, Jimmy McArthur wrote:

> Hey Matt,
>
>
> Matt Riedemann wrote:
> >
> > Just a process question.
>
> Good question.
> > I submitted a presentation for the normal marketing blitz part of the
> > summit which wasn't accepted (I'm still dealing with this emotionally,
> > btw...)
>


Same as I do :-) Unrelated point: for the first time in all the Summits I
know, I wasn't able to find out who the track chairs for a specific track
were. Ideally, I'd love to reach them in order to know what they disliked in
my proposal.



> If there's anything I can do...
> > but when I look at the CFP link for Forum topics, my thing shows up
> > there as "Received" so does that mean my non-Forum-at-all submission
> > is now automatically a candidate for the Forum because that would not
> > be my intended audience (only suits and big wigs please).
> Forum Submissions would be considered separate and non-Forum submissions
> will not be considered for the Forum. The submission process is based on
> the track you submit to and, in the case of the Forum, we separate this
> track out from the rest of the submission process.
>
> If you think there is still something funky, send me a note via
> speakersupp...@openstack.org or ji...@openstack.org and I'll work
> through it with you.
>
>
I have another question: do you know why we can't propose a Forum session
with multiple speakers? Is this a bug or expected behaviour? In general,
there is only one moderator for a Forum session, but in the past, I clearly
remember some sessions that had multiple moderators (for various reasons).

-Sylvain


> Cheers,
> Jimmy
>
>
>
>