Re: [openstack-dev] [openstack-operators] [qa] [berlin] QA Team & related sessions at Berlin summit

2018-11-10 Thread Ghanshyam Mann
Hello Everyone,

I have created the below etherpads to use during QA Forum sessions:

- Users / Operators adoption of QA tools:  
https://etherpad.openstack.org/p/BER-qa-ops-user-feedback 
- QA Onboarding: https://etherpad.openstack.org/p/BER-qa-onboarding-vancouver

-gmann

  On Fri, 09 Nov 2018 11:02:54 +0900 Ghanshyam Mann wrote:
 > Hello everyone, 
 >  
 > Along with the project update & onboarding sessions, the QA team will host QA 
 > feedback sessions at the Berlin summit. Feel free to catch us next week for any 
 > QA-related questions or if you need help contributing to QA (we are really 
 > looking forward to onboarding new contributors in QA).  
 >  
 > Below are the QA-related sessions; feel free to append to the list if I missed 
 > anything. I am working on the onboarding/forum session etherpads and will send 
 > the link tomorrow.  
 >  
 > Tuesday: 
 >   1. OpenStack QA - Project Update.   [1] 
 >   2. OpenStack QA - Project Onboarding.   [2] 
 >   3. OpenStack Patrole – Foolproofing your OpenStack Deployment [3] 
 >  
 > Wednesday: 
 >   4. Forum: Users / Operators adoption of QA tools / plugins.  [4] 
 >  
 > Thursday: 
 >   5. Using Rally/Tempest for change validation (OPS session) [5] 
 >  
 > [1] 
 > https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22763/openstack-qa-project-update
 >   
 > [2] 
 > https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22762/openstack-qa-project-onboarding
 >  
 > [3] 
 > https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22148/openstack-patrole-foolproofing-your-openstack-deployment
 >  
 > [4] 
 > https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22788/users-operators-adoption-of-qa-tools-plugins
 >   
 > [5] 
 > https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22837/using-rallytempest-for-change-validation-ops-session
 >   
 >  
 > -gmann 
 > 





[openstack-dev] [openstack-operators] [qa] [berlin] QA Team & related sessions at Berlin summit

2018-11-08 Thread Ghanshyam Mann
Hello everyone,

Along with the project update & onboarding sessions, the QA team will host QA feedback 
sessions at the Berlin summit. Feel free to catch us next week for any QA-related 
questions or if you need help contributing to QA (we are really looking 
forward to onboarding new contributors in QA). 

Below are the QA-related sessions; feel free to append to the list if I missed 
anything. I am working on the onboarding/forum session etherpads and will send the 
link tomorrow. 

Tuesday:
  1. OpenStack QA - Project Update.   [1]
  2. OpenStack QA - Project Onboarding.   [2]
  3. OpenStack Patrole – Foolproofing your OpenStack Deployment [3]

Wednesday:
  4. Forum: Users / Operators adoption of QA tools / plugins.  [4]

Thursday:
  5. Using Rally/Tempest for change validation (OPS session) [5]

[1] 
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22763/openstack-qa-project-update
 
[2] 
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22762/openstack-qa-project-onboarding
[3] 
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22148/openstack-patrole-foolproofing-your-openstack-deployment
[4] 
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22788/users-operators-adoption-of-qa-tools-plugins
 
[5] 
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22837/using-rallytempest-for-change-validation-ops-session
 

-gmann




[openstack-dev] [tc][all] TC office hour starting now on #openstack-tc

2018-11-08 Thread Ghanshyam Mann
Hi All, 

The TC office hour has started on the #openstack-tc channel. Feel free to reach out 
to us with anything you want to discuss or for any input/feedback/help from the TC. 

- TC 







Re: [openstack-dev] [all][qa] Migrating devstack jobs to Bionic (Ubuntu LTS 18.04)

2018-11-06 Thread Ghanshyam Mann
  On Wed, 07 Nov 2018 07:07:33 +0900 Clark Boylan wrote:
 > On Tue, Nov 6, 2018, at 2:02 PM, Ghanshyam Mann wrote:
 > > Thanks Jens.
 > > 
 > > As most of the base jobs are in the QA repo, the QA team will coordinate this 
 > > migration based on either of the approaches mentioned below. 
 > > 
 > > Another point to note - this migration will only target the Zuul v3 jobs, 
 > > not the legacy jobs. Legacy job owners should migrate them to Bionic when 
 > > they are moved to Zuul v3 native. Any risk in keeping the legacy jobs on 
 > > Xenial until then?
 > > 
 > > A Tempest testing patch found that the stable queens/pike jobs fail on 
 > > Bionic due to the distro not being supported in devstack [1]. Fixing in 
 > > https://review.openstack.org/#/c/616017/ and will backport to pike too.
 > 
 > The existing stable branches should continue to test on xenial as that is 
 > what they were built on. We aren't asking that everything be ported forward 
 > to bionic. Instead the idea is that current development (aka master) switch 
 > to bionic and roll forward from that point.

Makes sense. Thanks. We can keep the stable branch jobs on Tempest master running on 
Xenial only. 

-gmann

 > 
 > This applies to tempest jobs, functional jobs, and unittests, etc. Xenial 
 > isn't going away. It is there for the stable branches.
 > 
 > > 
 > > [1]  https://review.openstack.org/#/c/611572/
 > > 
 > > http://logs.openstack.org/72/611572/1/check/tempest-full-queens/7cd3f21/job-output.txt.gz#_2018-11-01_09_57_07_551538
 > >  
 > > 
 > > 
 > > -gmann
 > 





Re: [openstack-dev] [all][qa] Migrating devstack jobs to Bionic (Ubuntu LTS 18.04)

2018-11-06 Thread Ghanshyam Mann



  On Wed, 07 Nov 2018 06:51:32 +0900 Slawomir Kaplonski wrote:
 > Hi,
 > 
 > > On 06.11.2018, at 22:25, Jeremy Stanley wrote:
 > > 
 > > On 2018-11-06 22:05:49 +0100 (+0100), Slawek Kaplonski wrote:
 > > [...]
 > >> also add jobs like "devstack-xenial" and "tempest-full-xenial"
 > >> which projects can use still for some time if their job on Bionic
 > >> would be broken now?
 > > [...]
 > > 
 > > That opens the door to piecemeal migration, which (as we similarly
 > > saw during the Trusty to Xenial switch) will inevitably lead to
 > > projects who no longer gate on Xenial being unable to integration
 > > test against projects who don't yet support Bionic. At the same
 > > time, projects which have switched to Bionic will start merging
 > > changes which only work on Bionic without realizing it, so that
 > > projects which test on Xenial can't use them. In short, you'll be
 > > broken either way. On top of that, you can end up with projects that
 > > don't get around to switching completely before release comes, and
 > > then they're stuck having to manage a test platform transition on a
 > > stable branch.
 > 
 > I understand your point here, but will option 2) from the first email lead to the 
 > same issues then?

Seems so. Approach 1 is less risky for such integrated-testing issues and 
requires less work. With approach 1, we can coordinate the base job migration 
with project-side testing on Bionic.

-gmann

 > 
 > > -- 
 > > Jeremy Stanley
 > 
 > — 
 > Slawek Kaplonski
 > Senior software engineer
 > Red Hat
 > 
 > 
 > 





Re: [openstack-dev] [all][qa] Migrating devstack jobs to Bionic (Ubuntu LTS 18.04)

2018-11-06 Thread Ghanshyam Mann
Thanks Jens.

As most of the base jobs are in QA repo, QA team will coordinate this migration 
based on either of the approach mentioned below. 

Another point to note - This migration will only target the zuulv3 jobs not the 
legacy jobs. legacy jobs owner should migrate them to bionic when they will be 
moved to zuulv3 native. Any risk of keeping the legacy on xenial till zullv3 ?

Tempest testing patch found that stable queens/pike jobs failing for bionic due 
to not supported distro in devstack[1]. Fixing in  
https://review.openstack.org/#/c/616017/ and will backport to pike too.

[1]  https://review.openstack.org/#/c/611572/

http://logs.openstack.org/72/611572/1/check/tempest-full-queens/7cd3f21/job-output.txt.gz#_2018-11-01_09_57_07_551538
 


-gmann
  On Tue, 06 Nov 2018 21:12:55 +0900 Dr. Jens Harbott (frickler) wrote:
 > Dear OpenStackers,
 > 
 > earlier this year Ubuntu released their current LTS version 18.04
 > codenamed "Bionic Beaver" and we are now facing the task to migrate
 > our devstack-based jobs to run on Bionic instead of the previous LTS
 > version 16.04 "Xenial Xerus".
 > 
 > The last time this happened was two years ago (migration from 14.04 to
 > 16.04), and at that time it seems the migration was mostly driven by
 > the Infra team (see [1]), mostly because all of the job configuration
 > was still centrally hosted in a single repository
 > (openstack-infra/project-config). In the meantime, however, our CI
 > setup has been updated to use Zuul v3 and one of the new features that
 > come with this development is the introduction of per-project job
 > definitions.
 > 
 > So this new flexibility requires us to make a choice between the two
 > possible options we have for migrating jobs now:
 > 
 > 1) Change the "devstack" base job to run on Bionic instances
 > instead of Xenial instances
 > 2) Create new "devstack-bionic" and "tempest-full-bionic" base
 > jobs and migrate projects piecewise
 > 
 > Choosing option 1) would cause all projects that base their own jobs
 > on this job (possibly indirectly by e.g. being based on the
 > "tempest-full" job) switch automatically. So there would be the
 > possibility that some jobs would break and require to be fixed before
 > patches could be merged again in the affected project(s). To
 > accommodate those risks, the QA team can give projects some time to test
 > their jobs on Bionic with WIP patches (QA can provide Bionic base job
 > as WIP patch). This option does not require any pre/post migration
 > changes on project's jobs.
 > 
 > Choosing option 2) would avoid this by letting projects switch at
 > their own pace, but create the risk that some projects would never
 > migrate. It would also make further migrations, like the one expected
 > to happen when 20.04 is released, either having to follow the same
 > scheme or re-introduce the unversioned base job. Other point to note
 > down with this option is,
 >- project job definitions need to change their parent job from
 > "devstack" -> "devstack-bionic" or "tempest-full" ->
 > "tempest-full-bionic"
 >  - QA needs to maintain existing jobs ("devstack", "tempest-full") and
 > bionic version jobs ("devstack-bionic", "tempest-full-bionic")
 > 
 > In order to prepare the decision, we have created a set of patches
 > that test the Bionic
 > jobs, you can find them under the common topic "devstack-bionic" [2].
 > There is also an
 > etherpad to give a summarized view of the results of these tests [3].
 > 
 > Please respond to this mail if you want to promote either of the above
 > options or
 > maybe want to propose an even better solution. You can also find us
 > for discussion
 > in the #openstack-qa IRC channel on freenode.
 > 
 > The Infra team has tried both approaches, during the precise->trusty &
 > trusty->xenial migrations [4].
 > 
 > Note that this mailing-list itself will soon be migrated, too, so if
 > you haven't subscribed
 > to the new list yet, this is a good time to act and avoid missing the
 > best parts [5].
 > 
 > Yours,
 > Jens (frickler@IRC)
 > 
 > 
 > [1] 
 > http://lists.openstack.org/pipermail/openstack-dev/2016-November/106906.html
 > [2] https://review.openstack.org/#/q/topic:devstack-bionic
 > [3] https://etherpad.openstack.org/p/devstack-bionic
 > [4] 
 > http://eavesdrop.openstack.org/irclogs/%23openstack-qa/%23openstack-qa.2018-11-01.log.html#t2018-11-01T12:40:22
 > [5] 
 > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134911.html
 > 




[openstack-dev] [tc][all] TC office hour starting now on #openstack-tc

2018-11-06 Thread Ghanshyam Mann
Hi All, 

The TC office hour has started on the #openstack-tc channel. Feel free to reach out 
to us with anything you want to discuss or for any input/feedback/help from the TC. 

-gmann







Re: [openstack-dev] [all] 2019 summit during May holidays?

2018-11-05 Thread Ghanshyam Mann
  On Tue, 06 Nov 2018 05:50:03 +0900 Dmitry Tantsur wrote:
 > 
 > 
 > On Mon, Nov 5, 2018, 20:07 Julia Kreger wrote:
 > *removes all of the hats*
 > *removes years of dust from unrelated event planning hat, and puts it on for 
 > a moment*
 > 
 > In my experience, events of any nature where convention venue space is 
 > involved, are essentially set in stone before being publicly advertised as 
 > contracts are put in place for hotel room booking blocks as well as the 
 > convention venue space. These spaces are also typically in a relatively high 
 > demand limiting the access and available times to schedule. Often venues 
 > also give preference (and sometimes even better group discounts) to repeat 
 > events as they are typically a known entity and will have somewhat known 
 > needs so the venue and hotel(s) can staff appropriately. 
 > 
 > tl;dr, I personally wouldn't expect any changes to be possible at this point.
 > 
 > *removes event planning hat of past life, puts personal scheduling hat on*
 > I imagine that as a community, it is near impossible to schedule something 
 > avoiding holidays for everyone in the community.
 > 
 > I'm not talking about everyone. And I'm mostly fine with my holiday, but the 
 > conflicts with Russia and Japan seem huge. This certainly does not help our 
 > effort to engage people outside of NA/EU.
 > Quick googling suggests that the week of May 13th would have much fewer 
 > conflicts.
 > 
 > I personally have lost count of the number of holidays and special days that 
 > I've spent on business trips over the past four years. While I may be an 
 > out-lier in my feelings on this subject, I'm not upset, annoyed, or even 
 > bitter about lost times. This community is part of my family.
 > 
 > Sure :)
 > But outside of our small nice circle there is a huge world of people who may 
 > not share our feeling and the level of commitment to openstack. These 
 > occasional contributors we talked about when discussing the cycle length. I 
 > don't think asking them to abandon 3-5 days of holidays is a productive way 
 > to engage them.
 > And again, as much as I love meeting you all, I think we're outgrowing the 
 > format of these meetings..
 > Dmitry

Yeah, in the case of Japan it is a full-week holiday starting from April 29th. I 
remember most of the May summits did not conflict with Golden Week, but this one 
does. I am not sure if there is any solution to this now, but we should consider 
such things in the future. 

-gmann

 > 
 > -Julia
 > 
 > On Mon, Nov 5, 2018 at 8:19 AM Dmitry Tantsur  wrote:
 > Hi all,
 >  
 >  Not sure how official the information about the next summit is, but it's on 
 > the 
 >  web site [1], so I guess worth asking..
 >  
 >  Are we planning for the summit to overlap with the May holidays? The 1st of 
 > May 
 >  is a holiday in big part of the world. We ask people to skip it in addition 
 > to 
 >  3+ weekend days they'll have to spend working and traveling.
 >  
 >  To make it worse, 1-3 May are holidays in Russia this time. To make it even 
 >  worse than worse, the week of 29th is the Golden Week in Japan [2]. Was it 
 >  considered? Is it possible to move the days to less conflicting time 
 > (mid-May 
 >  maybe)?
 >  
 >  Dmitry
 >  
 >  [1] https://www.openstack.org/summit/denver-2019/
 >  [2] https://en.wikipedia.org/wiki/Golden_Week_(Japan)
 >  





Re: [openstack-dev] Sharing upstream contribution mentoring result with Korea user group

2018-10-31 Thread Ghanshyam Mann
That's a great job, Ian and team. It is really great to see local user groups putting 
so much effort into upstream contribution mentoring. 

From the FirstContact SIG point of view, feel free to let us know about any help you 
need in terms of engaging new contributors with the project teams and work items 
they are interested in.

-gmann
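
As a side note, the per-user word counts quoted below come from the Zanata 
statistics REST API (the URL pattern is shown in the message). Here is a minimal 
sketch of querying it; the JSON response schema is an assumption here, so the raw 
payload is printed rather than parsed:

# Sketch: fetch per-user translation stats from the Zanata REST API.
# The response schema is not shown in this thread, so we just print
# the raw JSON for each Zanata ID listed in the table below.
import requests

URL = "https://translate.openstack.org/rest/stats/user/{user}/{start}..{end}"

def user_stats(user, start="2018-08-16", end="2018-10-25"):
    resp = requests.get(URL.format(user=user, start=start, end=end),
                        headers={"Accept": "application/json"}, timeout=30)
    resp.raise_for_status()  # fail loudly on HTTP errors
    return resp.json()

for zanata_id in ("ardentpark", "bnitech", "csucom", "jaeho93"):
    print(zanata_id, user_stats(zanata_id))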


  On Tue, 30 Oct 2018 23:10:42 +0900 Ian Y. Choi wrote:
 > Hello,
 > 
 > I got involved organizing & mentoring Korean people for OpenStack 
 > upstream contribution for about the last two months,
 > and would like to share with community members.
 > 
 > In total, nine mentees started to learn OpenStack, contributed, and 
 > finally survived as volunteers for
 >   1) developing OpenStack mobile app for better mobile user interfaces 
 > and experiences
 >  (inspired from https://github.com/stackerz/app which worked on Juno 
 > release), and
 >   2) translating OpenStack official project artifacts including documents,
 >   and Container Whitepaper ( 
 > https://www.openstack.org/containers/leveraging-containers-and-openstack/ ).
 > 
 > Korea user group organizers (Seongsoo Cho, Taehee Jang, Hocheol Shin, 
 > Sungjin Kang, and Andrew Yongjoon Kong)
 > all helped to organize total 8 offline meetups + one mini-hackathon and 
 > mentored to attendees.
 > 
 > The followings are brief summary:
 >   - "OpenStack Controller" Android app is available on Play Store
 >: 
 > https://play.google.com/store/apps/details?id=openstack.contributhon.com.openstackcontroller
 > (GitHub: https://github.com/kosslab-kr/openstack-controller )
 > 
 >   - Most high-priority projects (although it is not during string freeze 
 > period) and documents are
 > 100% translated into Korean: Horizon, OpenStack-Helm, I18n Guide, 
 > and Container Whitepaper.
 > 
 >   - Total 18,695 words were translated into Korean by four contributors
 >(confirmed through Zanata API: 
 > https://translate.openstack.org/rest/stats/user/[Zanata 
 > ID]/2018-08-16..2018-10-25 ):
 > 
 > ++---+-+
 > | Zanata ID  | Name  | Number of words |
 > ++---+-+
 > | ardentpark | Soonyeul Park | 12517   |
 > ++---+-+
 > | bnitech| Dongbim Im| 693 |
 > ++---+-+
 > | csucom | Sungwook Choi | 4397|
 > ++---+-+
 > | jaeho93| Jaeho Cho | 1088|
 > ++---+-+
 > 
 >   - The list of projects translated into Korean are described as:
 > 
 > +-+-+
 > | Project | Number of words |
 > +-+-+
 > | api-site| 20  |
 > +-+-+
 > | cinder  | 405 |
 > +-+-+
 > | designate-dashboard | 4   |
 > +-+-+
 > | horizon | 3226|
 > +-+-+
 > | i18n| 434 |
 > +-+-+
 > | ironic  | 4   |
 > +-+-+
 > | Leveraging Containers and OpenStack | 5480|
 > +-+-+
 > | neutron-lbaas-dashboard | 5   |
 > +-+-+
 > | openstack-helm  | 8835|
 > +-+-+
 > | trove-dashboard | 89  |
 > +-+-+
 > | zun-ui  | 193 |
 > +-+-+
 > 
 > I would like to really appreciate all co-mentors and participants on 
 > such a big event for promoting OpenStack contribution.
 > The venue and food were supported by Korea Open Source Software 
 > Development Center ( https://kosslab.kr/ ).
 > 
 > 
 > With many thanks,
 > 
 > /Ian
 > 




Re: [openstack-dev] Neutron stadium project Tempest plugins

2018-10-31 Thread Ghanshyam Mann
  On Wed, 24 Oct 2018 05:08:11 +0900 Slawomir Kaplonski wrote:
 > Hi,
 > 
 > Thx Miguel for raising this.
 > The list of tempest plugins is at 
 > https://docs.openstack.org/tempest/latest/plugin-registry.html - if the URL for 
 > your plugin is the same as your main repo, you should move your tempest 
 > plugin code.

Thanks mlavalle and slaweq for bringing up this discussion. 

Separating the Tempest plugin from the service repo was a Queens goal, and that goal 
clearly states the benefits of having a separate plugin repo [1]. For Neutron, 
that goal was marked as complete after creating the neutron-tempest-plugin [2], 
and the work to separate the Neutron stadium projects' Tempest plugins was left 
out. I think many of the projects have not done it at all. 

This came up while discussing the Tempest plugins CI setup [3]. If you need any 
help from the QA team, feel free to ping us on the #openstack-qa channel. 


[1] https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html
[2] https://review.openstack.org/#/c/524605/
[3] 
https://etherpad.openstack.org/p/tempest-plugins-ci-release-tagging-clarification


-gmann

 > 
 > 
 > > On 23.10.2018, at 16:59, Miguel Lavalle wrote:
 > > 
 > > Dear Neutron Stadium projects,
 > > 
 > > In a QA session during the recent PTG in Denver, it was suggested that the 
 > > Stadium projects should move their Tempest plugins to a repository of 
 > > their own or added to the Neutron Tempest plugin repository 
 > > (https://github.com/openstack/neutron-tempest-plugin). The purpose of this 
 > > message is to start a conversation for the Stadium projects to indicate 
 > > what is their preference. Please respond to this thread indicating how do 
 > > you want to move forward.
 > > 
 > > Best regards
 > > 
 > > Miguel
 > 
 > — 
 > Slawek Kaplonski
 > Senior software engineer
 > Red Hat
 > 
 > 





[openstack-dev] [tc][all] TC office hour starting now on #openstack-tc

2018-10-30 Thread Ghanshyam Mann
Hi All, 

The TC office hour has started on the #openstack-tc channel. Feel free to reach out 
to us with anything you want to discuss or for any input/feedback/help from the TC. 

-gmann 








[openstack-dev] [tc][all] TC office hour starting now on #openstack-tc

2018-10-23 Thread Ghanshyam Mann
Hi All, 

The TC office hour has started on the #openstack-tc channel. Feel free to reach out 
to us with anything you want to discuss or for any input/feedback/help from the TC. 

-gmann 








Re: [openstack-dev] [qa] patrole] Nominating Sergey Vilgelm and Mykola Yakovliev for Patrole core

2018-10-22 Thread Ghanshyam Mann
+1 for both of them. They have been doing great work in Patrole and will be a 
good addition to the team. 

-gmann


  On Tue, 23 Oct 2018 03:34:51 +0900 MONTEIRO, FELIPE C wrote:
 >   
 > Hi,
 >   
 >  I would like to nominate Sergey Vilgelm and Mykola Yakovliev for Patrole 
 > core, as they have both done excellent work the past cycle in improving the 
 > Patrole framework as well as increasing Neutron Patrole test coverage, which 
 > includes various Neutron plugins/extensions as well, like fwaas. I believe 
 > they will both make an excellent addition to the Patrole core team.
 >   
 >  Please vote with a +1/-1 for the nomination, which will stay open for one 
 > week.
 >   
 >  Felipe
 > 





[openstack-dev] [tc][all] TC office hour starting now on #openstack-tc

2018-10-18 Thread Ghanshyam Mann
Hi All, 

The TC office hour has started on the #openstack-tc channel. Feel free to reach out 
to us with anything you want to discuss or for any input/feedback/help from the TC.  

-gmann 








[openstack-dev] [nova] API updates week 18-42

2018-10-18 Thread Ghanshyam Mann
Hi All, 

Please find the Nova API highlights of this week. 

Weekly Office Hour: 
=== 

What we discussed this week: 
- Discussed the API extensions work. 
- Discussed 2 new bugs which need more logs for further debugging; added bug 
comments. 

Planned Features : 
== 
Below are the API related features for Stein. Ref - 
https://etherpad.openstack.org/p/stein-nova-subteam-tracking (feel free to add 
API items there if you are working on or have found any). NOTE: the sequence order is 
not the priority; they are listed as per their start date. 

1. API Extensions merge work 
- https://blueprints.launchpad.net/nova/+spec/api-extensions-merge-stein 
- 
https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/api-extensions-merge-stein+status:open
 
- Weekly Progress: the last patch has +2 and the others have +A and are in the gate. 

2. Handling a down cell 
- https://blueprints.launchpad.net/nova/+spec/handling-down-cell 
- 
https://review.openstack.org/#/q/topic:bp/handling-down-cell+(status:open+OR+status:merged)
 
- Weekly Progress: tssurya has updated the patches on this. Can we get this into a 
runway?

3. Servers Ips non-unique network names : 
- 
https://blueprints.launchpad.net/nova/+spec/servers-ips-non-unique-network-names
 
- Spec Merged 
- 
https://review.openstack.org/#/q/topic:bp/servers-ips-non-unique-network-names+(status:open+OR+status:merged)
 
- Weekly Progress: No progress. 

4. Volume multiattach enhancements: 
- https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements 
- 
https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged)
 
- Weekly Progress: No progress. 

5. Boot instance specific storage backend 
- 
https://blueprints.launchpad.net/nova/+spec/boot-instance-specific-storage-backend
 
- 
https://review.openstack.org/#/q/topic:bp/boot-instance-specific-storage-backend+(status:open+OR+status:merged)
 
- Weekly Progress: COMPLETED

6. Add API ref guideline for body text (takashin) 
- https://review.openstack.org/#/c/605628/ 
- Weekly Progress: Reviewed most of the patches. 

Specs: 
7. Detach and attach boot volumes 
- 
https://review.openstack.org/#/q/topic:bp/detach-boot-volume+(status:open+OR+status:merged)
 
- Weekly Progress: under review. Kevin has updated the spec with review comment 
fixes. 

8. Nova API policy updates 
https://blueprints.launchpad.net/nova/+spec/granular-api-policy 
Spec: https://review.openstack.org/#/c/547850/ 
- Weekly Progress: no progress on this; first concentrating on its dependency 
on 'consistent policy names' - https://review.openstack.org/#/c/606214/ 

9. Nova API cleanup 
https://blueprints.launchpad.net/nova/+spec/api-consistency-cleanup 
Spec: https://review.openstack.org/#/c/603969/ 
- Weekly Progress: no progress on this. I will update it with all the cleanups 
next week. 

10. Support deleting data volume when destroy instance(Brin Zhang) 
- https://review.openstack.org/#/c/580336/ 
- Weekly Progress: No Progress. 

Bugs: 
 
This week Bug Progress: 
https://etherpad.openstack.org/p/nova-api-weekly-bug-report 

Critical: 0->0 
High importance: 2->1 
By Status: 
New: 4->2
Confirmed/Triage: 32-> 32 
In-progress: 35->36 
Incomplete: 3->5 
= 
Total: 74->75 

NOTE: there might be some bugs which are not tagged as 'api' or 'api-ref'; those 
are not in the above list. Tag such bugs so that we can keep an eye on them. 

-gmann 









[openstack-dev] [tc][all] TC Office hour time

2018-10-16 Thread Ghanshyam Mann
Hi All,

The TC office hour has started on the #openstack-tc channel, and many of the TC 
members (maybe not all, due to TZ) will gather for the next 1 hour to discuss any 
topic from the community. Feel free to reach out to us with anything you want to 
discuss or for any input/feedback/help from the TC.

-gmann 







[openstack-dev] [goals][upgrade-checkers] Call for Volunteers to work on upgrade-checkers stein goal

2018-10-16 Thread Ghanshyam Mann
Hi All,

I was discussing with mriedem [1] the idea of building a volunteer team which can 
work with him on the upgrade-checkers goal [2]. There is a lot of work needed for 
this goal [3]: the few projects which do not have an upgrade impact yet need the 
CLI framework with a placeholder check only, and the other projects with an upgrade 
impact need an actual upgrade checks implementation.

The idea is to build a volunteer team who can work with the goal champion to finish 
the work early. This will help to share some of the work from the goal champion as 
well as from the project side.

 - This email is a call for volunteers (upstream developers from any 
project) who can work closely with mriedem on the upgrade-checkers goal.
 - Currently two developers have volunteered:  
1. Akhil Jain (IRC: akhil_jain, email: akhil.j...@india.nec.com) 
2. Rajat Dhasmana (IRC: whoami-rajat, email: rajatdhasm...@gmail.com)
 - Anyone who would like to help with this work, feel free to reply to this email or 
ping mriedem on IRC. 
 - As a next step, mriedem will plan the work distribution among the volunteers; a 
minimal sketch of the placeholder CLI framework is shown after the references below. 

[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-10-16.log.html#t2018-10-16T13:37:59
 
[2] https://governance.openstack.org/tc/goals/stein/upgrade-checkers.html
[3] https://storyboard.openstack.org/#!/story/2003657
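
For the placeholder-only case, here is a minimal sketch based on the 
oslo.upgradecheck usage docs; 'myproject' and the check name are stand-ins, and 
real checks would replace the placeholder as upgrade impacts appear:

# Sketch of a "<project>-status upgrade check" command built on
# oslo.upgradecheck; 'myproject' is a stand-in project name.
import sys

from oslo_config import cfg
from oslo_upgradecheck import upgradecheck


class Checks(upgradecheck.UpgradeCommands):
    """Placeholder until the project has real upgrade impacts to check."""

    def _check_placeholder(self):
        # Always succeeds; replace with real checks as they are written.
        return upgradecheck.Result(upgradecheck.Code.SUCCESS)

    _upgrade_checks = (('placeholder', _check_placeholder),)


def main():
    return upgradecheck.main(cfg.CONF, project='myproject',
                             upgrade_command=Checks())


if __name__ == '__main__':
    sys.exit(main())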

-gmann 







Re: [openstack-dev] [Openstack-operators] [goals][upgrade-checkers] Week R-26 Update

2018-10-16 Thread Ghanshyam Mann
  On Sat, 13 Oct 2018 07:05:53 +0900 Matt Riedemann wrote:
 > The big update this week is version 0.1.0 of oslo.upgradecheck was 
 > released. The documentation along with usage examples can be found here 
 > [1]. A big thanks to Ben Nemec for getting that done since a few 
 > projects were waiting for it.
 > 
 > In other updates, some changes were proposed in other projects [2].
 > 
 > And finally, Lance Bragstad and I had a discussion this week [3] about 
 > the validity of upgrade checks looking for deleted configuration 
 > options. The main scenario I'm thinking about here is FFU where someone 
 > is going from Mitaka to Pike. Let's say a config option was deprecated 
 > in Newton and then removed in Ocata. As the operator is rolling through 
 > from Mitaka to Pike, they might have missed the deprecation signal in 
 > Newton and removal in Ocata. Does that mean we should have upgrade 
 > checks that look at the configuration for deleted options, or options 
 > where the deprecated alias is removed? My thought is that if things will 
 > not work once they get to the target release and restart the service 
 > code, which would definitely impact the upgrade, then checking for those 
 > scenarios is probably OK. If on the other hand the removed options were 
 > just tied to functionality that was removed and are otherwise not 
 > causing any harm then I don't think we need a check for that. It was 
 > noted that oslo.config has a new validation tool [4] so that would take 
 > care of some of this same work if run during upgrades. So I think 
 > whether or not an upgrade check should be looking for config option 
 > removal ultimately depends on the severity of what happens if the manual 
 > intervention to handle that removed option is not performed. That's 
 > pretty broad, but these upgrade checks aren't really set in stone for 
 > what is applied to them. I'd like to get input from others on this, 
 > especially operators and if they would find these types of checks useful.
 > 
 > [1] https://docs.openstack.org/oslo.upgradecheck/latest/
 > [2] https://storyboard.openstack.org/#!/story/2003657
 > [3] 
 > http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-10-10.log.html#t2018-10-10T15:17:17
 > [4] 
 > http://lists.openstack.org/pipermail/openstack-dev/2018-October/135688.html

Another point is about policy changes and how we should accommodate those in 
upgrade-checks.

There are the below categories of policy changes:
1. Policy rule name has been changed. 
    Upgrade Impact: if that policy rule is overridden in policy.json then yes, 
we need to flag this in the upgrade-check CLI. If it is not overridden, which means 
operators depend on the policy in code, then it would not impact their upgrade. 
2. Policy rule (deprecated) has been removed.
    Upgrade Impact: YES, as it can impact their API access after upgrade. This 
needs to be covered in upgrade-checks.
3. Default value (including scope) of a policy rule has been changed.
    Upgrade Impact: YES, this can change the access level of their API after 
upgrade. This needs to be covered in upgrade-checks.
4. New policy rule introduced.
    Upgrade Impact: YES, for the same reason. 

I think policy changes can be added to the upgrade checker by checking all of the 
above categories, because everything will impact the upgrade? 

For Example, cinder policy change [1]:

"Add granularity to the volume_extension:volume_type_encryption policy with the 
addition of distinct actions for create, get, update, and delete:

volume_extension:volume_type_encryption:create
volume_extension:volume_type_encryption:get
volume_extension:volume_type_encryption:update
volume_extension:volume_type_encryption:delete
To address backwards compatibility, the new rules added to the volume_type.py 
policy file, default to the existing rule, 
volume_extension:volume_type_encryption, if it is set to a non-default value. "

[1] https://docs.openstack.org/releasenotes/cinder/unreleased.html#upgrade-notes
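
To make category 1 concrete, here is a hedged sketch of what such a check might 
look like with oslo.upgradecheck; the rule mapping mirrors the cinder release note 
above, while the policy file path and wiring are illustrative assumptions, not the 
goal's official code:

# Sketch: warn when an overridden policy.json still uses a rule name
# that was renamed upstream. The mapping and file path are examples.
import json
import os

from oslo_upgradecheck import upgradecheck

RENAMED_RULES = {
    # old name -> one of its new granular names (from the note above)
    'volume_extension:volume_type_encryption':
        'volume_extension:volume_type_encryption:create',
}


class PolicyChecks(upgradecheck.UpgradeCommands):

    def _check_renamed_policies(self):
        path = '/etc/cinder/policy.json'  # assumed override location
        if not os.path.exists(path):
            return upgradecheck.Result(upgradecheck.Code.SUCCESS)
        with open(path) as f:
            overrides = json.load(f)
        stale = sorted(set(RENAMED_RULES) & set(overrides))
        if stale:
            return upgradecheck.Result(
                upgradecheck.Code.WARNING,
                'Overridden policies use old names: %s' % ', '.join(stale))
        return upgradecheck.Result(upgradecheck.Code.SUCCESS)

    _upgrade_checks = (('Renamed policies', _check_renamed_policies),)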

-gmann

 > 
 > -- 
 > 
 > Thanks,
 > 
 > Matt
 > 





Re: [openstack-dev] [tc][all] meetings outside IRC

2018-10-14 Thread Ghanshyam Mann



  On Sun, 14 Oct 2018 06:29:46 +0900 Mohammed Naser wrote:
 > Hi everyone:
 > 
 > I was going over our governance documents, more specifically this section:
 > 
 > "All project meetings are held in public IRC channels and recorded."
 > 
 > Does this mean that all official projects are *required* to hold their
 > project meetings over IRC?  Is this a hard requirement or is this
 > something that we're a bit more 'lax about?  Do members of the
 > community feel like it would be easier to hold their meetings if we
 > allowed other avenues (assuming this isn't allowed?)
 > 
 > Looking forward to hearing everyone's comments.

Personally I feel IRC is a good option, which is more comfortable for non-English 
speakers than video/audio calls. But that is for the official meeting, not other 
ad-hoc/technical discussions, which can be done on any channel. 

But if a team or all attendees of a meeting are more comfortable with another 
communication channel, we should have flexibility for them. For example, if a PTL 
discussed it in their team and everyone decided to use Hangouts for the meeting, 
then we should not restrict them. As long as the meeting logs (chat/audio/video) 
are linked in eavesdrop, we should be good.

-gmann

 > 
 > Thanks
 > Mohammed
 > 





Re: [openstack-dev] [tc][all] Discussing goals (upgrades) with community @ office hours

2018-10-14 Thread Ghanshyam Mann
  On Sat, 13 Oct 2018 22:04:16 +0900 Mohammed Naser wrote:
 > Hi everyone!
 > 
 > It looks like we're not going to be able to have a TC meeting every 2
 > weeks as I had hoped for, the majority of the TC seems to want to meet
 > once every month.  However, I wanted to ask if the community would be
 > interested in taking one of the upcoming office hours to discuss the
 > current community goals, more specifically upgrades.
 > 
 > It's been brought to my attention by some community members that they
 > feel like we've been deciding goals too early without having enough
 > maturity in terms of implementation.  In addition, it seems like the
 > final implementation way is not fully baked in by the time we create
 > the goal.  This was brought up in the WSGI goal last time and it looks
 > like there is some oddities at the moment with "do we implement our
 > own stuff?" "do we use the new oslo library?" "is the library even
 > ready?"
 > 
 > I wanted to propose one of the upcoming office hours to perhaps invite
 > some of the community members (PTL, developers, anyone!) as well as
 > the TC with goal champions to perhaps discuss some of these goals to
 > help everyone get a clear view on what's going on.
 > 
 > Does this seem like it would be of interest to the community?  I am
 > currently trying to transform our office hours to be more of a space
 > where we have more of the community and less of discussion between us.

Thanks naser, this is a good idea. The office hour is a perfect time to have more 
technical and help-needed discussions for the set goals or cross-project work. 
Which office hour (Tue, Wed, Thu) will we use for this discussion? 

 > 
 > Regards,
 > Mohammed
 > 





Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-10-13 Thread Ghanshyam Mann
  On Sat, 13 Oct 2018 01:45:17 +0900 Lance Bragstad wrote:
 > Sending a follow up here quick.
 > The reviewers actively participating in [0] are nearing a conclusion. 
 > Ultimately, the convention is going to be:
 >   
 > <service-type>:[<component>:][<resource>:]<action>[:<subaction>]
 > Details about what that actually means can be found in the review [0]. Each 
 > piece is denoted as being required or optional, along with examples. I think 
 > this gives us a pretty good starting place, and the syntax is flexible 
 > enough to support almost every policy naming convention we've stumbled 
 > across.
 > Now is the time if you have any final input or feedback. Thanks for sticking 
 > with the discussion.

Thanks Lance for working on this. The current version LGTM. I would like to see 
some operator feedback also, on whether this standard policy name format is clear 
and easily understandable. 

-gmann

 > Lance
 > [0] https://review.openstack.org/#/c/606214/
 > 
 > On Mon, Oct 8, 2018 at 8:49 AM Lance Bragstad  wrote:
 > 
 > On Mon, Oct 1, 2018 at 8:13 AM Ghanshyam Mann  
 > wrote:
 >   On Sat, 29 Sep 2018 03:54:01 +0900 Lance Bragstad 
 >  wrote  
 >   > 
 >   > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki  
 > wrote:
 >   > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg
 >   >   wrote:
 >   >  >
 >   >  > Ideally I would like to see it in the form of least specific to most 
 > specific. But more importantly in a way that there is no additional 
 > delimiters between the service type and the resource. Finally, I do not like 
 > the change of plurality depending on action type.
 >   >  >
 >   >  > I propose we consider
 >   >  >
 >   >  > <service-type>:<resource>:<action>[:<subaction>]
 >   >  >
 >   >  > Example for keystone (note, action names below are strictly examples 
 > I am fine with whatever form those actions take):
 >   >  > identity:projects:create
 >   >  > identity:projects:delete
 >   >  > identity:projects:list
 >   >  > identity:projects:get
 >   >  >
 >   >  > It keeps things simple and consistent when you're looking through 
 > overrides / defaults.
 >   >  > --Morgan
 >   >  +1 -- I think the ordering if `resource` comes before
 >   >  `action|subaction` will be more clean.
 >   > 
 >   > ++
 >   > These are excellent points. I especially like being able to omit the 
 > convention about plurality. Furthermore, I'd like to add that I think we 
 > should make the resource singular (e.g., project instead or projects). For 
 > example:
 >   > compute:server:list
 >   > compute:server:update
 >   > compute:server:create
 >   > compute:server:delete
 >   > compute:server:action:reboot
 >   > compute:server:action:confirm_resize (or confirm-resize)
 >  
 >  Do we need the "action" word there? I think the action name itself should convey 
 > the operation. IMO the below notation without the "action" word looks clear enough. 
 > What do you say?
 >  
 >  compute:server:reboot
 >  compute:server:confirm_resize
 > 
 > I agree. I simplified this in the current version up for review.  
 >  -gmann
 >  
 >   > 
 >   > Otherwise, someone might mistake compute:servers:get as "list". This is 
 > ultra-nit-picky, but something I thought of when seeing the usage of 
 > "get_all" in policy names in favor of "list."
 >   > In summary, the new convention based on the most recent feedback should 
 > be:
 >   > <service-type>:<resource>:<action>[:<subaction>]
 >   > Rules: service-type is always defined in the service types authority;
 >   > resources are always singular
 >   > Thanks to all for sticking through this tedious discussion. I appreciate 
 > it.  
 >   >  /R
 >   >  
 >   >  Harry
 >   >  >
 >   >  > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad  
 > wrote:
 >   >  >>
 >   >  >> Bumping this thread again and proposing two conventions based on the 
 > discussion here. I propose we decide on one of the two following conventions:
 >   >  >>
 >   >  >> <service-type>:<action>:<resource>
 >   >  >>
 >   >  >> or
 >   >  >>
 >   >  >> <service-type>:<action>_<resource>
 >   >  >>
 >   >  >> Where <service-type> is the corresponding service type of the 
 > project [0], and <action> is either create, get, list, update, or delete. I 
 > think decoupling the method from the policy name should aid in consistency, 
 > regardless of the underlying implementation. The HTTP method specifics can 
 > still be relayed using oslo.policy's DocumentedRuleDefault object [1].
 >   >  >>
 >   >  >> I think the plurality of the resource should default to what makes 
 > sense for the operation being carried out (e.g., list:foobars, 
 > create:f
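
To make the naming discussion above concrete, here is a small sketch using 
oslo.policy's DocumentedRuleDefault (mentioned earlier in the thread); the rule 
name follows the proposed <service-type>:<resource>:<action> form, and the check 
string and operation are illustrative assumptions:

# Illustrative only: one policy rule named per the proposed convention,
# with the HTTP method carried by DocumentedRuleDefault, not the name.
from oslo_policy import policy

server_reboot = policy.DocumentedRuleDefault(
    name='compute:server:reboot',
    check_str='rule:admin_or_owner',  # example check string
    description='Reboot a server.',
    operations=[{'path': '/servers/{server_id}/action',
                 'method': 'POST'}],
)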

[openstack-dev] [nova] API updates week 18-41

2018-10-11 Thread Ghanshyam Mann
Hi All, 

Please find the Nova API highlights of this week. 

Weekly Office Hour: 
=== 

What we discussed this week: 
- Discussed the API cleanup spec. 

- Discussed the API extensions work and pending things on it. Proposed all 
the pending items for this BP. 

- Discussed 2 new bugs which need more logs for further debugging; added bug 
comments. 

Planned Features : 
== 
Below are the API related features for Stein. Ref - 
https://etherpad.openstack.org/p/stein-nova-subteam-tracking (feel free to add 
API items there if you are working on or have found any). NOTE: the sequence order is 
not the priority; they are listed as per their start date.  

1. API Extensions merge work 
- https://blueprints.launchpad.net/nova/+spec/api-extensions-merge-stein 
- 
https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/api-extensions-merge-stein+status:open
- Weekly Progress: pushed all the remaining patches. This is in a runway also.

2. Handling a down cell 
- https://blueprints.launchpad.net/nova/+spec/handling-down-cell 
- 
https://review.openstack.org/#/q/topic:bp/handling-down-cell+(status:open+OR+status:merged)
 
- Weekly Progress: no progress. Needs to be opened for Stein. 

3. Servers Ips non-unique network names : 
- 
https://blueprints.launchpad.net/nova/+spec/servers-ips-non-unique-network-names
 
- Spec Merged 
- 
https://review.openstack.org/#/q/topic:bp/servers-ips-non-unique-network-names+(status:open+OR+status:merged)
 
- Weekly Progress: no progress. I will push the code after the API extensions work 
is merged. 

4. Volume multiattach enhancements: 
- https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements 
- 
https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged)
 
- Weekly Progress: No progress. 

5. Boot instance specific storage backend
- 
https://blueprints.launchpad.net/nova/+spec/boot-instance-specific-storage-backend
 - 
https://review.openstack.org/#/q/topic:bp/boot-instance-specific-storage-backend+(status:open+OR+status:merged)
- Weekly Progress: the code is up and it is in a runway. I am adding this to my 
review list for tomorrow. 

6. Add API ref guideline for body text (takashin)
 - https://review.openstack.org/#/c/605628/
- Weekly Progress: the patch is up for review. I have reviewed it, suggesting mapping 
it in a more structured way.

Specs: 
7. Detach and attach boot volumes
 - 
https://review.openstack.org/#/q/topic:bp/detach-boot-volume+(status:open+OR+status:merged)
- Weekly Progress: under review. Kevin has updated the spec with review comment 
fixes. 

8. Nova API policy updates
https://blueprints.launchpad.net/nova/+spec/granular-api-policy 
Spec: https://review.openstack.org/#/c/547850/
- Weekly Progress: no progress on this; first concentrating on its dependency 
on 'consistent policy names' - https://review.openstack.org/#/c/606214/

9. Nova API cleanup
https://blueprints.launchpad.net/nova/+spec/api-consistency-cleanup 
Spec: https://review.openstack.org/#/c/603969/ 
- Weekly Progress: no progress on this. I am thinking of keeping it open until the T 
cycle so that we keep adding more and more API cleanups to it, and then discuss 
which of them we can fix or not. This way we can avoid re-iterating API cleanup 
fixes. Obviously we cannot find all the API cleanups by T, but it is good 
to cover most of them together. Thoughts? 

10. Support deleting data volume when destroy instance(Brin Zhang)
- https://review.openstack.org/#/c/580336/
- Weekly Progress: No Progress. 

Bugs: 
 
This week Bug Progress: 
https://etherpad.openstack.org/p/nova-api-weekly-bug-report 

Critical: 0->0 
High importance: 1->2 
By Status: 
New: 1->4
Confirmed/Triage: 30-> 32 
In-progress: 31->35 
Incomplete: 3->3
= 
Total: 65->74 

NOTE: there might be some bugs which are not tagged as 'api' or 'api-ref'; those 
are not in the above list. Tag such bugs so that we can keep an eye on them. 

-gmann








Re: [openstack-dev] [tc] assigning new liaisons to projects

2018-10-08 Thread Ghanshyam Mann



  On Mon, 08 Oct 2018 23:27:06 +0900 Doug Hellmann wrote:
 > TC members,
 > 
 > Since we are starting a new term, and have several new members, we need
 > to decide how we want to rotate the liaisons attached to each our
 > project teams, SIGs, and working groups [1].
 > 
 > Last term we went through a period of volunteer sign-up and then I
 > randomly assigned folks to slots to fill out the roster evenly. During
 > the retrospective we talked a bit about how to ensure we had an
 > objective perspective for each team by not having PTLs sign up for their
 > own teams, but I don't think we settled on that as a hard rule.
 > 
 > I think the easiest and fairest (to new members) way to manage the list
 > will be to wipe it and follow the same process we did last time. If you
 > agree, I will update the page this week and we can start collecting
 > volunteers over the next week or so.

+1, sounds good to me.

-gmann

 > 
 > Doug
 > 
 > [1] https://wiki.openstack.org/wiki/OpenStack_health_tracker
 > 





Re: [openstack-dev] [cinder][qa] Enabling online volume_extend tests by default

2018-10-07 Thread Ghanshyam Mann
  On Sat, 06 Oct 2018 01:42:11 +0900 Erlon Cruz wrote:
 > Hey folks,
 > Following up on the discussions that we had at the Denver PTG, the Cinder 
 > team is planning to enable online volume_extend tests [1] to be run by 
 > default. Currently, those tests are only run by some CI systems and infra 
 > jobs that explicitly set it to be so.
 > We are also adding a negative test and an associated option in tempest [2] 
 > to allow vendor drivers that do not support online extending to be tested. 
 > This patch will be merged first, and after a reasonable time for people to 
 > check whether their backends support that or not, we will proceed and merge 
 > the devstack patch [1], triggering the tests in all CIs and infra jobs.

Thanks Erlon. +1 on running those tests in the gate.  

Though I have a concern over running those tests by default (making the config 
option True by default), because it is not confirmed that all Cinder backends 
implement this functionality, and it only works for the Nova libvirt driver. We 
need to keep the config option's default as False, and Devstack/CI can make it 
True to run the tests. 
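
A minimal sketch of how a Tempest test typically guards on such a feature flag; 
the option name here follows Tempest's volume-feature-enabled pattern and is an 
assumption (the real option is whatever the patches above introduce):

# Sketch: skip an online-extend test unless the deployment opts in
# via a feature flag; the option name is illustrative.
from tempest.api.volume import base
from tempest import config

CONF = config.CONF


class VolumeExtendOnlineTest(base.BaseVolumeTest):

    @classmethod
    def skip_checks(cls):
        super(VolumeExtendOnlineTest, cls).skip_checks()
        if not CONF.volume_feature_enabled.extend_attached_volume:
            raise cls.skipException('Online volume extend not supported.')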

If this feature becomes mandatory functionality (or a standard feature, in Cinder 
terms, I think) to implement for every backend, and it works with all Nova drivers 
also (in terms of instance action events), then we can enable these feature tests 
by default. But until then, we should keep them disabled by default in Tempest; we 
can still enable them in the gate via Devstack (the patch you mentioned) and test 
them daily on the integrated gate. 

Overall, I am OK with the Devstack change to make these tests enabled for every 
Cinder backend, but we need to keep the config option False in Tempest. 

I will review those patches and leave comments on Gerrit (I saw those patches 
introduce a new config option rather than using the existing one).

-gmann

 > Please let us know if you have any questions or concerns about it.
 > Kind regards,
 > Erlon
 > [1] https://review.openstack.org/#/c/572188/
 > [2] https://review.openstack.org/#/c/578463/





Re: [openstack-dev] [tc] bringing back formal TC meetings

2018-10-07 Thread Ghanshyam Mann
  On Fri, 05 Oct 2018 22:16:36 +0900 Julia Kreger wrote:
 > +1 to bringing back formal meetings. A few replies below regarding 
 > time/agenda.
 > 
 > On Fri, Oct 5, 2018 at 5:38 AM Doug Hellmann  wrote:
 > Thierry Carrez  writes:
 >  
 >  > Ghanshyam Mann wrote:
 >  >>    On Fri, 05 Oct 2018 02:47:53 +0900 Jeremy Stanley 
 >  wrote 
 >  >>   > On 2018-10-04 13:40:05 -0400 (-0400), Doug Hellmann wrote:
 >  >>   > [...]
 >  >>   > > TC members, please reply to this thread and indicate if you would
 >  >>   > > find meeting at 1300 UTC on the first Thursday of every month
 >  >>   > > acceptable, and of course include any other comments you might
 >  >>   > > have (including alternate times).
 >  >>   >
 >  >>   > This time is acceptable to me. As long as we ensure that community
 >  >>   > feedback continues more frequently in IRC and on the ML (for example
 >  >>   > by making it clear that this meeting is expressly *not* for that)
 >  >>   > then I'm fine with resuming formal meetings.
 >  >> 
 >  >> +1. Time works fine for me, Thanks for considering the APAC TZ.
 >  >> 
 >  >> I agree that we should keep encouraging the  usual discussion in 
 > existing office hours, IRC or ML. I will be definitely able to attend other 
 > 2 office hours (Tuesday  and Wednesday) which are suitable for my TZ.
 >  >
 >  > 1300 UTC is obviously good for me, but once we are off DST that will 
 >  > mean 5am for our Pacific Time people (do we have any left ?).
 >  >
 >  > Maybe 1400 UTC would be a better trade-off?
 >  
 >  Julia is out west, but I think not all the way to PST.
 > 
 > My home time zone is PST. It would be awesome if we could hold the meeting 
 > an hour later, but I can get up early in the morning once a month. If we 
 > choose to meet more regularly, then a one hour later start would be more 
 > appreciated if it is not too much of an inconvenience to APAC TC members. 
 > That being said, I do typically get up early, just not 0500 early that 
 > often.  

One hour later (1400 UTC) also works for me. 

-gmann

 >  > Regarding frequency, I agree with mnaser that once per month might be 
 >  > too rare. That means only 5-ish meetings for a given a 6-month 
 >  > membership. But that can work if we use the meeting as a formal progress 
 >  > status checkpoint, rather than a way to discuss complex topics.
 >  
 >  I think we can definitely manage the agenda to minimize the number of
 >  complex discussions. If that proves to be too hard, I wouldn't mind
 >  meeting more often, but there does seem to be a lot of support for
 >  preferring other venues for those conversations.
 > 
 > 
 > +1 I think there is a point where we need to recognize there is a time and 
 > place for everything, and some of those long running complex conversations 
 > might not be well suited for what would essentially be "review business 
 > status" meetings.  If we have any clue that something is going to be a very 
 > long and drawn out discussion, then I feel like we should make an effort to 
 > schedule individually. 
 > __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] bringing back formal TC meetings

2018-10-05 Thread Ghanshyam Mann



  On Fri, 05 Oct 2018 02:47:53 +0900 Jeremy Stanley  
wrote  
 > On 2018-10-04 13:40:05 -0400 (-0400), Doug Hellmann wrote: 
 > [...] 
 > > TC members, please reply to this thread and indicate if you would 
 > > find meeting at 1300 UTC on the first Thursday of every month 
 > > acceptable, and of course include any other comments you might 
 > > have (including alternate times). 
 >  
 > This time is acceptable to me. As long as we ensure that community 
 > feedback continues more frequently in IRC and on the ML (for example 
 > by making it clear that this meeting is expressly *not* for that) 
 > then I'm fine with resuming formal meetings. 

+1. The time works fine for me. Thanks for considering the APAC TZ.

I agree that we should keep encouraging the usual discussion in the existing 
office hours, on IRC, or on the ML. I will definitely be able to attend the 
other two office hours (Tuesday and Wednesday), which suit my TZ. 

-gmann

 > --  
 > Jeremy Stanley 
 > __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder][qa] Should we enable multiattach in tempest-full?

2018-10-01 Thread Ghanshyam Mann
  On Mon, 01 Oct 2018 21:22:46 +0900 Erlon Cruz  wrote 
 
 > 
 > 
 > On Mon, 1 Oct 2018 at 05:26, Balázs Gibizer 
 >  wrote:
 > 
 >  
 >  On Sat, Sep 29, 2018 at 10:35 PM, Matt Riedemann  
 >  wrote:
 >  > Nova, cinder and tempest run the nova-multiattach job in their check 
 >  > and gate queues. The job was added in Queens and was a specific job 
 >  > because we had to change the ubuntu cloud archive we used in Queens 
 >  > to get multiattach working. Since Rocky, devstack defaults to a 
 >  > version of the UCA that works for multiattach, so there isn't really 
 >  > anything preventing us from running the tempest multiattach tests in 
 >  > the integrated gate. The job tries to be as minimal as possible by 
 >  > only running tempest.api.compute.* tests, but it still means spinning 
 >  > up a new node and devstack for testing.
 >  > 
 >  > Given the state of the gate recently, I'm thinking it would be good 
 >  > if we dropped the nova-multiattach job in Stein and just enable the 
 >  > multiattach tests in one of the other integrated gate jobs.
 >  
 >  +1
 >  
 >  > I initially was just going to enable it in the nova-next job, but we 
 >  > don't run that on cinder or tempest changes. I'm not sure if 
 >  > tempest-full is a good place for this though since that job already 
 >  > runs a lot of tests and has been timing out a lot lately [1][2].
 >  > 
 >  > The tempest-slow job is another option, but cinder doesn't currently 
 >  > run that job (it probably should since it runs volume-related tests, 
 >  > including the only tempest tests that use encrypted volumes).
 >  
 >  If the multiattach test qualifies as a slow test then I'm in favor of 
 >  adding it to the tempest-slow and not lengthening the tempest-full 
 >  further.
 >  
 > +1 On having this on tempest-slow and add this to Cinder, provided that it 
 > would also cover encryption .

+1 on adding multiattach to an integrated job. It is always good to cover more 
features in the integrated gate instead of in separate jobs. These tests do 
not take much time, so it should be OK to add them to tempest-full [1]. We 
should mark only genuinely slow tests as 'slow'; otherwise they should be fine 
to run in tempest-full.

I thought adding tempest-slow to Cinder had already merged, but it has not [2].

[1]  
http://logs.openstack.org/80/606880/2/check/nova-multiattach/7f8681e/job-output.txt.gz#_2018-10-01_10_12_55_482653
[2] https://review.openstack.org/#/c/591354/2
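
As a rough sketch (assuming devstack's ENABLE_VOLUME_MULTIATTACH toggle keeps 
working the way the existing nova-multiattach job uses it), enabling these 
tests in an existing integrated job would just be a job variable:

  - job:
      name: tempest-full
      vars:
        devstack_localrc:
          ENABLE_VOLUME_MULTIATTACH: true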

-gmann

 >   gibi
 >  
 >  > 
 >  > Are there other ideas/options for enabling multiattach in another job 
 >  > that nova/cinder/tempest already use so we can drop the now mostly 
 >  > redundant nova-multiattach job?
 >  > 
 >  > [1] http://status.openstack.org/elastic-recheck/#1686542
 >  > [2] http://status.openstack.org/elastic-recheck/#1783405
 >  > 
 >  > --
 >  > 
 >  > Thanks,
 >  > 
 >  > Matt
 >  > 
 >  > __
 >  > OpenStack Development Mailing List (not for usage questions)
 >  > Unsubscribe: 
 >  > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 >  > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 >  
 >  
 >  __
 >  OpenStack Development Mailing List (not for usage questions)
 >  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 >  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 >   __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-10-01 Thread Ghanshyam Mann
  On Sat, 29 Sep 2018 07:23:30 +0900 Lance Bragstad  
wrote  
 > Alright - I've worked up the majority of what we have in this thread and 
 > proposed a documentation patch for oslo.policy [0].
 > I think we're at the point where we can finish the rest of this discussion 
 > in gerrit if folks are ok with that.
 > [0] https://review.openstack.org/#/c/606214/

+1, thanks for that. Let's continue the discussion there.

-gmann

 > On Fri, Sep 28, 2018 at 3:33 PM Sean McGinnis  wrote:
 > On Fri, Sep 28, 2018 at 01:54:01PM -0500, Lance Bragstad wrote:
 >  > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki  wrote:
 >  > 
 >  > > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg
 >  > >  wrote:
 >  > > >
 >  > > > Ideally I would like to see it in the form of least specific to most
 >  > > specific. But more importantly in a way that there is no additional
 >  > > delimiters between the service type and the resource. Finally, I do not
 >  > > like the change of plurality depending on action type.
 >  > > >
 >  > > > I propose we consider
 >  > > >
 >  > > > <service-type>:<resource>:<action>[:<sub-action>]
 >  > > >
 >  > > > Example for keystone (note, action names below are strictly examples I
 >  > > am fine with whatever form those actions take):
 >  > > > identity:projects:create
 >  > > > identity:projects:delete
 >  > > > identity:projects:list
 >  > > > identity:projects:get
 >  > > >
 >  > > > It keeps things simple and consistent when you're looking through
 >  > > overrides / defaults.
 >  > > > --Morgan
 >  > > +1 -- I think the ordering if `resource` comes before
 >  > > `action|subaction` will be more clean.
 >  > >
 >  > 
 >  
 >  Great idea. This is looking better and better.
 >   __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-10-01 Thread Ghanshyam Mann
  On Sat, 29 Sep 2018 03:54:01 +0900 Lance Bragstad  
wrote  
 > 
 > On Fri, Sep 28, 2018 at 1:03 PM Harry Rybacki  wrote:
 > On Fri, Sep 28, 2018 at 1:57 PM Morgan Fainberg
 >   wrote:
 >  >
 >  > Ideally I would like to see it in the form of least specific to most 
 > specific. But more importantly in a way that there is no additional 
 > delimiters between the service type and the resource. Finally, I do not like 
 > the change of plurality depending on action type.
 >  >
 >  > I propose we consider
 >  >
 >  > <service-type>:<resource>:<action>[:<sub-action>]
 >  >
 >  > Example for keystone (note, action names below are strictly examples I am 
 > fine with whatever form those actions take):
 >  > identity:projects:create
 >  > identity:projects:delete
 >  > identity:projects:list
 >  > identity:projects:get
 >  >
 >  > It keeps things simple and consistent when you're looking through 
 > overrides / defaults.
 >  > --Morgan
 >  +1 -- I think the ordering if `resource` comes before
 >  `action|subaction` will be more clean.
 > 
 > ++
 > These are excellent points. I especially like being able to omit the 
 > convention about plurality. Furthermore, I'd like to add that I think we 
 > should make the resource singular (e.g., project instead or projects). For 
 > example:
 > compute:server:list
 > compute:server:update
 > compute:server:create
 > compute:server:delete
 > compute:server:action:reboot
 > compute:server:action:confirm_resize (or confirm-resize)

Do we need the "action" word there? I think the action name itself should 
convey the operation. IMO the notation below, without the "action" word, looks 
clear enough. What do you say?

compute:server:reboot
compute:server:confirm_resize
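
As a quick sketch of how that could look as policy-in-code with oslo.policy 
(the names and check string here are just examples, not an actual Nova 
change):

  from oslo_policy import policy

  server_policies = [
      policy.DocumentedRuleDefault(
          name='compute:server:reboot',
          check_str='rule:admin_or_owner',
          description='Reboot a server.',
          operations=[{'method': 'POST',
                       'path': '/servers/{server_id}/action (reboot)'}]),
  ]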

-gmann

 > 
 > Otherwise, someone might mistake compute:servers:get, as "list". This is 
 > ultra-nit-picky, but something I thought of when seeing the usage of 
 > "get_all" in policy names in favor of "list."
 > In summary, the new convention based on the most recent feedback should be:
 > <service-type>:<resource>:<action>[:<sub-action>]
 > Rules: service-type is always defined in the service types authority;
 > resources are always singular
 > Thanks to all for sticking through this tedious discussion. I appreciate it. 
 >  
 >  /R
 >  
 >  Harry
 >  >
 >  > On Fri, Sep 28, 2018 at 6:49 AM Lance Bragstad  
 > wrote:
 >  >>
 >  >> Bumping this thread again and proposing two conventions based on the 
 > discussion here. I propose we decide on one of the two following conventions:
 >  >>
 >  >> <service-type>:<action>:<resource>
 >  >>
 >  >> or
 >  >>
 >  >> <service-type>:<action>_<resource>
 >  >>
 >  >> Where <service-type> is the corresponding service type of the project 
 > [0], and <action> is either create, get, list, update, or delete. I think 
 > decoupling the method from the policy name should aid in consistency, 
 > regardless of the underlying implementation. The HTTP method specifics can 
 > still be relayed using oslo.policy's DocumentedRuleDefault object [1].
 >  >>
 >  >> I think the plurality of the resource should default to what makes sense 
 > for the operation being carried out (e.g., list:foobars, create:foobar).
 >  >>
 >  >> I don't mind the first one because it's clear about what the delimiter 
 > is and it doesn't look weird when projects have something like:
 >  >>
 >  >> <service-type>:<action>:<resource>:<sub-resource>
 >  >>
 >  >> If folks are ok with this, I can start working on some documentation 
 > that explains the motivation for this. Afterward, we can figure out how we 
 > want to track this work.
 >  >>
 >  >> What color do you want the shed to be?
 >  >>
 >  >> [0] https://service-types.openstack.org/service-types.json
 >  >> [1] 
 > https://docs.openstack.org/oslo.policy/latest/reference/api/oslo_policy.policy.html#default-rule
 >  >>
 >  >> On Fri, Sep 21, 2018 at 9:13 AM Lance Bragstad  
 > wrote:
 >  >>>
 >  >>>
 >  >>> On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann 
 >  wrote:
 >  >>>>
 >  >>>>   On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt 
 >  wrote 
 >  >>>>  > tl;dr: +1 consistent names
 >  >>>>  > I would make the names mirror the API... because the Operator 
 > setting them knows the API, not the code. Ignore the crazy names in Nova, I 
 > certainly hate them
 >  >>>>
 >  >>>> Big +1 on consistent naming  which will help operator as well as 
 > developer to maintain those.
 >  >>>>
 >  >>>>  >
 >  >>>>  > Lance Bragstad  wrote:
 >  >>>>  > > I'm curious if anyone has context on

Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-10-01 Thread Ghanshyam Mann
 On Fri, 21 Sep 2018 23:13:02 +0900 Lance Bragstad  
wrote  
 > 
 > On Fri, Sep 21, 2018 at 2:10 AM Ghanshyam Mann  
 > wrote:
 >   On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt  
 > wrote  
 >   > tl;dr: +1 consistent names
 >   > I would make the names mirror the API... because the Operator setting 
 > them knows the API, not the code. Ignore the crazy names in Nova, I 
 > certainly hate them
 > 
 >  Big +1 on consistent naming  which will help operator as well as developer 
 > to maintain those. 
 > 
 >   > 
 >   > Lance Bragstad  wrote:
 >   > > I'm curious if anyone has context on the "os-" part of the format?
 >   > 
 >   > My memory of the Nova policy mess...
 >   > * Nova's policy rules traditionally followed the patterns of the code
 >   > ** Yes, horrible, but it happened.
 >   > * The code used to have the OpenStack API and the EC2 API, hence the "os"
 >   > * API used to expand with extensions, so the policy name is often based 
 >   >   on extensions
 >   > ** note most of the extension code has now gone, including lots of 
 >   >   related policies
 >   > * Policy in code was focused on getting us to a place where we could 
 >   >   rename policy
 >   > ** Whoop whoop by the way, it feels like we are really close to 
 >   >   something sensible now!
 >   > Lance Bragstad  wrote:
 >   > Thoughts on using create, list, update, and delete as opposed to post, 
 > get, put, patch, and delete in the naming convention?
 >   > I could go either way as I think about "list servers" in the API. But my 
 > preference is for the URL stub and POST, GET, etc.
 >   >  On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad  
 > wrote: If we consider dropping "os", should we entertain dropping "api", 
 > too? Do we have a good reason to keep "api"? I wouldn't be opposed to simple 
 > service types (e.g. "compute" or "loadbalancer").
 >   > +1. The API is known as "compute" in api-ref, so the policy should be 
 > for "compute", etc.
 > 
 >  Agree on mapping the policy name with api-ref as much as possible. Other 
 > than policy name having 'os-', we have 'os-' in resource name also in nova 
 > API url like /os-agents, /os-aggregates etc (almost every resource except 
 > servers , flavors).  As we cannot get rid of those from API url, we need to 
 > keep the same in policy naming too? or we can have policy name like 
 > compute:agents:create/post but that mismatch from api-ref where agents 
 > resource url is os-agents.
 > 
 > Good question. I think this depends on how the service does policy 
 > enforcement.
 > I know we did something like this in keystone, which required policy names 
 > and method names to be the same:
 >   "identity:list_users": "..."
 > Because the initial implementation of policy enforcement used a decorator 
 > like this:
 >   from keystone import controller
 > 
 >   @controller.protected
 >   def list_users(self):
 >       ...
 > Having the policy name the same as the method name made it easier for the 
 > decorator implementation to resolve the policy needed to protect the API 
 > because it just looked at the name of the wrapped method. The advantage was 
 > that it was easy to implement new APIs because you only needed to add a 
 > policy, implement the method, and make sure you decorate the implementation.
 > While this worked, we are moving away from it entirely. The decorator 
 > implementation was ridiculously complicated. Only a handful of keystone 
 > developers understood it. With the addition of system-scope, it would have 
 > only become more convoluted. It also enables a much more copy-paste pattern 
 > (e.g., so long as I wrap my method with this decorator implementation, 
 > things should work right?). Instead, we're calling enforcement within the 
 > controller implementation to ensure things are easier to understand. It 
 > requires developers to be cognizant of how different token types affect the 
 > resources within an API. That said, coupling the policy name to the method 
 > name is no longer a requirement for keystone.
 > Hopefully, that helps explain why we needed them to match. 
 >  Also we have action API (i know from nova not sure from other services) 
 > like POST /servers/{server_id}/action {addSecurityGroup} and their current 
 > policy name is all inconsistent.  few have policy name including their 
 > resource name like "os_compute_api:os-flavor-access:add_tenant_access", few 
 > has 'action' in policy name like 
 > "os_compute_api:os-admin-actions:reset_state" and few has direct action name 
 > like "os_compute_api:os-console-output"
 > 
 > Since the actions API relies on the request body and uses a si

Re: [openstack-dev] [qa] [infra] [placement] tempest plugins virtualenv

2018-10-01 Thread Ghanshyam Mann



  On Fri, 28 Sep 2018 23:10:06 +0900 Matthew Treinish 
 wrote  
 > On Fri, Sep 28, 2018 at 02:39:24PM +0100, Chris Dent wrote: 
 > >  
 > > I'm still trying to figure out how to properly create a "modern" (as 
 > > in zuul v3 oriented) integration test for placement using gabbi and 
 > > tempest. That work is happening at 
 > > https://review.openstack.org/#/c/601614/ 
 > >  
 > > There was lots of progress made after the last message on this 
 > > topic 
 > > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134837.html
 > >  
 > > but I've reached another interesting impasse. 
 > >  
 > > From devstack's standpoint, the way to say "I want to use a tempest 
 > > plugin" is to set TEMPEST_PLUGINS to alist of where the plugins are. 
 > > devstack:lib/tempest then does a: 
 > >  
 > > tox -evenv-tempest -- pip install -c 
 > > $REQUIREMENTS_DIR/upper-constraints.txt $TEMPEST_PLUGINS 
 > >  
 > > http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_12_58_138163
 > >  
 > >  
 > > I have this part working as expected. 
 > >  
 > > However, 
 > >  
 > > The advice is then to create a new job that has a parent of 
 > > devstack-tempest. That zuul job runs a variety of tox environments, 
 > > depending on the setting of the `tox_envlist` var. If you wish to 
 > > use a `tempest_test_regex` (I do) the preferred tox environment is 
 > > 'all'. 
 > >  
 > > That venv doesn't have the plugin installed, thus no gabbi tests are 
 > > found: 
 > >  
 > > http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_798683
 > >  
 >  
 > Right above this line it shows that the gabbi-tempest plugin is installed in 
 > the venv: 
 >  
 > http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/job-output.txt.gz#_2018-09-28_11_13_25_650661
 >  
 >  
 > at version 0.1.1. It's a bit weird because it's line wrapped in my browser. 
 > The devstack logs also shows the plugin: 
 >  
 > http://logs.openstack.org/14/601614/21/check/placement-tempest-gabbi/f44c185/controller/logs/devstacklog.txt.gz#_2018-09-28_11_13_13_076
 >  
 >  
 > All the tempest tox jobs that run tempest (and the tempest-venv command used 
 > by 
 > devstack) run inside the same tox venv: 
 >  
 > https://github.com/openstack/tempest/blob/master/tox.ini#L52 
 >  
 > My guess is that the plugin isn't returning any tests that match the regex. 
 >  
 > I'm also a bit alarmed that tempest run is returning 0 there when no tests 
 > are 
 > being run. That's definitely a bug because things should fail with no tests 
 > being successfully run. 

Tempest run does fail when no tests are run [1].

.. [1] 
https://github.com/openstack/tempest/blob/807f0dec66689aced05c2bb970f344cbb8a3c6a3/tempest/cmd/run.py#L182
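
For example (an illustrative invocation; the CLI flag is Tempest's existing 
--regex option):

  $ tempest run --regex 'gabbi'
  # exits non-zero when the regex selects no tests at all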

-gmann

 >  
 > -Matt Treinish 
 >  
 > >  
 > > How do I get my plugin installed into the right venv while still 
 > > following the guidelines for good zuul behavior? 
 > >  
 > __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [PTG][QA] QA PTG Stein Summary

2018-09-27 Thread Ghanshyam Mann
Hi All,

Thanks for joining the Stein QA PTG and making it successful. 
Below is the QA PTG summary; the detailed discussion can be found on the main 
PTG etherpad - https://etherpad.openstack.org/p/qa-stein-ptg
We are continuing to assign an 'owner' to each work item so that we have a 
single point of contact to track them. 


1. QA Help Room
---
The QA team was present in the help room on Monday. We were happy to help with 
a few queries about the Octavia multinode job and Kuryr-kubernetes testing. 
Other than that and a few other random queries, there was not much that day. 

2. Rocky Retrospective
-
We discussed the Rocky retrospective first thing on Tuesday. We went through 
(1) what went well and (2) what needs to improve, and gathered some concrete 
action items.

Patrole made good progress in the Rocky cycle on code as well as 
documentation. We were also able to fill almost all of the compute 
microversion gap up to Rocky. 

Action Items:
- Add Tempest CLI documentation and other useful material from the TripleO 
docs to the Tempest docs - chandankumar
- Run all tests in the tempest-full-parallel job and move it to the periodic 
pipeline - afazekas
- Merge the QA office hours: check with Andrea about the 17 UTC office hour 
and, if OK, close it and move the current office hour from 9 UTC to 8 UTC - 
gmann
- Ask chandankumar or manik to volunteer for bug triage - gmann
- Create a list of low-hanging items and publish it for new contributors - gmann

We will track the above action items in our QA office hour to finish them on time.
Owner: gmann
Etherpad link: https://etherpad.openstack.org/p/qa-rocky-retrospective 


3. Stable interfaces from Tempest Plugins
---
We discussed having stable interfaces in Tempest plugins, like Tempest itself 
has, so that other plugins can consume them. Service clients are a good 
example; they are required for cross-project testing. For example, the 
congress tempest plugin needs to use the mistral service clients for 
integration testing of congress+mistral [1]. Similarly, Patrole needs to use 
the neutron tempest plugin service clients (for n/n-1/n-2).  

The idea here is to have a lib or stable interface on the Tempest plugin side, 
like Tempest, so that other plugins can use it. We will start with some 
documentation about the use case and benefits, and then work with the 
neutron-tempest-plugin team to expose their service clients as a stable 
interface. Once that is done, we can suggest the same to other plugins.  

Action Items:
- Write documentation and guidance covering the use case, examples, and 
benefits for plugins. - felipemonteiro
- Start mailing list discussions on making stable the specific plugins that 
are consumed by other plugins - felipemonteiro
- Check with the requirements team about adding Tempest plugins to g-r so 
that they can be added to other plugins' requirements.txt - gmann
Owner: felipemonteiro
Etherpad link: 
https://etherpad.openstack.org/p/stable-interfaces-from-tempest-plugins


4. Tempest Plugins CI to cover stable branches & Plugins release and tagging 
clarification
--
We discussed how other projects or plugins can set up CI to cover stable 
branch testing of their master changes. The solution can be as simple as 
defining the supported stable branches and running them on the master gate 
(the same way Tempest does). The QA team will start guidelines on this. 
The other part we need to cover is release and tagging guidelines. There was a 
lot of confusion about Tempest plugin releases in Rocky; to improve that, the 
QA team will write guidelines and document a clear process. 

Action Items:
- Move/update the documentation on branchless considerations in Tempest to 
somewhere more global so that it covers plugin documentation too - gmann
- Add tagging and release clarification for plugins. 
- Talk with the Neutron team about moving the in-tree tempest plugins of 
stadium projects into neutron-tempest-plugin or separate tempest-plugin 
repositories - slaweq
- Add a config option to disable loading plugins - gmann
Owner: gmann
Etherpad link: 
https://etherpad.openstack.org/p/tempest-plugins-ci-release-tagging-clarification
 


5. Tempest Cleanup Feature 
-
The current Tempest CLI for cleaning up test resources is not great. It cleans 
up resources based on a saved_state.json file, which records the difference in 
resources before and after the Tempest run. This can end up cleaning up 
non-test resources that were created while Tempest was running. 

There is a QA spec proposing different approaches for cleanup [2]. After 
discussing those approaches, we decided to go with resource_prefix. We will 
bring back the resource_prefix approach (which was removed after deprecation) 
and 

Re: [openstack-dev] [Openstack-operators] [all] Consistent policy names

2018-09-21 Thread Ghanshyam Mann
  On Thu, 20 Sep 2018 18:43:00 +0900 John Garbutt  
wrote  
 > tl;dr: +1 consistent names
 > I would make the names mirror the API... because the Operator setting them 
 > knows the API, not the code. Ignore the crazy names in Nova, I certainly 
 > hate them

Big +1 on consistent naming, which will help operators as well as developers 
maintain these. 

 > 
 > Lance Bragstad  wrote:
 > > I'm curious if anyone has context on the "os-" part of the format?
 > 
 > My memory of the Nova policy mess...
 > * Nova's policy rules traditionally followed the patterns of the code
 > ** Yes, horrible, but it happened.
 > * The code used to have the OpenStack API and the EC2 API, hence the "os"
 > * API used to expand with extensions, so the policy name is often based on 
 >   extensions
 > ** note most of the extension code has now gone, including lots of related 
 >   policies
 > * Policy in code was focused on getting us to a place where we could rename 
 >   policy
 > ** Whoop whoop by the way, it feels like we are really close to something 
 >   sensible now!
 > Lance Bragstad  wrote:
 > Thoughts on using create, list, update, and delete as opposed to post, get, 
 > put, patch, and delete in the naming convention?
 > I could go either way as I think about "list servers" in the API. But my 
 > preference is for the URL stub and POST, GET, etc.
 >  On Sun, Sep 16, 2018 at 9:47 PM Lance Bragstad  
 > wrote: If we consider dropping "os", should we entertain dropping "api", 
 > too? Do we have a good reason to keep "api"? I wouldn't be opposed to 
 > simple service types (e.g. "compute" or "loadbalancer").
 > +1. The API is known as "compute" in api-ref, so the policy should be for 
 > "compute", etc.

Agreed on mapping the policy names to the api-ref as much as possible. Besides 
the policy names having 'os-', we also have 'os-' in resource names in the 
Nova API URLs, like /os-agents, /os-aggregates, etc. (almost every resource 
except servers and flavors). As we cannot get rid of those in the API URLs, do 
we need to keep the same in the policy naming too? Or we could have a policy 
name like compute:agents:create, but that mismatches the api-ref, where the 
agents resource URL is os-agents.

Also, we have action APIs (I know Nova's; not sure about other services), like 
POST /servers/{server_id}/action {addSecurityGroup}, and their current policy 
names are all inconsistent. A few include their resource name, like 
"os_compute_api:os-flavor-access:add_tenant_access"; a few have 'action' in 
the policy name, like "os_compute_api:os-admin-actions:reset_state"; and a few 
use the action name directly, like "os_compute_api:os-console-output".

Maybe we can make them consistent with 
<service-type>:<resource>:<action>, or is there a better option? 

 > From: Lance Bragstad > The topic of having consistent 
 > policy names has popped up a few times this week.
 > 
 > I would love to have this nailed down before we go through all the policy 
 > rules again. In my head I hope in Nova we can go through each policy rule 
 > and do the following:
 > * move to new consistent policy name, deprecate existing name* hardcode 
 > scope check to project, system or user** (user, yes... keypairs, yuck, but 
 > its how they work)** deprecate in rule scope checks, which are largely bogus 
 > in Nova anyway* make read/write/admin distinction** therefore adding the 
 > "noop" role, amount other things

+ policy granularity. 

It is a good idea to make the policy improvements all together, and for all 
rules, as you mentioned. But my worry is how much load it will put on the 
operator side to migrate all policy rules at the same time. What will the 
deprecation period be, etc.? I think we can discuss that on the proposed spec - 
https://review.openstack.org/#/c/547850
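
As an illustrative sketch of one low-load migration path (rule names assumed 
for the example): register the old name as a normal rule and let the new name 
default to it, so existing operator overrides keep working during the 
deprecation period:

  from oslo_policy import policy

  rules = [
      # Old name kept so existing operator overrides still take effect:
      policy.RuleDefault('os_compute_api:servers:reboot',
                         'rule:admin_or_owner'),
      # New, consistent name; its default delegates to the old rule:
      policy.RuleDefault('compute:server:reboot',
                         'rule:os_compute_api:servers:reboot'),
  ]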

-gmann

 > Thanks,John 
 > __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-sigs] [all][tc] We're combining the lists! (was: Bringing the community together...)

2018-09-20 Thread Ghanshyam Mann



  On Fri, 21 Sep 2018 06:46:43 +0900 Doug Hellmann  
wrote  
 > Excerpts from Jeremy Stanley's message of 2018-09-20 16:32:49 +:
 > > tl;dr: The openstack, openstack-dev, openstack-sigs and
 > > openstack-operators mailing lists (to which this is being sent) will
 > > be replaced by a new openstack-disc...@lists.openstack.org mailing
 > > list.
 > 
 > Since last week there was some discussion of including the openstack-tc
 > mailing list among these lists to eliminate confusion caused by the fact
 > that the list is not configured to accept messages from all subscribers
 > (it's meant to be used for us to make sure TC members see meeting
 > announcements).
 > 
 > I'm inclined to include it and either use a direct mailing or the
 > [tc] tag on the new discuss list to reach TC members, but I would
 > like to hear feedback from TC members and other interested parties
 > before calling that decision made. Please let me know what you think.

+1 on including openstack-tc as well. That will help TC discussions get more 
attention from the other groups too. 

-gmann

 > 
 > Doug
 > 
 > __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] When can/should we change additionalProperties=False in GET /servers(/detail)?

2018-09-19 Thread Ghanshyam Mann



  On Wed, 19 Sep 2018 02:26:30 +0900 Matt Riedemann  
wrote  
 > On 9/17/2018 9:41 PM, Ghanshyam Mann wrote:
 > >    On Tue, 18 Sep 2018 09:33:30 +0900 Alex Xu  wrote 
 > > 
 > >   > That only means after 599276 we only have servers API and 
 > > os-instance-action API stopped accepting the undefined query parameter.
 > >   > What I'm thinking about is checking all the APIs, add json-query-param 
 > > checking with additionalProperties=True if the API don't have yet. And 
 > > using another microversion set additionalProperties to False, then the 
 > > whole Nova API become consistent.
 > > 
 > > I too vote for doing it for all other API together. Restricting the 
 > > unknown query or request param are very useful for API consistency. Item#1 
 > > in this etherpad https://etherpad.openstack.org/p/nova-api-cleanup
 > > 
 > > If you would like, i can propose a quick spec for that and positive 
 > > response to do all together then we skip to do that in 599276 otherwise do 
 > > it for GET servers in 599276.
 > > 
 > > -gmann
 > 
 > I don't care too much about changing all of the other 
 > additionalProperties=False in a single microversion given we're already 
 > kind of inconsistent with this in a few APIs. Consistency is ideal, but 
 > I thought we'd be lumping in other cleanups from the etherpad into the 
 > same microversion/spec which will likely slow it down during spec 
 > review. For example, I'd really like to get rid of the weird server 
 > response field prefixes like "OS-EXT-SRV-ATTR:". Would we put those into 
 > the same mass cleanup microversion / spec or split them into individual 
 > microversions? I'd prefer not to see an explosion of microversions for 
 > cleaning up oddities in the API, but I could see how doing them all in a 
 > single microversion could be complicated.

Sounds good to me. I also do not feel like bumping the microversion for every 
cleanup. I would like to see all the (worthwhile) cleanups in a single 
microversion. I have pushed a spec for further discussion/debate - 
https://review.openstack.org/#/c/603969/

-gmann
 > -- 
 > 
 > Thanks,
 > 
 > Matt
 > 
 > __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [placement] [infra] [qa] tuning some zuul jobs from "it works" to "proper"

2018-09-19 Thread Ghanshyam Mann
  On Wed, 19 Sep 2018 23:29:46 +0900 Monty Taylor  
wrote  
 > On 09/19/2018 09:23 AM, Monty Taylor wrote:
 > > On 09/19/2018 08:25 AM, Chris Dent wrote:
 > >>
 > >> I have a patch in progress to add some simple integration tests to
 > >> placement:
 > >>
 > >>  https://review.openstack.org/#/c/601614/
 > >>
 > >> They use https://github.com/cdent/gabbi-tempest . The idea is that
 > >> the method for adding more tests is to simply add more yaml in
 > >> gate/gabbits, without needing to worry about adding to or think
 > >> about tempest.
 > >>
 > >> What I have at that patch works; there are two yaml files, one of
 > >> which goes through the process of confirming the existence of a
 > >> resource provider and inventory, booting a server, seeing a change
 > >> in allocations, resizing the server, seeing a change in allocations.
 > >>
 > >> But this is kludgy in a variety of ways and I'm hoping to get some
 > >> help or pointers to the right way. I'm posting here instead of
 > >> asking in IRC as I assume other people confront these same
 > >> confusions. The issues:
 > >>
 > >> * The associated playbooks are cargo-culted from stuff labelled
 > >>"legacy" that I was able to find in nova's jobs. I get the
 > >>impression that these are more verbose and duplicative than they
 > >>need to be and are not aligned with modern zuul v3 coolness.
 > > 
 > > Yes. Your life will be much better if you do not make more legacy jobs. 
 > > They are brittle and hard to work with.
 > > 
 > > New jobs should either use the devstack base job, the devstack-tempest 
 > > base job or the devstack-tox-functional base job - depending on what 
 > > things are intended.

+1. All the base jobs from Tempest and Devstack (except grenade, which is in 
progress) are available to use as parents for legacy jobs. Using 
devstack-tempest in your patch is the right thing. In addition, you need to 
set tox_envlist to all-plugins to make tempest_test_regex work. I commented on 
the review. 
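
For reference, the minimal shape I have in mind is roughly the following (an 
illustrative sketch; the job name is from your patch, exact vars may differ):

  - job:
      name: placement-tempest-gabbi
      parent: devstack-tempest
      vars:
        tox_envlist: all-plugins
        tempest_test_regex: gabbi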

 > > 
 > > You might want to check out:
 > > 
 > > https://docs.openstack.org/devstack/latest/zuul_ci_jobs_migration.html
 > > 
 > > also, cmurphy has been working on updating some of keystone's legacy 
 > > jobs recently:
 > > 
 > > https://review.openstack.org/602452
 > > 
 > > which might also be a source for copying from.
 > > 
 > >> * It takes an age for the underlying devstack to build, I can
 > >>presumably save some time by installing fewer services, and making
 > >>it obvious how to add more when more are required. What's the
 > >>canonical way to do this? Mess with {enable,disable}_service, cook
 > >>the ENABLED_SERVICES var, do something with required_projects?
 > > 
 > > http://git.openstack.org/cgit/openstack/openstacksdk/tree/.zuul.yaml#n190
 > > 
 > > Has an example of disabling services, of adding a devstack plugin, and 
 > > of adding some lines to localrc.
 > > 
 > > 
 > > http://git.openstack.org/cgit/openstack/openstacksdk/tree/.zuul.yaml#n117
 > > 
 > > Has some more complex config bits in it.
 > > 
 > > In your case, I believe you want to have parent: devstack-tempest 
 > > instead of parent: devstack-tox-functional
 > > 
 > > 
 > >> * This patch, and the one that follows it [1] dynamically install
 > >>stuff from pypi in the post test hooks, simply because that was
 > >>the quick and dirty way to get those libs in the environment.
 > >>What's the clean and proper way? gabbi-tempest itself needs to be
 > >>in the tempest virtualenv.
 > > 
 > > This I don't have an answer for. I'm guessing this is something one 
 > > could do with a tempest plugin?
 > 
 > K. This:
 > 
 > http://git.openstack.org/cgit/openstack/neutron-tempest-plugin/tree/.zuul.yaml#n184

Yeah, you can install it via the TEMPEST_PLUGINS var. All plugins specified in 
the TEMPEST_PLUGINS var will be installed into the Tempest venv [1]. You can 
add gabbi-tempest the same way. 

[1] 
https://github.com/openstack-dev/devstack/blob/6f4b7fc99c4029d25a924bcad968089d89e9d296/lib/tempest#L663
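
For example (illustrative; the value can be a pip-installable name or a local 
path):

  # local.conf, or devstack_localrc in the job definition
  TEMPEST_PLUGINS="gabbi-tempest"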

-gmann

 > 
 > Has an example of a job using a tempest plugin.
 > 
 > >> * The post.yaml playbook which gathers up logs seems like a common
 > >>thing, so I would hope could be DRYed up a bit. What's the best
 > >>way to that?
 > > 
 > > Yup. Legacy devstack-gate based jobs are pretty terrible.
 > > 
 > > You can delete the entire post.yaml if you move to the new devstack base 
 > > job.
 > > 
 > > The base devstack job has a much better mechanism for gathering logs.
 > > 
 > >> Thanks very much for any input.
 > >>
 > >> [1] perf logging of a loaded placement: 
 > >> https://review.openstack.org/#/c/602484/
 > >>
 > >>
 > >>
 > >> __
 > >>  
 > >>
 > >> OpenStack Development Mailing List (not for usage questions)
 > >> Unsubscribe: 
 > >> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > >> 

Re: [openstack-dev] [python3] tempest and grenade conversion to python 3.6

2018-09-18 Thread Ghanshyam Mann



  On Wed, 19 Sep 2018 02:28:29 +0900 Doug Hellmann  
wrote  
 > Excerpts from Clark Boylan's message of 2018-09-18 09:53:45 -0700:
 > > On Tue, Sep 18, 2018, at 9:46 AM, Nate Johnston wrote:
 > > > Hello python 3.6 champions,
 > > > 
 > > > I have looked around a little, and I don't see a method for me to
 > > > specifically select the version of python that the tempest and grenade
 > > > jobs for my project (neutron) are using.  I assume one of four things
 > > > is at play here:
 > > > 
 > > > A. These projects already shifted to python 3 and I don't have to worry
 > > > about it
 > > > 
 > > > B. There is a toggle for the python version I just have not seen yet
 > > > 
 > > > C. These projects are still on python 2 and need help to do a conversion
 > > > to python 3, which would affect all customers
 > > > 
 > > > D. Something else that I have failed to imagine
 > > > 
 > > > Could you elaborate which of these options properly reflects the state
 > > > of affairs?  If the answer is "C" then perhaps we can start a discussion
 > > > on that migration.
 > > 
 > > For our devstack and grenade jobs tempest is installed using tox [0]. And 
 > > since the full testenv in tempest's tox.ini doesn't specify a python 
 > > version [1] I expect that it will attempt a python2 virtualenv on every 
 > > platform (Arch linux may be an exception but we don't test that).
 > > 
 > > I think that means C is the situation here. To change that you can set 
 > > basepython to python3 (see [2] for an example) which will run tempest 
 > > under whichever python3 is present on the system. The one gotcha for this 
 > > is that it will break tempest on centos which does not have python3. Maybe 
 > > the thing to do there is add a full-python2 testenv that centos can run?
 > > 
 > > [0] 
 > > https://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/tempest#n653
 > > [1] https://git.openstack.org/cgit/openstack/tempest/tree/tox.ini#n74
 > > [2] https://git.openstack.org/cgit/openstack-infra/zuul/tree/tox.ini#n7
 > > 
 > > Hope this helps,
 > > Clark
 > > 
 > 
 > While having tempest run under python 3 would be great, I'm not sure
 > that's necessary in order to test a service.
 > 
 > Don't those jobs use devstack to install the system being tested? And
 > devstack uses some environment variables to control the version of
 > python. For example the tempest-full-py3 job [1] defines USE_PYTHON3 as
 > 'true'.
 > 
 > What's probably missing is a version of the grenade job that allows us
 > to control that USE_PYTHON3 variable before and after the upgrade.
 > 
 > I see a few different grenade jobs (neutron-grenade,
 > neutron-grenade-multinode,
 > legacy-grenade-dsvm-neutron-multinode-live-migration, possibly others).
 > Which ones are "current" and would make a good candidate as a base for a
 > new job?

All of these are legacy jobs (only the names changed), so I do not recommend 
using them as a base. Currently those live in the neutron repo instead of 
grenade. We discussed at the PTG finishing the grenade base Zuul v3 job work 
so that other projects can use it as a base. That work is in progress [1] and 
a priority [2] for us to finish as early as possible. 

[1] 
https://review.openstack.org/#/q/topic:grenade_zuulv3+(status:open+OR+status:merged)
[2] https://etherpad.openstack.org/p/qa-stein-priority
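
Once that lands, a Python 3 grenade variant should be a small job on top of 
it; an illustrative sketch only (the job and parent names are assumed, since 
the base job is still in progress):

  - job:
      name: grenade-py3
      parent: grenade
      vars:
        devstack_localrc:
          USE_PYTHON3: true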

-gmann
 > 
 > Doug
 > 
 > [1] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n70
 > 
 > __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] When can/should we change additionalProperties=False in GET /servers(/detail)?

2018-09-17 Thread Ghanshyam Mann
  On Tue, 18 Sep 2018 09:33:30 +0900 Alex Xu  wrote  
 > That only means after 599276 we only have servers API and os-instance-action 
 > API stopped accepting the undefined query parameter.
 > What I'm thinking about is checking all the APIs, add json-query-param 
 > checking with additionalProperties=True if the API don't have yet. And using 
 > another microversion set additionalProperties to False, then the whole Nova 
 > API become consistent.

I too vote for doing it for all the other APIs together. Restricting unknown 
query or request params is very useful for API consistency; see item #1 in 
this etherpad: https://etherpad.openstack.org/p/nova-api-cleanup

If you would like, I can propose a quick spec for that; if the response to 
doing it all together is positive, we skip doing it in 599276, otherwise we do 
it for GET /servers in 599276. 
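
As a sketch of what that means in the JSON-Schema style Nova uses for query 
parameters (the parameter shown is just an example from this thread, not the 
exact patch content):

  query_params_v266 = {
      'type': 'object',
      'properties': {
          'changes-before': {'type': 'string', 'format': 'date-time'},
      },
      # Reject any query parameter that is not explicitly listed above:
      'additionalProperties': False,
  }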

-gmann

 > Jay Pipes  wrote on Tuesday, 18 Sep 2018 at 4:07 AM:
 >  __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > On 09/17/2018 03:28 PM, Matt Riedemann wrote:
 >  > This is a question from a change [1] which adds a new changes-before 
 >  > filter to the servers, os-instance-actions and os-migrations APIs.
 >  > 
 >  > For context, the os-instance-actions API stopped accepting undefined 
 >  > query parameters in 2.58 when we added paging support.
 >  > 
 >  > The os-migrations API stopped allowing undefined query parameters in 
 >  > 2.59 when we added paging support.
 >  > 
 >  > The open question on the review is if we should change GET /servers and 
 >  > GET /servers/detail to stop allowing undefined query parameters starting 
 >  > with microversion 2.66 [2]. Apparently when we added support for 2.5 and 
 >  > 2.26 for listing servers we didn't think about this. It means that a 
 >  > user can specify a query parameter, documented in the API reference, but 
 >  > with an older microversion and it will be silently ignored. That is 
 >  > backward compatible but confusing from an end user perspective since it 
 >  > would appear to them that the filter is not being applied, when it fact 
 >  > it would be if they used the correct microversion.
 >  > 
 >  > So do we want to start enforcing query parameters when listing servers 
 >  > to our defined list with microversion 2.66 or just continue to silently 
 >  > ignore them if used incorrectly?
 >  > 
 >  > Note that starting in Rocky, the Neutron API will start rejecting 
 >  > unknown query parameteres [3] if the filter-validation extension is 
 >  > enabled (since Neutron doesn't use microversions). So there is some 
 >  > precedent in OpenStack for starting to enforce query parameters.
 >  > 
 >  > [1] https://review.openstack.org/#/c/599276/
 >  > [2] 
 >  > 
 > https://review.openstack.org/#/c/599276/23/nova/api/openstack/compute/schemas/servers.py
 >  
 >  > 
 >  > [3] 
 >  > https://docs.openstack.org/releasenotes/neutron/rocky.html#upgrade-notes
 >  
 >  My vote would be just change additionalProperties to False in the 599276 
 >  patch and be done with it.
 >  
 >  Add a release note about the change, of course.
 >  
 >  -jay
 >  
 >  __
 >  OpenStack Development Mailing List (not for usage questions)
 >  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 >  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 >  



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [tc]Global Reachout Proposal

2018-09-17 Thread Ghanshyam Mann
  On Sat, 15 Sep 2018 02:49:40 +0900 Zhipeng Huang  
wrote  
 > Hi all,
 > Follow up the diversity discussion we had in the tc session this morning 
 > [0], I've proposed a resolution on facilitating technical community in large 
 > to engage in global reachout for OpenStack more efficiently. 
 > Your feedbacks are welcomed. Whether this should be a new resolution or not 
 > at the end of the day, this is a conversation worthy to have.
 > [0] https://review.openstack.org/602697

I like that we are discussing the global reachout topic, which I personally 
feel is very important. There are many obstacles to having a standard global 
communication channel. Honestly speaking, there cannot be any single standard 
channel that accommodates every language, culture, and company/government 
restriction, so the best we can do is a good compromise. 

I can understand that IRC cannot be used in China, which is very painful, and 
that WeChat is mostly used there instead. But there are a few key points we 
need to consider for any social app:
- Technical discussions that need many participants and references to links 
etc. cannot be done well on a mobile app. You need a desktop version of that 
app.
- Many social apps have restrictions on participation, invitations, and 
logging. 
- The app must not be blocked in other places.
- It must not split the community members across more than one app or 
existing channel.

With all those points in mind, we need to think about which communication 
channels we really want to promote as a community. 

IMO, we should educate and motivate people to participate over the existing 
channels like IRC and the ML as much as possible. At least the ML does not 
have any usage issues. Ambassadors, local user group people, and local 
developers (I saw Alex volunteer for the Nova discussion in China) can play a 
critical role here: they can ask people to start the communication on the ML 
or, if people cannot, start the thread themselves and act as a proxy. 

I know Slack is being used by the Japanese community, and most of the 
communication there is in Japanese, so I cannot help there even if I join it. 
Talking to Akira (the Japan Ambassador), his view is that most developers do 
communicate on IRC and the ML, but users hesitate to do so because of culture 
and language. 

So if the proposal is for community members (developers, TC, UC, ambassadors, 
user group members, etc.) to participate in the local chat apps and encourage 
people to move to the ML etc., then it is a great idea. But if we want to 
promote all the different chat apps as community practice, that can create 
more problems than it solves; for example, it will divide the technical 
discussion. 

-gmann

 > -- 
 > Zhipeng (Howard) Huang
 > Standard Engineer
 > IT Standard & Patent / IT Product Line
 > Huawei Technologies Co., Ltd
 > Email: huangzhipeng@huawei.com
 > Office: Huawei Industrial Base, Longgang, Shenzhen
 > (Previous)
 > Research Assistant
 > Mobile Ad-Hoc Network Lab, Calit2
 > University of California, Irvine
 > Email: zhipengh@uci.edu
 > Office: Calit2 Building Room 2402
 > OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado 
 > ___
 > OpenStack-operators mailing list
 > openstack-operat...@lists.openstack.org
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-sigs] Open letter/request to TC candidates (and existing elected officials)

2018-09-13 Thread Ghanshyam Mann
  On Thu, 13 Sep 2018 08:05:17 +0900 Lance Bragstad  
wrote  
 > 
 > 
 > On Wed, Sep 12, 2018 at 3:55 PM Jeremy Stanley  wrote:
 > On 2018-09-12 09:47:27 -0600 (-0600), Matt Riedemann wrote:
 >  [...]
 >  > So I encourage all elected TC members to work directly with the
 >  > various SIGs to figure out their top issue and then work on
 >  > managing those deliverables across the community because the TC is
 >  > particularly well suited to do so given the elected position.
 >  [...]
 >  
 >  I almost agree with you. I think the OpenStack TC members should be
 >  actively engaged in recruiting and enabling interested people in the
 >  community to do those things, but I don't think such work should be
 >  solely the domain of the TC and would hate to give the impression
 >  that you must be on the TC to have such an impact.
 > 
 > I agree that relaying that type of impression would be negative, but I'm not 
 > sure this specifically would do that. I think we've been good about letting 
 > people step up to drive initiatives without being in an elected position [0].
 > IMHO, I think the point Matt is making here is more about ensuring sure we 
 > have people to do what we've agreed upon, as a community, as being mission 
 > critical. Enablement is imperative, but no matter how good we are at it, 
 > sometimes we really just needs hands to do the work.
 > [0] Of the six goals agreed upon since we've implemented champions in 
 > Queens, five of them have been championed by non-TC members (Chandan 
 > championed two, in back-to-back releases).  -- 

True, doing such cross-project work does not and should not require being on 
the TC, and I do not think anyone objects to that statement. 

Yes, recruiting people is the key thing here, and the TC can take ownership of 
that. I am sure having more and more people involved in such cross-project 
work will surely help find new leaders. There are a lot of contributors who 
might have the bandwidth but are not coming forward for cross-project help; 
such an initiative from the TC can help them come forward. And any 
cross-project work led by non-TC members will always be a great example the TC 
can use to encourage other contributors toward such activity. 

But the key point here is: if no one has stepped up for priority cross-project 
work (much needed for OpenStack production use cases), then the TC can play a 
role in finding an owner for that work, or owning it themselves. 

-gmann

 >  Jeremy Stanley
 >  __
 >  OpenStack Development Mailing List (not for usage questions)
 >  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 >  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 >   __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Open letter/request to TC candidates (and existing elected officials)

2018-09-13 Thread Ghanshyam Mann
  On Thu, 13 Sep 2018 00:47:27 +0900 Matt Riedemann  
wrote  
 > Rather than take a tangent on Kristi's candidacy thread [1], I'll bring 
 > this up separately.
 > 
 > Kristi said:
 > 
 > "Ultimately, this list isn’t exclusive and I’d love to hear your and 
 > other people's opinions about what you think the I should focus on."
 > 
 > Well since you asked...
 > 
 > Some feedback I gave to the public cloud work group yesterday was to get 
 > their RFE/bug list ranked from the operator community (because some of 
 > the requests are not exclusive to public cloud), and then put pressure 
 > on the TC to help project manage the delivery of the top issue. I would 
 > like all of the SIGs to do this. The upgrades SIG should rank and 
 > socialize their #1 issue that needs attention from the developer 
 > community - maybe that's better upgrade CI testing for deployment 
 > projects, maybe it's getting the pre-upgrade checks goal done for Stein. 
 > The UC should also be doing this; maybe that's the UC saying, "we need 
 > help on closing feature gaps in openstack client and/or the SDK". I 
 > don't want SIGs to bombard the developers with *all* of their 
 > requirements, but I want to get past *talking* about the *same* issues 
 > *every* time we get together. I want each group to say, "this is our top 
 > issue and we want developers to focus on it." For example, the extended 
 > maintenance resolution [2] was purely birthed from frustration about 
 > talking about LTS and stable branch EOL every time we get together. It's 
 > also the responsibility of the operator and user communities to weigh in 
 > on proposed release goals, but the TC should be actively trying to get 
 > feedback from those communities about proposed goals, because I bet 
 > operators and users don't care about mox removal [3].

I agree with this, and I feel this is real value we can add in the current 
situation, where contributors are scarce in almost all of the projects. When we 
set goals for a cycle, we should weigh user/operator/SIG priorities in the 
selection checklist and categorize each goal with a tag, something like 
"user-oriented" or "coding-oriented" (benefiting only developers/code 
maintenance). Then we concentrate more on the first category and leave the 
second one more to the project teams, which can plan those items as per their 
bandwidth and priority. I am not saying code/developer-oriented goals should not 
be initiated by the TC, but they should be lower on the priority list. 

-gmann

 > 
 > I want to see the TC be more of a cross-project project management 
 > group, like a group of Ildikos and what she did between nova and cinder 
 > to get volume multi-attach done, which took persistent supervision to 
 > herd the cats and get it delivered. Lance is already trying to do this 
 > with unified limits. Doug is doing this with the python3 goal. I want my 
 > elected TC members to be pushing tangible technical deliverables forward.
 > 
 > I don't find any value in the TC debating ad nauseam about visions and 
 > constellations and "what is openstack?". Scope will change over time 
 > depending on who is contributing to openstack, we should just accept 
 > this. And we need to realize that if we are failing to deliver value to 
 > operators and users, they aren't going to use openstack and then "what 
 > is openstack?" won't matter because no one will care.
 > 
 > So I encourage all elected TC members to work directly with the various 
 > SIGs to figure out their top issue and then work on managing those 
 > deliverables across the community because the TC is particularly well 
 > suited to do so given the elected position. I realize political and 
 > bureaucratic "how should openstack deal with x?" things will come up, 
 > but those should not be the priority of the TC. So instead of 
 > philosophizing about things like, "should all compute agents be in a 
 > single service with a REST API" for hours and hours, every few months - 
 > immediately ask, "would doing that get us any closer to achieving top 
 > technical priority x?" Because if not, or it's so fuzzy in scope that no 
 > one sees the way forward, document a decision and then drop it.
 > 
 > [1] 
 > http://lists.openstack.org/pipermail/openstack-dev/2018-September/134490.html
 > [2] 
 > https://governance.openstack.org/tc/resolutions/20180301-stable-branch-eol.html
 > [3] https://governance.openstack.org/tc/goals/rocky/mox_removal.html
 > 
 > -- 
 > 
 > Thanks,
 > 
 > Matt
 > 




Re: [openstack-dev] [goals][python3] mixed versions?

2018-09-13 Thread Ghanshyam Mann



  On Thu, 13 Sep 2018 22:10:48 +0900 Doug Hellmann  
wrote  
 > Excerpts from Thomas Goirand's message of 2018-09-13 12:23:32 +0200:
 > > On 09/13/2018 12:52 AM, Chris Friesen wrote:
 > > > On 9/12/2018 12:04 PM, Doug Hellmann wrote:
 > > > 
 > > >>> This came up in a Vancouver summit session (the python3 one I think).
 > > >>> General consensus there seemed to be that we should have grenade jobs
 > > >>> that run python2 on the old side and python3 on the new side and test
 > > >>> the update from one to another through a release that way.
 > > >>> Additionally there was thought that the nova partial job (and similar
 > > >>> grenade jobs) could hold the non upgraded node on python2 and that
 > > >>> would talk to a python3 control plane.
 > > >>>
 > > >>> I haven't seen or heard of anyone working on this yet though.
 > > >>>
 > > >>> Clark
 > > >>>
 > > >>
 > > >> IIRC, we also talked about not supporting multiple versions of
 > > >> python on a given node, so all of the services on a node would need
 > > >> to be upgraded together.
 > > > 
 > > > As I understand it, the various services talk to each other using
 > > > over-the-wire protocols.  Assuming this is correct, why would we need to
 > > > ensure they are using the same python version?
 > > > 
 > > > Chris
 > > 
 > > There are indeed a few cases were things can break, especially with
 > > character encoding. If you want an example of what may go wrong, here's
 > > one with Cinder and Ceph:
 > > 
 > > https://review.openstack.org/568813
 > > 
 > > Without the encodeutils.safe_decode() call, Cinder over Ceph was just
 > > crashing for me in Debian (Debian is full Python 3 now...). In this
 > > example, we're just over the wire, and it was supposed to be the same.
 > > Yet, only an integration test could have detect it (and I discovered it
 > > running puppet-openstack on Debian).

I think that should be detected by the py3 ceph job 
"legacy-tempest-dsvm-py35-full-devstack-plugin-ceph". Was that failing, and did 
anyone check its status during the failure? This job is experimental in the 
cinder gate[1], so I could not get its failure data from the health dashboard.
Maybe we should move it to the check pipeline to cover cinder+ceph for py3?
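
(As an aside, the failure class Thomas describes is the classic py2-vs-py3 
bytes/text mixup. A minimal sketch of it, using the same oslo.utils helper as 
the Cinder fix; the literal value here is illustrative, not taken from the 
Cinder/Ceph code:)

    from oslo_utils import encodeutils

    raw = b'volume-0001'          # e.g. bytes arriving over the wire

    # On python2 this concatenation silently coerces; on python3 it
    # raises TypeError until the bytes are explicitly decoded.
    try:
        name = 'rbd/' + raw
    except TypeError:
        name = 'rbd/' + encodeutils.safe_decode(raw)

    print(name)                   # rbd/volume-0001 on both interpreters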

[1] 
https://github.com/openstack-infra/project-config/blob/4eeec4cc6e18dd8933b16a2ddda75b469b893437/zuul.d/projects.yaml#L3471

-gmann
 > 
 > Was that caused (or found) by first running cinder under python 2
 > and then upgrading to python 3 on the same host? That's the test
 > case Jim originally suggested and I'm trying to understand if we
 > actually need it.
 > 
 > Doug
 > 





Re: [openstack-dev] [QA][PTG] QA Dinner Night

2018-09-11 Thread Ghanshyam Mann
Hi All,

We have finalized the place and time for the QA dinner, which is tomorrow night. 

Here are the details:

Restaurant: Famous Dave's - https://goo.gl/maps/G7gjpsJUEV72 
Wednesday night, 6:30 PM
Meeting time in the lobby: 6:15 PM

-gmann


  On Mon, 10 Sep 2018 20:13:15 +0900 Ghanshyam Mann 
 wrote  
 >  
 >  
 >  
 >   On Mon, 10 Sep 2018 19:35:58 +0900 Andreas Jaeger  
 > wrote   
 >  > On 10/09/2018 12.00, Ghanshyam Mann wrote:  
 >  > > Hi All,  
 >  > >   
 >  > > I'd like to propose a QA Dinner night for the QA team at the DENVER 
 > PTG. I initiated a doodle vote [1] to choose Tuesday or Wednesday night.  
 >  >   
 >  > Dublin or Denver? Hope you're not time traveling or went to wrong   
 >  > location ;)  
 >  >   
 >  
 > heh, thanks for correction. Yes it is Denver :).  
 >  
 >  
 >  > Andreas  
 >  >   
 >  > > NOTE: Anyone engaged in QA activities (not necessary to be QA core)  
 > are welcome to join.  
 >  > >   
 >  > >   
 >  > > [1] https://doodle.com/poll/68fudz937v22ghnv  
 >  > >   
 >  > > -gmann  
 >  > >   
 >  > >   
 >  > >   
 >  > >   
 >  > >   
 >  > > 
 >  > >   
 >  >   
 >  >   
 >  > --   
 >  >   Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi  
 >  >SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany  
 >  > GF: Felix Imendörffer, Jane Smithard, Graham Norton,  
 >  > HRB 21284 (AG Nürnberg)  
 >  >  GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126 
 >  
 >  >   
 >  >  
 >  
 > 





Re: [openstack-dev] [QA] [all] QA Stein PTG Planning

2018-09-10 Thread Ghanshyam Mann
More topics to discuss for QA came in at the last moment [1]. I have added them 
to the schedule, and a few topics have been re-scheduled as a result; please 
check the latest schedule for the QA topics here[2].

I have created a dedicated etherpad for each topic; the links are in the main 
etherpad[1]. I request all topic owners to fill in the details on their 
respective etherpads well before their scheduled slot.  

[1] https://etherpad.openstack.org/p/qa-stein-ptg 
[2] https://ethercalc.openstack.org/Stein-PTG-QA-Schedule 


-gmann

  On Wed, 05 Sep 2018 17:34:27 +0900 Ghanshyam Mann 
 wrote  
 > Hi All,
 > 
 > As we are close to PTG, I have prepared the QA Stein PTG Schedule -
 > https://ethercalc.openstack.org/Stein-PTG-QA-Schedule 
 > 
 > Detail of each sessions can be found in this etherpad -
 > https://etherpad.openstack.org/p/qa-stein-ptg 
 > 
 > This time we will have QA Help Hour for 1 day only which is on Monday and 
 > next 3 days for specific topic discussion and code sprint. 
 > We still have space for more sessions or topic if any of you would like to 
 > add. If so please write those to etherpad with your irc name.
 > Sessions Scheduled is flexible and we can reschedule based on request but do 
 > let me know before 7th Sept.
 > 
 > If anyone cannot travel to PTG and would like to attend remotely, do let me 
 > know i can plan something for remote participation. 
 > 
 > -gmann
 > 
 > 
 > 
 > 
 > 





Re: [openstack-dev] [election][tc] Opinion about 'PTL' tooling

2018-09-10 Thread Ghanshyam Mann
  On Mon, 10 Sep 2018 20:31:11 +0900 Doug Hellmann  
wrote  
 > Excerpts from jean-phili...@evrard.me's message of 2018-09-10 13:15:02 +0200:
 > > Hello everyone,
 > > 
 > > In my candidacy [1], I mentioned that the TC should provide more tools to 
 > > help the PTLs at their duties, for example to track community health.
 > > 
 > > I have questions for the TC candidates:
 > > - What is your opinion about said toolkit? Do you see a purpose for it?
 > > - Do you think said toolkit should fall under the TC umbrella?
 > > 
 > > After my discussion with Rico Lin (PTL of the Heat project, and TC 
 > > candidate) yesterday, I am personally convinced that it would be a good 
 > > idea, and that we should have those tools: As a PTL (but also any person 
 > > interested to see health of projects) I wanted it and I am not alone. PTLs 
 > > are focusing on their duties and, as a day is only composed of so few 
 > > hours, it is possible they won't have the focus to work on said tools to 
 > > track, in the longer term, the community.
 > > 
 > > For me, tracking community health (and therefore a toolkit for the 
 > > PTLs/community) is something TC should cover for good governance, and I am 
 > > not aware of any tooling extracting metrics that can be easily visible and 
 > > used by anyone. If each project started to have their own implementation 
 > > of tools, it would be opposite to one of my other goals, which is the 
 > > simplification of OpenStack.
 > > 
 > > Thanks for reading me, and do not hesitate to ask me questions on the 
 > > mailing lists, or in real life during the PTG!
 > > 
 > > Regards,
 > > Jean-Philippe Evrard (evrardjp)
 > > 
 > > [1]: 
 > > https://git.openstack.org/cgit/openstack/election/plain/candidates/stein/TC/jean-phili...@evrard.me
 > > 
 > 
 > We've had several different sets of scripts at different times to
 > extract review statistics from gerrit. Is that the sort of thing you
 > mean?
 > 
 > What information would you find useful?

Yeah, if we can identify the exact requirements, or the action items PTLs tend 
to miss, then the case for such tooling will be much clearer. Overall I like the 
idea of raising more awareness of PTL work, but that is more a matter of teaching 
and guiding the PTLs. Before we think of a tool to manage PTL responsibilities, 
we need to list the issues it would solve. 

Personally, as PTL, I have gone through the PTL responsibility guide[1] and I 
filter the PTL-tagged email, which I check daily on priority. Further, I follow 
the TODO items a PTL has for releases, the PTG, the Summit etc., which works 
perfectly for me. I see this more as a PTL responsibility than something the TC 
should track on PTLs' behalf. 

That is my point of view as a PTL and as a TC candidate, but I would like to 
hear from other PTLs on whether they need help tracking their responsibilities, 
and why. 
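
For reference, the kind of gerrit review-statistics script Doug mentions can 
be very small. A rough sketch against Gerrit's public REST API (the project 
name is just an example; pagination and error handling are omitted):

    import json

    import requests

    GERRIT = 'https://review.openstack.org'

    def open_changes(project):
        resp = requests.get(GERRIT + '/changes/',
                            params={'q': 'project:%s status:open' % project})
        # Gerrit prefixes JSON bodies with ")]}'" to prevent XSSI.
        return json.loads(resp.text.split('\n', 1)[1])

    print('%d open reviews' % len(open_changes('openstack/tempest')))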


[1] https://docs.openstack.org/project-team-guide/ptl.html

 > 
 > Doug
 > 





[openstack-dev] [QA][PTG] QA Dinner Night

2018-09-10 Thread Ghanshyam Mann
Hi All,

I'd like to propose a QA Dinner night for the QA team at the Dublin PTG. I 
initiated a doodle vote [1] to choose Tuesday or Wednesday night.  

NOTE: Anyone engaged in QA activities (not necessary to be QA core)  are 
welcome to join. 


[1] https://doodle.com/poll/68fudz937v22ghnv

-gmann







Re: [openstack-dev] [os-upstream-institute] Team lunch at the PTG next week - ACTION NEEDED

2018-09-09 Thread Ghanshyam Mann
I am in for the Wednesday lunch meeting. 

-gmann

  On Sat, 08 Sep 2018 07:30:53 +0900 Ildiko Vancsa 
 wrote  
 > Hi Training Team,
 > 
 > As a couple of us will be at the PTG next week it would be great to get 
 > together one of the days maybe for lunch.
 > 
 > Wednesday would work the best for Kendall and me, but we can look into other 
 > days as well if it would not work for the majority of people around.
 > 
 > So my questions would be:
 > 
 > * Are you interested in getting together one of the lunch slots during next 
 > week?
 > 
 > * Would Wednesday work for you or do you have another preference?
 > 
 > Please drop a response to this thread and we will figure it out by Monday or 
 > early next week based on the responses.
 > 
 > Thanks,
 > Ildikó
 > (IRC: ildikov)
 > 
 > 
 > 





Re: [openstack-dev] [tempest][CI][nova compute] Skipping non-compute-driver tests

2018-09-09 Thread Ghanshyam Mann



  On Sat, 08 Sep 2018 08:28:06 +0900 Matt Riedemann  
wrote  
 > On 9/7/2018 10:25 AM, William M Edmonds wrote:
 > > The concern that I have with whitelisting in a given CI is that it has 
 > > to be done independently in every compute driver CI. So while I agree 
 > > that it won't be easy to maintain tagging for compute driver on the 
 > > tempest side, at least that's one place / easier than doing it in every 
 > > driver CI. When anyone figures out that an change is needed, all of the 
 > > CIs would benefit together if there is a shared solution.
 > 
 > How about storing the compute-driver specific whitelist in a common 
 > location? I'm not sure if that would be tempest, nova or somewhere else.

Yeah, Tempest would not be the best location for such tagging or a whitelist. I 
think nova may be the better choice, if nothing else.

 > 
 > -- 
 > 
 > Thanks,
 > 
 > Matt
 > 





Re: [openstack-dev] [tempest][CI][nova compute] Skipping non-compute-driver tests

2018-09-07 Thread Ghanshyam Mann



  On Fri, 07 Sep 2018 04:41:32 +0900 Eric Fried  wrote 
 
 > Jichen-
 > 
 > That patch is not ever intended to merge; hope that was clear from the
 > start :) It was just a proving ground to demonstrate which tests still
 > pass when there's effectively no compute driver in play.
 > 
 > We haven't taken any action on this from our end, though we have done a
 > little brainstorming about how we would tool our CI to skip those tests
 > most (but not all) of the time. Happy to share our experiences with you
 > if/as we move forward with that.
 > 
 > Regarding the tempest-level automation, I certainly had z in mind when
 > I was thinking about it. If you have the time and inclination to help
 > look into it, that would be most welcome.

Sorry for the late response; somehow I missed this thread. As you mentioned and 
noticed in your patch, there are ~700 tests which do not touch the compute 
driver. Most of them are from neutron-tempest-plugin or other services' tests. 
Of the Tempest compute tests, many are negative tests or DB-layer tests [1].

The neutron-tempest-plugin and other services' tests you can always avoid 
running with a regex. And I do not think the compute negative or DB tests take 
much time to run. But if you still want to avoid running them, I think it is 
easy to maintain a whitelist regex file on the CI side which runs only the 
compute driver tests (61 in this case); see the sketch below. 

Tagging tests as compute-driver-specific on the Tempest side is a little hard to 
maintain, I feel, and it can go out of date very easily. If you have any other 
idea on that, we can surely talk about it at the PTG. 
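
For instance, the whitelist could be a plain file of test-name regexes that 
the CI passes to 'tempest run --whitelist-file' (the patterns below are only 
illustrative):

    # compute-driver-whitelist.txt
    tempest.api.compute.servers.test_create_server
    tempest.api.compute.servers.test_server_actions
    tempest.api.compute.volumes.test_attach_volume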

[1] 
http://184.172.12.213/66/599066/5/check/nova-powervm-out-of-tree-pvm/a1b42d5/powervm_os_ci.html.gz

 > 
 > Thanks,
 > efried
 > 
 > On 09/06/2018 12:33 AM, Chen CH Ji wrote:
 > > I see the patch is still ongoing status and do you have a follow up
 > > plan/discussion for that? we are maintaining 2 CIs (z/VM and KVM on z)
 > > so skip non-compute related cases will be a good for 3rd part CI .. thanks
 > > 
 > > Best Regards!
 > > 
 > > Kevin (Chen) Ji 纪 晨
 > > 
 > > Engineer, zVM Development, CSTL
 > > Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com
 > > Phone: +86-10-82451493
 > > Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
 > > District, Beijing 100193, PRC
 > > 
 > > Inactive hide details for Eric Fried ---09/04/2018 09:35:09 PM---Folks-
 > > The other day, I posted an experimental patch [1] withEric Fried
 > > ---09/04/2018 09:35:09 PM---Folks- The other day, I posted an
 > > experimental patch [1] with an effectively
 > > 
 > > From: Eric Fried 
 > > To: "OpenStack Development Mailing List (not for usage questions)"
 > > 
 > > Date: 09/04/2018 09:35 PM
 > > Subject: [openstack-dev] [tempest][CI][nova compute] Skipping
 > > non-compute-driver tests
 > > 
 > > 
 > > 
 > > 
 > > 
 > > Folks-
 > > 
 > > The other day, I posted an experimental patch [1] with an effectively
 > > empty ComputeDriver (just enough to make n-cpu actually start) to see
 > > how much of our CI would pass. The theory being that any tests that
 > > still pass are tests that don't touch our compute driver, and are
 > > therefore not useful to run in our CI environment. Because anything that
 > > doesn't touch our code should already be well covered by generic
 > > dsvm-tempest CIs. The results [2] show that 707 tests still pass.
 > > 
 > > So I'm wondering whether there might be a way to mark tests as being
 > > "compute driver-specific" such that we could switch off all the other
 > > ones [3] via a one-line conf setting. Because surely this has potential
 > > to save a lot of CI resource not just for us but for other driver
 > > vendors, in tree and out.
 > > 
 > > Thanks,
 > > efried
 > > 
 > > [1] https://review.openstack.org/#/c/599066/
 > > [2]
 > > http://184.172.12.213/66/599066/5/check/nova-powervm-out-of-tree-pvm/a1b42d5/powervm_os_ci.html.gz
 > > [3] I get that there's still value in running all those tests. But it
 > > could be done like once every 10 or 50 or 100 runs instead of every time.
 > > 
 > 

[openstack-dev] [qa] Canceling next week QA office hours due to PTG

2018-09-06 Thread Ghanshyam Mann
Hi All,

As many of the QA folks will be at the PTG, I am canceling the QA office hours 
for next week. They will resume after the PTG, on 20th Sept. 

-gmann







[openstack-dev] [openstack-operator] [qa] [forum] [berlin] QA Brainstorming Topic ideas for Berlin 2018

2018-09-06 Thread Ghanshyam Mann
Hi All,

I have created the etherpad below to collect forum ideas related to QA for the 
Berlin Summit.

Please write up your ideas, with your IRC name, on the etherpad.

https://etherpad.openstack.org/p/berlin-stein-forum-qa-brainstorming 

-gmann







[openstack-dev] [election][tc] TC Candidacy

2018-09-06 Thread Ghanshyam Mann
Hi All,

I'd like to announce my candidacy for an OpenStack Technical Committee position. 

I am glad to work in the OpenStack community and would like to thank all the 
contributors/leaders who supported me in exploring new things, which brings out 
my best for the community.

Let me introduce myself briefly. I joined the OpenStack community in 2012 as an 
operator and have been a full-time upstream contributor since 2014, starting in 
the middle of the Icehouse release. Currently, I am PTL for the QA Program 
(since the Rocky cycle) and an active contributor in the QA projects and Nova. I 
have also been contributing to many other projects, especially on Tempest 
plugins, for bug fixes and Tempest compatibility changes. 
Along with that, I am actively involved in programs helping new contributors in 
OpenStack: 1. as a mentor in the Upstream Institute Training since the Barcelona 
Summit (Oct 2016)[1]; 2. in the FirstContact SIG [2], helping new contributors 
onboard in OpenStack. It is always a great experience to introduce the OpenStack 
upstream workflow to new contributors and encourage them to start contributing. 
I feel that is very much needed in OpenStack, given the current turnover of 
experienced contributors. 

The TC's direction has always been valuable and result-oriented, whether on 
technical debt or on efforts towards the diversity of the community. This kind 
of work/position has never been an easy task, especially in a community as big 
as OpenStack. Having observed the TC's work over the past couple of years, I am 
very much motivated to help in this direction, in order to contribute more 
towards cross-project work and collaboration among projects and people.   

Below are the areas I would like to focus on as a TC member:

* Share project teams' work on common goals: Every cycle we have TC goals and 
some future direction in which all the projects need to start working. Projects 
try to do their best, but the big challenge for them is resource bandwidth. In 
the current situation, it is very hard for project teams to accommodate that 
work by themselves. Project teams are shrinking and key members are overloaded. 
My idea is to form a temporary team of contributors under each goal champion and 
finish the common-area work early in the cycle (so that we can make sure the 
work finishes well on time and is tested throughout the cycle). That temporary 
team can be formed with volunteers from any project team, or with new part-time 
contributors, with the help of OUI or the FirstContact SIG etc. 
 
* Cross-project and cross-community testing: I would like to work more on 
collaboration on testing efforts across projects and communities. We have a 
plugin approach for testing in OpenStack, and I agree that it is not perfect at 
this stage. I would like to work on more collaboration and guidelines to improve 
that area. From the QA team's point of view, I would like the QA team to do more 
collaborative work with all the projects on their testing. And further, I would 
like to extend the testing collaboration to adjacent communities. 

* Encourage new leaders: new contributors, and hence new leaders, are much 
needed in the community. Some internal or external leadership program etc. could 
be very helpful.  

Regardless of the result of this election, I will work hard in the directions 
above and help the community as best I can. 

Thank you for reading, and for your consideration.

- Ghanshyam Mann (gmann)

* Review:  
http://stackalytics.com/?release=all=marks_id=ghanshyammann_type=all
 
* Commit:   
http://stackalytics.com/?release=all=commits_id=ghanshyammann_type=all
 
* Foundation Profile: https://www.openstack.org/community/members/profile/6461 
* Website: https://ghanshyammann.com 
* IRC (Freenode): gmann

[1] https://wiki.openstack.org/wiki/OpenStack_Upstream_Institute_Occasions 
  https://wiki.openstack.org/wiki/OpenStack_Upstream_Institute 
[2] https://wiki.openstack.org/wiki/First_Contact_SIG 








[openstack-dev] [QA] [all] QA Stein PTG Planning

2018-09-05 Thread Ghanshyam Mann
Hi All,

As we are close to PTG, I have prepared the QA Stein PTG Schedule -
https://ethercalc.openstack.org/Stein-PTG-QA-Schedule 

Detail of each sessions can be found in this etherpad -
https://etherpad.openstack.org/p/qa-stein-ptg 

This time we will have QA Help Hour for one day only, which is Monday, and the 
next 3 days are for specific topic discussions and a code sprint. 
We still have space for more sessions or topics if any of you would like to add 
one. If so, please add them to the etherpad with your IRC name.
The session schedule is flexible and we can reschedule on request, but do let me 
know before 7th Sept.

If anyone cannot travel to the PTG and would like to attend remotely, do let me 
know; I can plan something for remote participation. 

-gmann







[openstack-dev] [QA] Rocky Retrospective Etherpad

2018-09-05 Thread Ghanshyam Mann
Hi All,

I have started an etherpad for a Rocky cycle retrospective for QA -
https://etherpad.openstack.org/p/qa-rocky-retrospective

This will be discussed at the PTG on Tuesday, 9:30-10:00 AM, so please add your
feedback/comments before then.

Everyone is welcome to add feedback, which will help us improve the
necessary things in the next cycle.

-gmann









Re: [openstack-dev] [grenade][osc][rocky]openstack client Rocky does not work with python-cinderclient Rocky

2018-09-03 Thread Ghanshyam Mann



  On Mon, 03 Sep 2018 15:27:10 +0900 Ghanshyam Mann 
 wrote  
 > Hi All,
 > 
 > While doing the grenade setting to test the Rocky upgrade testing [1], i 
 > found osc Rocky version (3.15 or 3.16) does not work with 
 > python-cinderclient Rocky version (>=4.0.0) [2].
 > 
 > Failure are due to source_replica arg has been removed from 
 > python-cinderclient which went in Rocky release and osc fix of that went in 
 > after Rocky. 
 > 
 > Openstackclient Rocky version - 3.16.0
 > cinderclient Rocky version - 4.0.1
 > 
 > These 2 version does not work because cinderclient >=4.0.0 has removed the 
 > source_replica arg which is being taken care in openstackclient > 3.16 [2] 
 > so openastackclient rocky version (3.15 or 3.16 does not work with 
 > cinderclient rocky version)
 > 
 > We should backport the openstackclient fix [3] to Rocky and then release the 
 > osc version for Rocky. I have proposed the backport [4]. 
 > 
 > [1] https://review.openstack.org/#/c/591594
 > [2] 
 > http://logs.openstack.org/94/591594/2/check/neutron-grenade/b281347/logs/grenade.sh.txt.gz#_2018-09-03_01_29_36_289
 >  
 > [3] https://review.openstack.org/#/c/587005/
 > [4] https://review.openstack.org/#/c/599291/
 > 

This should have been detected in the osc Rocky patches, but it seems the 
osc-functional-devstack job does not run on stable/rocky for zuul.yaml- or 
tox.ini-only changes [1]. I am not sure why the osc-functional-devstack job did 
not run for the patches below; I did not find an irrelevant-files regex which 
would exclude those files. We can see stable/queens runs the functional job for 
similar changes [2]. Is something wrong in the job selection on the zuul side?

[1]
https://review.openstack.org/#/c/594306/
https://review.openstack.org/#/c/586005/

[2] https://review.openstack.org/#/c/594302/ 

 > -gmann
 > 
 > 
 > 
 > 
 > 





[openstack-dev] [grenade][osc][rocky]openstack client Rocky does not work with python-cinderclient Rocky

2018-09-03 Thread Ghanshyam Mann
Hi All,

While doing the grenade setup to test the Rocky upgrade [1], I found that the 
osc Rocky version (3.15 or 3.16) does not work with the python-cinderclient 
Rocky version (>=4.0.0) [2].

The failures are due to the source_replica arg having been removed from 
python-cinderclient, which went into the Rocky release, while the osc fix for 
that went in after Rocky. 

Openstackclient Rocky version - 3.16.0
cinderclient Rocky version - 4.0.1

These 2 versions do not work together because cinderclient >=4.0.0 has removed 
the source_replica arg, which is only taken care of in openstackclient > 3.16 
[3]; so the openstackclient rocky version (3.15 or 3.16) does not work with the 
cinderclient rocky version.

We should backport the openstackclient fix [3] to Rocky and then release the 
osc version for Rocky. I have proposed the backport [4]. 
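
To illustrate the incompatibility, here is a quick local check (assuming 
python-cinderclient is importable): on 3.x it prints True, on >=4.0.0 False, 
which is why the Rocky osc code path that still passes source_replica= fails 
with a TypeError.

    import inspect

    from cinderclient.v3 import volumes

    params = inspect.signature(volumes.VolumeManager.create).parameters
    print('source_replica' in params)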

[1] https://review.openstack.org/#/c/591594
[2] 
http://logs.openstack.org/94/591594/2/check/neutron-grenade/b281347/logs/grenade.sh.txt.gz#_2018-09-03_01_29_36_289
 
[3] https://review.openstack.org/#/c/587005/
[4] https://review.openstack.org/#/c/599291/

-gmann







Re: [openstack-dev] [tempest][qa][congress] trouble setting tempest feature flag

2018-08-31 Thread Ghanshyam Mann
  On Wed, 29 Aug 2018 08:20:37 +0900 Eric K  
wrote  
 > Ha. Turned out to be a simple mistake in hyphens vs underscores.

Thanks for the update; good to know it is resolved now. Sorry I could not check 
this further, due to PTO.

-gmann

 > On Tue, Aug 28, 2018 at 3:06 PM Eric K  wrote:
 > >
 > > Any thoughts on what could be going wrong that the tempest tests still
 > > see the default conf values rather than those set here? Thanks lots!
 > >
 > > Here is the devstack log line showing the flags being set:
 > > http://logs.openstack.org/64/594564/4/check/congress-devstack-api-mysql/ce34264/logs/devstacklog.txt.gz#_2018-08-28_21_23_15_934
 > >
 > > On Wed, Aug 22, 2018 at 9:12 AM Eric K  wrote:
 > > >
 > > > Hi all,
 > > >
 > > > I have added feature flags for the congress tempest plugin [1] and set
 > > > them in the devstack plugin [2], but the flags seem to be ignored. The
 > > > tests are skipped [3] according to the default False flag rather than
 > > > run according to the True flag set in devstack plugin. Any hints on
 > > > what may be wrong? Thanks so much!
 > > >
 > > > [1] https://review.openstack.org/#/c/594747/3
 > > > [2] https://review.openstack.org/#/c/594793/1/devstack/plugin.sh
 > > > [3] 
 > > > http://logs.openstack.org/64/594564/3/check/congress-devstack-api-mysql/b2cd46f/logs/testr_results.html.gz
 > > > (the bottom two skipped tests were expected to run)
 > 





[openstack-dev] [qa][patrole][neutron][policy] Neutron Policy Testing in OpenStack Patrole project

2018-08-22 Thread Ghanshyam Mann
Hi All,

This thread is to request review help from the neutron team for the neutron 
policy testing in the Patrole project.

For folks who are not familiar with Patrole, below is a brief background & 
description of Patrole:
-
OpenStack Patrole is an official project under the QA umbrella which performs 
RBAC testing. It has been in development since Ocata and has just released its 
0.4.0 version for Rocky[1]. Complete documentation can be found here[2]. 
#openstack-qa is the IRC channel for Patrole. 

The main goal of this project is to perform RBAC testing for OpenStack, where we 
will first focus on Nova, Cinder, Keystone, Glance and Neutron in the Patrole 
repo, and then provide the framework/mechanism to extend the testing to other 
projects via a plugin or some other way (yet to be finalized). 

Current state:
- Good coverage for Nova, Keystone, Cinder, Glance.
- Ongoing: 1. neutron coverage, 2. framework stability.
- Next: 1. stable release of Patrole, 2. start gating the Patrole testing on 
the project side.
--

The Patrole team is working on neutron policy testing. As you know, neutron 
policy is not as simple as in other projects, and there is also no user-facing 
documentation for policy. I was discussing this with amotoki and got to know 
that he is working on policy documentation, or something along those lines, 
which can be useful for users; Patrole can then consume that when writing the 
test cases.

Another request QA has for the neutron team is to review the neutron policy 
test cases. Here is the complete review list[3] (we cannot get a single gerrit 
topic linked with the story#), and it will be great if the neutron team can keep 
an eye on those and provide early feedback on new test cases (their policy 
names, return codes, coverage etc.). 

One example where we need feedback is - 
https://review.openstack.org/#/c/586739/ 

Q: What is the return code for a GET API if policy authorization fails?

From the neutron doc [4] (it is an internal doc, but it explains the neutron 
policy internals), it seems that for GET, PUT and DELETE, resource existence is 
checked first. If the resource does not exist (or is not visible), 404 is 
returned for security reasons, as a 403 would tell an unauthorized user that the 
resource exists. 

But for PUT and DELETE, it can be 403 when the resource exists but the user does 
not have access to the PUT/DELETE operation. 

I was discussing this with amotoki as well, and we thought of:
 - Checking 404 for GET. 
 - Checking [403, 404] for PUT and DELETE.
 - Later, we will tighten the checks to 404 and 403 separately for PUT and 
DELETE.

Let us know if that is the right way to proceed. 
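
To make that concrete, below is roughly what such a test looks like. This is 
a sketch only: the decorator arguments follow Rocky-era Patrole, the class and 
test names are made up, and resource setup is omitted.

    from patrole_tempest_plugin import rbac_rule_validation
    from patrole_tempest_plugin.tests.api.network import rbac_base

    class ShowNetworkRbacTest(rbac_base.BaseNetworkRbacTest):

        @rbac_rule_validation.action(service="neutron",
                                     rule="get_network",
                                     expected_error_code=404)
        def test_show_network(self):
            # GET: an authorization failure must surface as NotFound (404)
            # rather than Forbidden (403), so resource existence is not
            # leaked to unauthorized users.
            with self.rbac_utils.override_role(self):
                self.networks_client.show_network(self.network['id'])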

[1] https://docs.openstack.org/releasenotes/patrole/v0.4.0.html  
[2] https://docs.openstack.org/patrole/latest/ 
[3] https://storyboard.openstack.org/#!/story/2002641
[4] 
https://docs.openstack.org/neutron/pike/contributor/internals/policy.html#request-authorization

-gmann





Re: [openstack-dev] [release][qa] QA Rocky release status

2018-08-21 Thread Ghanshyam Mann
Hi All,

Here is the updated status of the QA project releases. Only Devstack and Grenade 
are left, and they are waiting for the swift release - 
https://review.openstack.org/#/c/594537/ 

IN-PROGRESS: 

1. devstack: Branch. Patch is pushed to branch for Rocky which is in hold state 
- IN-PROGRESS [1]

2. grenade: Branch. Patch is pushed to branch for Rocky which is in hold state 
- IN-PROGRESS [1]


COMPLETED (Done or no release required): 

3. patrole: Release done, patch is under review[2] - COMPLETED

4. tempest: Release done, patch is under review[3] - COMPLETED

5. bashate: independent release | Branch-less. version 0.6.0 is released last 
month and no further release required in Rocky cycle. - COMPLETED

6. coverage2sql: Branch-less. Not any release yet and no specific release 
required for Rocky too. - COMPLETED 

7. devstack-plugin-ceph: Branch-less. Not any release yet and no specific 
release required for Rocky too. - COMPLETED 

8. devstack-plugin-cookiecutter: Branch-less. Not any release yet and no 
specific release required for Rocky. - COMPLETED 

9. devstack-tools: Branch-less. version 0.4.0 is the latest version released 
and no further release required in Rocky cycle. - COMPLETED

10. devstack-vagrant: Branch-less. Not any release yet and no specific release 
required for Rocky too. - COMPLETED 

11. eslint-config-openstack: Branch-less. version 4.0.1 is the latest version 
released. no further release required in Rocky cycle. - COMPLETED

12. hacking: Branch-less. version 11.1.0 is the latest version released. no 
further release required in Rocky cycle. - COMPLETED

13. karma-subunit-reporter: Branch-less. version v0.0.4 is the latest version 
released. no further release required in Rocky cycle. - COMPLETED

14. openstack-health: Branch-less. Not any release yet and no specific release 
required for Rocky too. - COMPLETED 

15. os-performance-tools: Branch-less. Not any release yet and no specific 
release required for Rocky too. - COMPLETED 

16. os-testr: Branch-less. version 1.0.0 is the latest version released. no 
further release required in Rocky cycle. - COMPLETED

17. qa-specs: Spec repo, no release needed. - COMPLETED

18. stackviz: Branch-less. Not any release yet and no specific release required 
for Rocky too. - COMPLETED 

19. tempest-plugin-cookiecutter: Branch-less. Not any release yet and no 
specific release required for Rocky too. - COMPLETED

20. tempest-lib: Deprecated repo, No released needed for Rocky - COMPLETED

21. tempest-stress: Branch-less. Not any release yet and no specific release 
required for Rocky too. - COMPLETED

22. devstack-plugin-container: Branch. Release and Branched done[4] - COMPLETED


[1] 
https://review.openstack.org/#/q/topic:rocky-branch-devstack-grenade+(status:open+OR+status:merged)
 
[2] https://review.openstack.org/#/c/592277/
[3] https://review.openstack.org/#/c/592276/
[4] https://review.openstack.org/#/c/591804/ 

-gmann 


  On Thu, 16 Aug 2018 17:55:12 +0900 Ghanshyam Mann 
 wrote  
 > Hi All,
 > 
 > QA has lot of sub-projects and this mail is to track their release status 
 > for Rocky cycle. I will be on vacation from coming  Monday for next 2 weeks 
 > (visiting India) but will be online to complete the below IN-PROGRESS items 
 > and update the status here.  
 > 
 > IN-PROGRESS: 
 > 
 > 1. devstack: Branch. Patch is pushed to branch for Rocky which is in 
 > hold state - IN-PROGRESS [1]
 > 
 > 2. grenade: Branch. Patch is pushed to branch for Rocky which is in hold 
 > state - IN-PROGRESS [1]
 > 
 > 3. patrole: Release done, patch is under review[2] - COMPLETED 
 > 
 > 4. tempest: Release done, patch is under review[3] - COMPLETED
 > 
 > COMPLETED (Done or no release required): 
 > 
 > 5. bashate: independent release | Branch-less.  version 0.6.0 is 
 > released last month and no further release required in Rocky cycle.  - 
 > COMPLETED
 > 
 > 6. coverage2sql: Branch-less.  Not any release yet and no specific 
 > release required for Rocky too. - COMPLETED 
 >   
 > 7. devstack-plugin-ceph: Branch-less. Not any release yet and no 
 > specific release required for Rocky too. - COMPLETED 
 > 
 > 8. devstack-plugin-cookiecutter: Branch-less. Not any release yet and no 
 > specific release required for Rocky. - COMPLETED 
 > 
 > 9. devstack-tools: Branch-less. version 0.4.0 is the latest version 
 > released and no further release required in Rocky cycle.  - COMPLETED
 > 
 > 10. devstack-vagrant: Branch-less.  Not any release yet and no specific 
 > release required for Rocky too. - COMPLETED 
 > 
 > 11. eslint-config-openstack: Branch-less. version 4.0.1 is the latest 
 > version released. no further release required in Rocky cycle.  - COMPLETED
 > 
 > 12. hacking: Branch-less. version 11.1.0 is the latest version released. 
 > no further rel

[openstack-dev] [release][qa] QA Rocky release status

2018-08-16 Thread Ghanshyam Mann
Hi All,

QA has a lot of sub-projects and this mail is to track their release status for 
the Rocky cycle. I will be on vacation from the coming Monday for the next 2 
weeks (visiting India), but will be online to complete the IN-PROGRESS items 
below and update the status here.  

IN-PROGRESS: 

1. devstack: Branch. Patch is pushed to branch for Rocky which is in hold 
state - IN-PROGRESS [1]

2. grenade: Branch. Patch is pushed to branch for Rocky which is in hold 
state - IN-PROGRESS [1]

3. patrole: Release done, patch is under review[2] - IN-PROGRESS

4. tempest: Release done, patch is under review[3] - IN-PROGRESS

COMPLETED (Done or no release required): 

5. bashate: independent release | Branch-less.  version 0.6.0 is released 
last month and no further release required in Rocky cycle.  - COMPLETED

6. coverage2sql: Branch-less.  Not any release yet and no specific release 
required for Rocky too. - COMPLETED 
  
7. devstack-plugin-ceph: Branch-less. Not any release yet and no specific 
release required for Rocky too. - COMPLETED 

8. devstack-plugin-cookiecutter: Branch-less. Not any release yet and no 
specific release required for Rocky. - COMPLETED 

9. devstack-tools: Branch-less. version 0.4.0 is the latest version 
released and no further release required in Rocky cycle.  - COMPLETED

10. devstack-vagrant: Branch-less.  Not any release yet and no specific 
release required for Rocky too. - COMPLETED 

11. eslint-config-openstack: Branch-less. version 4.0.1 is the latest 
version released. no further release required in Rocky cycle.  - COMPLETED

12. hacking: Branch-less. version 11.1.0 is the latest version released. no 
further release required in Rocky cycle.  - COMPLETED

13. karma-subunit-reporter: Branch-less. version v0.0.4 is the latest 
version released. no further release required in Rocky cycle.  - COMPLETED

14. openstack-health: Branch-less.  Not any release yet and no specific 
release required for Rocky too. - COMPLETED 

15. os-performance-tools: Branch-less.  Not any release yet and no specific 
release required for Rocky too. - COMPLETED 

16. os-testr: Branch-less. version 1.0.0 is the latest version released. no 
further release required in Rocky cycle.  - COMPLETED

17. qa-specs: Spec repo, no release needed. - COMPLETED

18. stackviz: Branch-less.  Not any release yet and no specific release 
required for Rocky too. - COMPLETED 

19. tempest-plugin-cookiecutter: Branch-less.  Not any release yet and no 
specific release required for Rocky too. - COMPLETED

20. tempest-lib: Deprecated repo, No released needed for Rocky - COMPLETED

21. tempest-stress: Branch-less.  Not any release yet and no specific 
release required for Rocky too. - COMPLETED

22. devstack-plugin-container: Branch. Release and Branched done[4] - 
COMPLETED


[1] 
https://review.openstack.org/#/q/topic:rocky-branch-devstack-grenade+(status:open+OR+status:merged)
 
[2] https://review.openstack.org/#/c/592277/
[3] https://review.openstack.org/#/c/592276/
[4] https://review.openstack.org/#/c/591804/ 

-gmann





[openstack-dev] [release][qa][devstack][all] Pre-Notifiaction for DevStack branch cut for Rocky

2018-08-15 Thread Ghanshyam Mann
Hi All,

We are in the process of cutting the Rocky branch for Devstack[1]. As per the 
process[2], we need to wait for the minimum set of projects (which need 
branches) used by Devstack to be branched first. As dhellmann mentioned on the 
patch, all of the cycle-with-milestone projects are branched and we are waiting 
to hear a go-ahead from the swift team. 

Other than Swift, if any other project needs more work or needs to be branched 
before we branch devstack, feel free to reply here or on the gerrit patch. 

[1] https://review.openstack.org/#/c/591563/ 
[2] https://releases.openstack.org/reference/process.html#rc1

-gmann







Re: [openstack-dev] [tempest][qa][congress] help with tempest plugin jobs against stable/queens

2018-08-14 Thread Ghanshyam Mann
  On Wed, 15 Aug 2018 09:37:18 +0900 Eric K  
wrote  
 > I'm adding jobs [1] to the tempest plugin to run tests against
 > congress stable/queens. The job output seems to show stable/queens
 > getting checked out [2], but I know the test is *not* run against
 > queens because it's using features not available in queens. The
 > expected result is for several tests to fail as seen here [3]. All
 > hints and tips much appreciated!

You are doing it the right way with 'override-checkout: stable/queens'. And as 
the log also shows, congress is checked out from stable/queens. I tried to check 
the results but could not work out which tests should fail and why. 

If you can give me more detail, I can debug it. 

-gmann

 > 
 > [1] https://review.openstack.org/#/c/591861/1
 > [2] 
 > http://logs.openstack.org/61/591861/1/check/congress-devstack-api-mysql-queens/f7b5752/job-output.txt.gz#_2018-08-14_22_30_36_899501
 > [3] https://review.openstack.org/#/c/591805/ (the depends-on is
 > irrelevant because that patch has been merged)
 > 





Re: [openstack-dev] [tempest][qa][congress] tempest test conditioning on release version

2018-08-14 Thread Ghanshyam Mann
  On Wed, 15 Aug 2018 06:40:57 +0900 Eric K  
wrote  
 > Anyone have an example handy of a tempest test conditioning on service
 > release version (because new features not available in past versions)?
 > Seems like it could get pretty messy and haphazard, so I'm curious to
 > see best practices. Thanks lots!

Thanks Eric for the query. We do this many times in Tempest, and a similar 
approach can be adopted by tempest plugins. There are 2 ways we can handle this -

1. Using feature flag. Tempest documentation is here [1].
 Step 1 - Add a config option (feature flag) for the new/old feature. 
 Example- https://review.openstack.org/#/c/545627/   
https://github.com/openstack/tempest/blob/6a8d495192632fd18dce4baf1a4b213f401a0167/tempest/config.py#L242
 Step 2 - Based on that flag, skip the tests where the feature is not 
available. 
 Example-  
https://github.com/openstack/tempest/blob/d5058a8a9c8c1c5383699d04296087b6d5a24efd/tempest/api/identity/base.py#L315
 Step 3 - For the gate, the devstack plugin on the project side (congress in 
your case [2]), which is branch-aware, can set that flag to true or false based 
on which branch the test is running against. For tempest we do the same from 
devstack/lib/tempest.
 Example - https://review.openstack.org/#/c/545680/
https://github.com/openstack-dev/devstack/blob/8c1052001629d62f001d04c182500fa293858f47/lib/tempest#L308
 Step 4 - For cloud testing (non-gate), the tester can manually configure those 
flags based on which service version they are testing. 

2. Detecting the service version via the version API
- If you can get the service version info from the API, then you can use it 
when skipping tests.
- One example is compute, where the microversion can be used to detect which 
release the test is running against. 
- Example- 
https://github.com/openstack/tempest/blob/d5058a8a9c8c1c5383699d04296087b6d5a24efd/tempest/api/compute/base.py#L114
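
Putting steps 1 and 2 of the feature-flag approach together, the plugin-side 
code is quite small. A sketch with made-up congress option names (note the 
group name has to match exactly, hyphens vs underscores included, when it is 
read back):

    # Step 1: register the flag in the plugin's config.py.
    from oslo_config import cfg

    congress_feature_group = cfg.OptGroup(
        name='congress_feature_enabled',
        title='Enabled congress features')

    CongressFeatureGroup = [
        cfg.BoolOpt('policy_library',
                    default=False,
                    help='Does the cloud expose the policy library API?'),
    ]

    # Step 2: skip the test when the deployed release lacks the feature.
    import testtools

    from tempest import config
    from tempest import test

    CONF = config.CONF

    class PolicyLibraryTest(test.BaseTestCase):

        @testtools.skipUnless(
            CONF.congress_feature_enabled.policy_library,
            'policy library API is not available in this release')
        def test_list_library_policies(self):
            pass  # real assertions against the new API would go here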


[1] 
https://docs.openstack.org/tempest/latest/HACKING.html#branchless-tempest-considerations
[2] 
https://github.com/openstack/congress/blob/014361c809517661264d0364eaf1e261e449ea80/devstack/plugin.sh#L88

 > 
 > Eric Kao
 > 





Re: [openstack-dev] [releases][rocky][tempest-plugins][ptl] Reminder to tag the Tempest plugins for Rocky release

2018-08-13 Thread Ghanshyam Mann



  On Mon, 13 Aug 2018 23:01:33 +0900 Doug Hellmann  
wrote  
 > Excerpts from Dmitry Tantsur's message of 2018-08-13 15:51:56 +0200:
 > > On 08/13/2018 03:46 PM, Doug Hellmann wrote:
 > > > Excerpts from Dmitry Tantsur's message of 2018-08-13 15:35:23 +0200:
 > > >> Hi,
 > > >>
 > > >> The plugins are branchless and should stay so. Let us not dive into 
 > > >> this madness
 > > >> again please.
 > > > 
 > > > You are correct that we do not want to branch, because we want the
 > > > same tests running against all branches of services in our CI system
 > > > to help us avoid (or at least recognize) API-breaking changes across
 > > > release boundaries.
 > > 
 > > Okay, thank you for clarification. I stand corrected and apologize if my 
 > > frustration was expressed too loudly or harshly :)
 > 
 > Not at all. This is new territory, and we made a decision somewhat
 > quickly, so I am not surprised that we need to do a little more work to
 > communicate the results.
 > 
 > > 
 > > > 
 > > > We *do* need to tag so that people consuming the plugins to certify
 > > > their clouds know which version of the plugin works with the version
 > > > of the software they are installing. Newer versions of plugins may
 > > > rely on features or changes in newer versions of tempest, or other
 > > > dependencies, that are not available in an environment that is
 > > > running an older cloud.
 > > 
 > > ++
 > > 
 > > > 
 > > > We will apply those tags in the series-specific deliverable files in
 > > > openstack/releases so that the version numbers appear together on
 > > > releases.openstack.org on the relevant release page so that users
 > > > looking for the "rocky" version of a plugin can find it easily.
 > > 
 > > Okay, this makes sense now.
 > 
 > Good.
 > 
 > Now, we just need someone to figure out where to write all of that down
 > so we don't have to have the same conversation next cycle. :-)

+1, this is very important. I was discussing the same with amotoki today on the 
QA channel. I had already added a TODO for myself to write up 1. "How plugins 
should cover stable branch testing with a branchless repo"; now I can add a 2nd 
TODO as well: 2. "Release model & tagging clarification for Tempest plugins". I 
do not know the best common place for those docs, but as a start I can write 
them in the Tempest doc and later we can refer to/move the same on the plugins 
side. 

I have added this TODO on the QA Stein PTG etherpad as well, for 
reminder/feedback - 
https://etherpad.openstack.org/p/qa-stein-ptg

-gmann

 > 
 > Doug
 > 
 > > 
 > > > 
 > > > Doug
 > > > 
 > > >>
 > > >> Dmitry
 > > >>
 > > >> On 08/12/2018 10:41 AM, Ghanshyam Mann wrote:
 > > >>> Hi All,
 > > >>>
 > > >>> Rocky release is few weeks away and we all agreed to release Tempest 
 > > >>> plugin with cycle-with-intermediary. Detail discussion are in ML [1] 
 > > >>> in case you missed.
 > > >>>
 > > >>> This is reminder to tag your project tempest plugins for Rocky 
 > > >>> release. You should be able to find your plugins deliverable file 
 > > >>> under rocky folder in releases repo[3].  You can refer 
 > > >>> cinder-tempest-plugin release as example.
 > > >>>
 > > >>> Feel free to reach to release/QA team for any help/query.
 > > >>
 > > >> Please make up your mind. Please. Please. Please.
 > > >>
 > > >>>
 > > >>> [1] 
 > > >>> http://lists.openstack.org/pipermail/openstack-dev/2018-June/131810.html
 > > >>> [2] https://review.openstack.org/#/c/590025/
 > > >>> [3] 
 > > >>> https://github.com/openstack/releases/tree/master/deliverables/rocky
 > > >>>
 > > >>> -gmann
 > > >>>
 > > >>>
 > > >>>
 > > >>
 > > > 
 > > 
 > 





Re: [openstack-dev] [releases][rocky][tempest-plugins][ptl] Reminder to tag the Tempest plugins for Rocky release

2018-08-13 Thread Ghanshyam Mann



  On Mon, 13 Aug 2018 22:46:42 +0900 Doug Hellmann  
wrote  
 > Excerpts from Dmitry Tantsur's message of 2018-08-13 15:35:23 +0200:
 > > Hi,
 > > 
 > > The plugins are branchless and should stay so. Let us not dive into this 
 > > madness 
 > > again please.
 > 
 > You are correct that we do not want to branch, because we want the
 > same tests running against all branches of services in our CI system
 > to help us avoid (or at least recognize) API-breaking changes across
 > release boundaries.
 > 
 > We *do* need to tag so that people consuming the plugins to certify
 > their clouds know which version of the plugin works with the version
 > of the software they are installing. Newer versions of plugins may
 > rely on features or changes in newer versions of tempest, or other
 > dependencies, that are not available in an environment that is
 > running an older cloud.
 > 
 > We will apply those tags in the series-specific deliverable files in
 > openstack/releases so that the version numbers appear together on
 > releases.openstack.org on the relevant release page so that users
 > looking for the "rocky" version of a plugin can find it easily.

Thanks Doug for clarifying it again :). Details about the goal behind tagging 
the plugins can also be found in the original ML thread [1]. The next pending 
item on branchless testing is to set up the plugin CI jobs for stable branches 
as well, like Tempest does; that is one item for the QA team to help plugins 
with in Stein (see the sketch after the reference below). 

[1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131810.html
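
As a rough illustration of that pending item, a plugin can add a stable-branch 
variant of its existing job in its .zuul.yaml. This is only a hypothetical 
sketch (the job and plugin names are made up); 'override-checkout' is the 
Zuul v3 option that pins which branch gets checked out while the plugin code 
itself stays branchless:

    # sketch: exercise the branchless plugin against stable/rocky
    - job:
        name: my-tempest-plugin-api-rocky
        parent: my-tempest-plugin-api
        override-checkout: stable/rocky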

-gmann
> 
 > Doug
 > 
 > > 
 > > Dmitry
 > > 
 > > On 08/12/2018 10:41 AM, Ghanshyam Mann wrote:
 > > > Hi All,
 > > > 
 > > > Rocky release is few weeks away and we all agreed to release Tempest 
 > > > plugin with cycle-with-intermediary. Detail discussion are in ML [1] in 
 > > > case you missed.
 > > > 
 > > > This is reminder to tag your project tempest plugins for Rocky release. 
 > > > You should be able to find your plugins deliverable file under rocky 
 > > > folder in releases repo[3].  You can refer cinder-tempest-plugin release 
 > > > as example.
 > > > 
 > > > Feel free to reach to release/QA team for any help/query.
 > > 
 > > Please make up your mind. Please. Please. Please.
 > > 
 > > > 
 > > > [1] 
 > > > http://lists.openstack.org/pipermail/openstack-dev/2018-June/131810.html
 > > > [2] https://review.openstack.org/#/c/590025/
 > > > [3] https://github.com/openstack/releases/tree/master/deliverables/rocky
 > > > 
 > > > -gmann
 > > > 
 > > > 
 > > > 
 > > > __
 > > > OpenStack Development Mailing List (not for usage questions)
 > > > Unsubscribe: 
 > > > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > > > 
 > > 
 > 
 > __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [releases][rocky][tempest-plugins][ptl] Reminder to tag the Tempest plugins for Rocky release

2018-08-13 Thread Ghanshyam Mann
  On Mon, 13 Aug 2018 22:35:23 +0900 Dmitry Tantsur  
wrote  
 > Hi,
 > 
 > The plugins are branchless and should stay so. Let us not dive into this 
 > madness 
 > again please.
 > 
 > Dmitry
 > 
 > On 08/12/2018 10:41 AM, Ghanshyam Mann wrote:
 > > Hi All,
 > > 
 > > Rocky release is few weeks away and we all agreed to release Tempest 
 > > plugin with cycle-with-intermediary. Detail discussion are in ML [1] in 
 > > case you missed.
 > > 
 > > This is reminder to tag your project tempest plugins for Rocky release. 
 > > You should be able to find your plugins deliverable file under rocky 
 > > folder in releases repo[3].  You can refer cinder-tempest-plugin release 
 > > as example.
 > > 
 > > Feel free to reach to release/QA team for any help/query.
 > 
 > Please make up your mind. Please. Please. Please.

Not sure why this is being understood as cutting a branch for plugins. This 
thread is just to remind plugin owners to tag their plugins for the Rocky 
release. 'cycle-with-intermediary' does not always mean cutting a branch; for 
plugins and Tempest it just means releasing a tag for the current OpenStack 
release (a rough sketch of such a deliverable file is below). 
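
For reference, tagging just means proposing a deliverable file under the rocky 
folder of the openstack/releases repo. A rough sketch from memory (the project 
name, team, version and hash are all placeholders; check the 
cinder-tempest-plugin file for the authoritative layout):

    # deliverables/rocky/my-tempest-plugin.yaml (hypothetical sketch)
    launchpad: my-tempest-plugin
    team: my-team
    type: other
    release-model: cycle-with-intermediary
    releases:
      - version: 0.1.0
        projects:
          - repo: openstack/my-tempest-plugin
            hash: 0123456789abcdef0123456789abcdef01234567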

-gmann

 > 
 > > 
 > > [1] 
 > > http://lists.openstack.org/pipermail/openstack-dev/2018-June/131810.html
 > > [2] https://review.openstack.org/#/c/590025/
 > > [3] https://github.com/openstack/releases/tree/master/deliverables/rocky
 > > 
 > > -gmann
 > > 
 > > 
 > > 
 > > __
 > > OpenStack Development Mailing List (not for usage questions)
 > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > > 
 > 
 > 
 > __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [releases][rocky][tempest-plugins][ptl] Reminder to tag the Tempest plugins for Rocky release

2018-08-12 Thread Ghanshyam Mann
Hi All,

The Rocky release is a few weeks away, and we all agreed to release Tempest 
plugins with the cycle-with-intermediary model. The detailed discussion is in 
the ML [1] in case you missed it.

This is a reminder to tag your project's Tempest plugins for the Rocky release. 
You should be able to find your plugin's deliverable file under the rocky 
folder in the releases repo [3]. You can refer to the cinder-tempest-plugin 
release as an example. 

Feel free to reach out to the release/QA team for any help/query. 

[1] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131810.html  
[2] https://review.openstack.org/#/c/590025/   
[3] https://github.com/openstack/releases/tree/master/deliverables/rocky

-gmann



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] API updates week 02-08

2018-08-09 Thread Ghanshyam Mann
Hi All, 

Please find the Nova API highlights of this week. 

Weekly Office Hour: 
=== 

What we discussed this week: 
- Discussed the granular policy spec, to be updated now that the default roles 
are present.

- Discussed the keypair quota usage bug; only a doc update can be done for now. 
The patch is up for this: https://review.openstack.org/#/c/590081/ 

- Discussed the simple-tenant-usage bug about a value error. We need to handle 
the 500 error for non-ISO8601 time format input (a minimal sketch is below). 
The bug was reported on Pike, but that was due to an environment issue, as the 
author confirmed; I also tried this on master and it is not reproducible. 
Anyway, we need to handle the 500 in this API and I will push a patch for 
that: https://bugs.launchpad.net/nova/+bug/1783338 
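
The fix amounts to validating the input and turning a parse failure into a 
400. A minimal sketch, assuming the iso8601 and WebOb libraries nova already 
uses (the helper name is hypothetical):

    import iso8601
    from webob import exc

    def parse_usage_datetime(value):
        # Turn a bad timestamp into a 400 instead of an unhandled 500
        try:
            return iso8601.parse_date(value)
        except iso8601.ParseError:
            raise exc.HTTPBadRequest(
                explanation='Invalid datetime: %s' % value)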

Planned Features : 
== 
Below are the API related features which did not make it into Rocky and need 
to be proposed for Stein. Not much progress to share on these as of now. 

1. Servers Ips non-unique network names : 
- 
https://blueprints.launchpad.net/nova/+spec/servers-ips-non-unique-network-names
 
- Spec Merged 
- 
https://review.openstack.org/#/q/topic:bp/servers-ips-non-unique-network-names+(status:open+OR+status:merged)
 
- Weekly Progress: No progress. Needs to be re-proposed for Stein.

2. Volume multiattach enhancements: 
- https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements 
- 
https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged)
 
- Weekly Progress: No progress. 

3. API Extensions merge work 
- https://blueprints.launchpad.net/nova/+spec/api-extensions-merge-stein
- 
https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/api-extensions-merge-stein
 
- Weekly Progress: Done for Rocky, and a new BP is open for the remaining work. 
I will remove the deprecated extension policies first, which will be cleaner. 

4. Handling a down cell 
- https://blueprints.launchpad.net/nova/+spec/handling-down-cell 
- 
https://review.openstack.org/#/q/topic:bp/handling-down-cell+(status:open+OR+status:merged)
 
- Weekly Progress: No progress. Needs to be re-proposed for Stein.

Bugs: 
 
This week Bug Progress: 
https://etherpad.openstack.org/p/nova-api-weekly-bug-report 

Critical: 0->0 
High importance: 2->1 
By Status: 
New: 1->0 
Confirmed/Triage: 30-> 32 
In-progress: 34->32
Incomplete: 4->4 
= 
Total: 68->68 

NOTE: there might be some bugs which are not tagged as 'api' or 'api-ref'; 
those are not in the above list. Tag such bugs so that we can keep our eyes on them. 


-gmann 





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Stepping down as coordinator for the Outreachy internships

2018-08-08 Thread Ghanshyam Mann
Thanks Victoria for such great work and coordination. You have done 
remarkable work in the internships program. 

-gmann 

  On Wed, 08 Aug 2018 08:47:28 +0900 Victoria Martínez de la Cruz 
 wrote  
 > Hi all,
 > I'm reaching you out to let you know that I'll be stepping down as 
 > coordinator for OpenStack next round. I had been contributing to this effort 
 > for several rounds now and I believe is a good moment for somebody else to 
 > take the lead. You all know how important is Outreachy to me and I'm 
 > grateful for all the amazing things I've done as part of the Outreachy 
 > program and all the great people I've met in the way. I plan to keep 
 > involved with the internships but leave the coordination tasks to somebody 
 > else.
 > If you are interested in becoming an Outreachy coordinator, let me know and 
 > I can share my experience and provide some guidance.
 > Thanks,
 > Victoria 
 > __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder][api] strict schema validation and microversioning

2018-08-07 Thread Ghanshyam Mann



  On Wed, 08 Aug 2018 07:27:06 +0900 Monty Taylor  
wrote  
 > On 08/07/2018 05:03 PM, Akihiro Motoki wrote:
 > > Hi Cinder and API-SIG folks,
 > > 
 > > During reviewing a horizon bug [0], I noticed the behavior of Cinder API 
 > > 3.0 was changed.
 > > Cinder introduced more strict schema validation for creating/updating 
 > > volume encryption type
 > > during Rocky and a new micro version 3.53 was introduced[1].
 > > 
 > > Previously, Cinder API like 3.0 accepts unused fields in POST requests
 > > but after [1] landed unused fields are now rejected even when Cinder API 
 > > 3.0 is used.
 > > In my understanding on the microversioning, the existing behavior for 
 > > older versions should be kept.
 > > Is it correct?
 > 
 > I agree with your assessment that 3.0 was used there - and also that I 
 > would expect the api validation to only change if 3.53 microversion was 
 > used.

+1. As you know, Neutron also implemented strict validation in Rocky, but with 
discovery via a config option and the extensions mechanism. In the same way, 
Cinder should make this backward compatible, keeping the lenient behaviour for 
requests before the 3.53 microversion. 
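
The difference is essentially whether the request schema tolerates unknown 
fields. A minimal sketch of the idea with the jsonschema library (the schemas 
here are made up for illustration):

    import jsonschema

    lenient = {'type': 'object',
               'properties': {'name': {'type': 'string'}},
               'additionalProperties': True}   # pre-3.53: ignore extras
    strict = dict(lenient, additionalProperties=False)  # 3.53+: reject extras

    body = {'name': 'luks-type', 'unused_field': 'x'}
    jsonschema.validate(body, lenient)  # passes
    jsonschema.validate(body, strict)   # raises ValidationError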

-gmann 

 > 
 > 
 > __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa][barbican][novajoin][networking-fortinet][vmware-nsx] Dependency of Tempest changes

2018-08-06 Thread Ghanshyam Mann
Hi All,

Tempest patch [1] removes the deprecated config option for the volume v1 API, 
and it has dependencies on many plugins. I have proposed patches to each plugin 
using that option [2] to stop using it, so that their gates will not be broken 
when the Tempest patch merges. I have also made the Tempest patch depend on 
each plugin's commit (via Depends-On; a sketch is below). Many of those 
dependent patches have merged, but 4 patches have been hanging around for a 
long time, which is blocking the Tempest change from merging. 
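
For anyone unfamiliar with how that cross-repo gating works: it is just a 
footer in the Gerrit commit message, and Zuul will not let the Tempest change 
merge until the referenced changes have merged. A sketch (the subject, body 
and Change-Id are illustrative; the Depends-On URL is one of the real reviews 
listed below):

    Remove support of cinder v1 API option

    <commit message body>

    Depends-On: https://review.openstack.org/#/c/573174/
    Change-Id: I0123456789abcdef0123456789abcdef01234567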

 Below are the plugins which have not merged the changes:
   barbican-tempest-plugin - https://review.openstack.org/#/c/573174/ 
   novajoin-tempest-plugin - https://review.openstack.org/#/c/573175/ 
   networking-fortinet - https://review.openstack.org/#/c/573170/   
   vmware-nsx-tempest-plugin - https://review.openstack.org/#/c/573172/ 

I want to merge this Tempest patch in the Rocky release, which I am planning to 
do next week. To make that happen we have to merge the Tempest patch soon. If 
the above patches are not merged by the plugin teams within 2-3 days, which 
would suggest those plugins might not be active or do not care about their 
gate, I am going to remove their dependency from the Tempest patch and merge it.

[1] https://review.openstack.org/#/c/573135/ 
[2] 
https://review.openstack.org/#/q/topic:remove-support-of-cinder-v1-api+(status:open+OR+status:merged)

-gmann





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Should we add a tempest-slow job?

2018-08-06 Thread Ghanshyam Mann



  On Fri, 27 Jul 2018 00:14:04 +0900 Matt Riedemann  
wrote  
 > On 5/13/2018 9:06 PM, Ghanshyam Mann wrote:
 > >> +1 on idea. As of now slow marked tests are from nova, cinder and
 > >> neutron scenario tests and 2 API swift tests only [4]. I agree that
 > >> making a generic job in tempest is better for maintainability. We can
 > >> use existing job for that with below modification-
 > >> -  We can migrate
 > >> "legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend" job
 > >> zuulv3 in tempest repo
 > >> -  We can see if we can move migration tests out of it and use
 > >> "nova-live-migration" job (in tempest check pipeline ) which is much
 > >> better in live migration env setup and controlled by nova.
 > >> -  then it can be name something like
 > >> "tempest-scenario-multinode-lvm-multibackend".
 > >> -  run this job in nova, cinder, neutron check pipeline instead of 
 > >> experimental.
 > > Like this 
 > > -https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:scenario-tests-job
 > > 
 > > That makes scenario job as generic with running all scenario tests
 > > including slow tests with concurrency 2. I made few cleanup and moved
 > > live migration tests out of it which is being run by
 > > 'nova-live-migration' job. Last patch making this job as voting on
 > > tempest side.
 > > 
 > > If looks good, we can use this to run on project side pipeline as voting.
 > > 
 > > -gmann
 > > 
 > 
 > I should have said something earlier, but I've said it on my original 
 > nova change now:
 > 
 > https://review.openstack.org/#/c/567697/
 > 
 > What was implemented in Tempest isn't really at all what I was going 
 > for, especially since it doesn't run the API tests marked 'slow'. All I 
 > want is a job like tempest-full (which excludes slow tests) to be 
 > tempest-full which *only* runs slow tests. They would run a mutually 
 > exclusive set of tests so we have that coverage. I don't care if the 
 > scenario tests are run in parallel or serial (it's probably best to 
 > start in serial like tempest-full today and then change to parallel 
 > later if that settles down).
 > 
 > But I think it's especially important given:
 > 
 > https://review.openstack.org/#/c/567697/2
 > 
 > That we have a job which only runs slow tests because we're going to be 
 > marking more tests as "slow" pretty soon and we don't need the overlap 
 > with the existing tests that are run in tempest-full.

Agreed with your point. We now have the tempest-slow job available on the 
Tempest side to use across projects [1]; a sketch of consuming it is below.

I have updated this - https://review.openstack.org/#/c/567697


[1] 
https://github.com/openstack/tempest/blob/b2b666bd4b9aab08d0b7724c1f0b7465adde0d8d/.zuul.yaml#L146
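
Consuming it from the project side is then just a matter of listing the job in 
the project's pipeline definition; a hypothetical .zuul.yaml fragment:

    - project:
        check:
          jobs:
            - tempest-slow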

-gmann

 > 
 > --
 > Thanks,
 > 
 > Matt
 > 
 > __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] tempest-full-py3 rename means we now run that job on test-only changes

2018-08-06 Thread Ghanshyam Mann

  On Sun, 05 Aug 2018 06:44:26 +0900 Matt Riedemann  
wrote  
 > I've reported a nova bug for this:
 > 
 > https://bugs.launchpad.net/nova/+bug/1785425
 > 
 > But I'm not sure what is the best way to fix it now with the zuul v3 
 > hotness. We had an irrelevant-files entry in project-config for the 
 > tempest-full job but we don't have that for tempest-full-py3, so should 
 > we just rename that in project-config (guessing not)? Or should we do 
 > something in nova's .zuul.yaml like this (guessing yes):
 > 
 > https://review.openstack.org/#/c/578878/
 > 
 > The former is easy and branchless but I'm guessing the latter is what we 
 > should do long-term (and would require backports to stable branches).

Yeah, tempest-full-py3 does not have the nova-specific irrelevant-files defined 
on the project-config side. 

Just for background, the same issue existed for other jobs too, like 
tempest-full and grenade, where tempest-full used to run on doc/test-only 
changes as well [1]. That was fixed after making 'files' and 'irrelevant-files' 
overridable in Zuul [2].

IMO the same solution can be applied for tempest-full-py3 too; I pushed the 
patch for that [3] (a sketch of the idea is below). For new jobs, I feel we 
should always plan to do this in nova's .zuul.yaml, and the old entries on the 
project-config side can be moved to the nova side during the job migration work. 
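
The shape of the override is roughly the following (a hypothetical fragment 
for nova's .zuul.yaml; the actual file patterns are whatever [3] settles on):

    - project:
        check:
          jobs:
            - tempest-full-py3:
                irrelevant-files:
                  - ^doc/.*$
                  - ^nova/tests/.*$
                  - ^.*\.rst$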


[1] https://bugs.launchpad.net/nova/+bug/1745405  
https://bugs.launchpad.net/nova/+bug/1745431
[2] http://lists.openstack.org/pipermail/openstack-dev/2018-June/131304.html
[3] https://review.openstack.org/#/c/589039/

-gmann

 > 
 > -- 
 > 
 > Thanks,
 > 
 > Matt
 > 
 > __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] keypair quota usage info for user

2018-07-30 Thread Ghanshyam Mann



  On Sat, 28 Jul 2018 04:21:53 +0900 Matt Riedemann  
wrote  
 > On 7/27/2018 2:14 PM, Matt Riedemann wrote:
 > >>  From checking the history and review discussion on [3], it seems that 
 > >> it was like that from staring. key_pair quota is being counted when 
 > >> actually creating the keypair but it is not shown in API 'in_use' field.
 > > 
 > > Just so I'm clear which API we're talking about, you mean there is no 
 > > totalKeypairsUsed entry in 
 > > https://developer.openstack.org/api-ref/compute/#show-rate-and-absolute-limits
 > >  
 > > correct?
 > 
 > Nevermind I see it now:
 > 
 > https://developer.openstack.org/api-ref/compute/#show-the-detail-of-quota

Yeah, the 'in_use' field under 'key_pairs' in this API. 

 > 
 > We have too many quota-related APIs.
 > 
 > -- 
 > 
 > Thanks,
 > 
 > Matt
 > 
 > __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA][PTL][Election] Quality Assurance PTL Candidacy for Stein

2018-07-26 Thread Ghanshyam Mann
Hi Everyone,

I would like to announce my candidacy to continue in the Quality Assurance PTL 
role for the Stein cycle.

I served as QA PTL in the Rocky cycle, and as my first time in the PTL role it 
was a great experience for me. I put my best effort into Rocky and made sure 
that we continued serving the QA responsibilities well, while also improving 
many things in QA like new feature test coverage, docs, tracking process, etc.

In Rocky, the QA team successfully executed many of the targeted work items. 
A few items and things that went well are listed below:

* Zuul v3 migration, with base jobs available for cross-project use. 
* Running the volume v3 API as default in gate testing, along with a single 
job for the v2 API for compatibility checks. 
* A Tempest plugins release process mapped to Tempest releases. 
* Improving the test coverage and service clients.
* Releasing sub-projects like hacking, and fixing the version issues projects 
were facing on every hacking release. 
* Completing the compute microversion response schema gaps in Tempest.
* Finishing more and more work in Patrole to move it towards a stable release: 
documentation, more coverage, etc. 
* We were able to keep serving well irrespective of the resource shortage in QA.
* Supporting projects with testing and fixes to keep their development going. 

Apart from the above accomplishments, there are still a lot of improvements 
needed (listed below), and I will try my best to execute them in the Stein cycle.

* Tempest CLI unit test coverage, and switching the gate jobs to use all of 
the CLIs. This will help avoid regressions in the CLI.
* Tempest scenario manager refactoring, which is still in a messy state and 
hard to debug. 
* No progress on the QA SIG, which would help us share/consume QA tooling 
across communities. 
* No progress on the destructive testing (Eris) project. 
* Plugin cleanup to improve QA interface usage. 
* Bug triage: our target was to keep the new bug count low, which did not go 
well in Rocky. 

All the momentum and ongoing activities motivate me to continue for another 
term as QA PTL in order to take on more challenges. With that, let me 
summarize my goals and focus areas for the Stein cycle:

* Continue working on the backlog items from the above list and finish them 
based on priority.
* Help the projects' development with test writing/improvement and gate 
stability.
* Plugin improvements, and helping plugins with everything they need from QA. 
This area needs more process and collaboration with the plugin teams. 
* Try my best to make progress on the Eris project.  
* Start the QA SIG to help cross-community collaboration.  
* Bring on more contributors and core reviewers.

Thanks for reading and considering my candidacy for the Stein cycle.

-gmann







__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Lots of slow tests timing out jobs

2018-07-25 Thread Ghanshyam Mann



  On Wed, 25 Jul 2018 22:22:24 +0900 Matt Riedemann  
wrote  
 > On 7/25/2018 1:46 AM, Ghanshyam Mann wrote:
 > > yeah, there are many tests taking too long time. I do not know the reason 
 > > this time but last time we did audit for slow tests was mainly due to ssh 
 > > failure.
 > > I have created the similar ethercalc [3] to collect time taking tests and 
 > > then round figure of their avg time taken since last 14 days from health 
 > > dashboard. Yes, there is no calculated avg time on o-h so I did not take 
 > > exact avg time its round figure.
 > > 
 > > May be 14 days  is too less to take decision to mark them slow but i think 
 > > their avg time since 3 months will be same. should we consider 3 month 
 > > time period for those ?
 > > 
 > > As per avg time, I have voted (currently based on 14 days avg) on 
 > > ethercalc which all test to mark as slow. I taken the criteria of >120 sec 
 > > avg time.  Once we have more and more people votes there we can mark them 
 > > slow.
 > > 
 > > [3]https://ethercalc.openstack.org/dorupfz6s9qt
 > 
 > Thanks for this. I haven't gone through all of the tests in there yet, 
 > but noticed (yesterday) a couple of them were personality file compute 
 > API tests, which I thought was strange. Do we have any idea where the 
 > time is being spent there? I assume it must be something with ssh 
 > validation to try and read injected files off the guest. I need to dig 
 > into this one a bit more because by default, file injection is disabled 
 > in the libvirt driver so I'm not even sure how these are running (or 
 > really doing anything useful). 

That is set to True explicitly in the tempest-full job [1], and then devstack 
sets it to True for nova. 

 > Given we have deprecated personality 
 > files in the compute API [1] I would definitely mark those as slow tests 
 > so we can still run them but don't care about them as much.

Makes sense, +1.


[1] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n56

-gmann
 > 
 > [1] 
 > https://docs.openstack.org/nova/latest/reference/api-microversion-history.html#id52
 > 
 > -- 
 > 
 > Thanks,
 > 
 > Matt
 > 
 > __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] API updates week 19-25

2018-07-25 Thread Ghanshyam Mann



  On Wed, 25 Jul 2018 23:53:18 +0900 Surya Seetharaman 
 wrote  
 > Hi!
 > On Wed, Jul 25, 2018 at 11:53 AM, Ghanshyam Mann  
 > wrote:
 > 
 >  5. API Extensions merge work 
 >  - https://blueprints.launchpad.net/nova/+spec/api-extensions-merge-rocky 
 >  - 
 > https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/api-extensions-merge-rocky
 >  
 >  - Weekly Progress: part-1 of schema merge and part-2 of server_create merge 
 > has been merged for Rocky. 1 last patch of removing the placeholder method 
 > are on gate.
 >  part-3 of view builder merge 
 > cannot make it to Rocky (7 patch up for review + 5 more to push)< Postponed 
 > this work to Stein.
 >  
 >  6. Handling a down cell 
 >  - https://blueprints.launchpad.net/nova/+spec/handling-down-cell 
 >  - 
 > https://review.openstack.org/#/q/topic:bp/handling-down-cell+(status:open+OR+status:merged)
 >  
 >  - Weekly Progress: It is difficult to make it in Rocky? matt has open 
 > comment on patch about changing the service list along with server list in 
 > single microversion which make 
 > sense. 
 > 
 > 
 > ​The handling down cell spec related API changes will also be postponed to 
 > Stein since the view builder merge (part-3 of API Extensions merge work)​ is 
 > postponed to Stein. It would be more cleaner.

Yeah, I will make sure the view builder work gets in early in Stein. I am 
going to push all the remaining patches and make them ready for review once we 
have the Stein branch. 

-gmann

 > -- 
 > 
 > Regards,
 > Surya.
 >   __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] API updates week 19-25

2018-07-25 Thread Ghanshyam Mann
Hi All, 

Please find the Nova API highlights of this week. 

Weekly Office Hour: 
=== 

What we discussed this week: 
- Discussion on priority BP and remaining reviews on those. 
- Discussed keypair quota usage bug. 

Planned Features : 
== 
Below are the API related features for the Rocky cycle. The Nova API subteam 
will keep reviewing them to give regular feedback. If anything is missing, 
feel free to add it in the etherpad: 
https://etherpad.openstack.org/p/rocky-nova-priorities-tracking 

1. Servers Ips non-unique network names : 
- 
https://blueprints.launchpad.net/nova/+spec/servers-ips-non-unique-network-names
 
- Spec Merged 
- 
https://review.openstack.org/#/q/topic:bp/servers-ips-non-unique-network-names+(status:open+OR+status:merged)
 
- Weekly Progress: I did not start this due to other work. It cannot make it 
into Rocky; I will plan it for early Stein. 

2. Abort live migration in queued state: 
- 
https://blueprints.launchpad.net/nova/+spec/abort-live-migration-in-queued-status
 
- 
https://review.openstack.org/#/q/topic:bp/abort-live-migration-in-queued-status+(status:open+OR+status:merged)
 
- Weekly Progress: COMPLETED

3. Complex anti-affinity policies: 
- https://blueprints.launchpad.net/nova/+spec/complex-anti-affinity-policies 
- 
https://review.openstack.org/#/q/topic:bp/complex-anti-affinity-policies+(status:open+OR+status:merged)
 
- Weekly Progress: COMPLETED

4. Volume multiattach enhancements: 
- https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements 
- 
https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged)
 
- Weekly Progress: No progress. 

5. API Extensions merge work 
- https://blueprints.launchpad.net/nova/+spec/api-extensions-merge-rocky 
- 
https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/api-extensions-merge-rocky
 
- Weekly Progress: part-1 (schema merge) and part-2 (server_create merge) have 
been merged for Rocky. One last patch removing the placeholder methods is in 
the gate. Part-3 (view builder merge) cannot make it into Rocky (7 patches up 
for review + 5 more to push). Postponed this work to Stein.

6. Handling a down cell 
- https://blueprints.launchpad.net/nova/+spec/handling-down-cell 
- 
https://review.openstack.org/#/q/topic:bp/handling-down-cell+(status:open+OR+status:merged)
 
- Weekly Progress: It is difficult to make it into Rocky. Matt has an open 
comment on the patch about changing the service list along with the server 
list in a single microversion, which makes sense. 

Bugs: 
 
Discussed the keypair quota bug. Sent a separate mailing list thread for more 
feedback [1].

This week Bug Progress:   
https://etherpad.openstack.org/p/nova-api-weekly-bug-report 

Critical: 0->0 
High importance: 3->2
By Status: 
New: 0->0 
Confirmed/Triage: 29-> 30 
In-progress: 36->34
Incomplete: 4->4 
= 
Total: 69->68

NOTE: there might be some bugs which are not tagged as 'api' or 'api-ref'; 
those are not in the above list. Tag such bugs so that we can keep our eyes on them. 


[1] http://lists.openstack.org/pipermail/openstack-dev/2018-July/132459.html

-gmann 






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] keypair quota usage info for user

2018-07-25 Thread Ghanshyam Mann
Hi All,

During today's API office hour, we were discussing the keypair quota usage bug 
(Newton) [1]. The key_pair 'in_use' quota is always 0, even for a per-user 
request, because it is always set to 0 [2].

From checking the history and the review discussion on [3], it seems that it 
was like that from the start: the key_pair quota is counted when actually 
creating the keypair, but the count is not shown in the API 'in_use' field (a 
sketch of the affected response is below). Vishakha (the assignee of this bug) 
is currently planning to work on it, and before that we have a few queries:

1. Is it OK to show the keypair usage info via the API? Was there an original 
rationale not to do so, or was it just like that from the start?  

2. Because this change will show the keypair used-quota information in the 
API's existing field 'in_use', it is an API behaviour change (not an interface 
signature change in a backward incompatible way) which can cause interop 
issues. Should we bump the microversion for this change? 

[1] https://bugs.launchpad.net/nova/+bug/1644457 
[2] 
https://github.com/openstack/nova/blob/bf497cc47497d3a5603bf60de652054ac5ae1993/nova/quota.py#L189
 
[3] https://review.openstack.org/#/c/446239/
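
For context, this is a hypothetical sketch of the relevant part of a 
GET /os-quota-sets/{tenant_id}/detail response (values are illustrative): 
'in_use' stays 0 for key_pairs today even after keypairs have been created.

    quota_set_detail = {
        "quota_set": {
            # always 0 today, even when the user has keypairs
            "key_pairs": {"in_use": 0, "limit": 100, "reserved": 0},
            # ... other resources elided ...
        }
    }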

-gmann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Lots of slow tests timing out jobs

2018-07-25 Thread Ghanshyam Mann
  On Wed, 25 Jul 2018 05:15:53 +0900 Matt Riedemann  
wrote  
 > While going through our uncategorized gate failures [1] I found that we 
 > have a lot of jobs failing (161 in 7 days) due to the tempest run timing 
 > out [2]. I originally thought it was just the networking scenario tests, 
 > but I was able to identify a handful of API tests that are also taking 
 > nearly 3 minutes each, which seems like they should be moved to scenario 
 > tests and/or marked slow so they can be run in a dedicated tempest-slow job.
 > 
 > I'm not sure how to get the history on the longest-running tests on 
 > average to determine where to start drilling down on the worst 
 > offenders, but it seems like an audit is in order.

Yeah, there are many tests taking too long. I do not know the reason this 
time, but the last time we did an audit of slow tests it was mainly due to ssh 
failures. I have created a similar ethercalc [3] to collect the time-consuming 
tests and a rough figure of their average run time over the last 14 days from 
the health dashboard. Yes, there is no calculated average time on 
openstack-health, so I did not take the exact average time, just a rough figure. 

Maybe 14 days is too short a window to decide to mark them slow, but I think 
their average time over 3 months will be the same. Should we consider a 
3-month time period for those?

Based on average time, I have voted on the ethercalc (currently using the 
14-day average) for which tests to mark as slow, taking a criterion of 
>120 sec average time. Once more people have voted there, we can mark them 
slow; a sketch of what marking involves is below. 

[3] https://ethercalc.openstack.org/dorupfz6s9qt
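
For reference, marking a test as slow is just the attr decorator from 
tempest.lib; a minimal hypothetical sketch (the class and test names are made up):

    from tempest.lib import decorators
    from tempest import test

    class VolumesEncryptionTest(test.BaseTestCase):

        # type='slow' keeps this test out of tempest-full; only the
        # slow job will run it
        @decorators.attr(type='slow')
        def test_attach_encrypted_volume(self):
            ...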

-gmann

 > 
 > [1] http://status.openstack.org/elastic-recheck/data/integrated_gate.html
 > [2] https://bugs.launchpad.net/tempest/+bug/1783405
 > 
 > -- 
 > 
 > Thanks,
 > 
 > Matt
 > 
 > __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa] [tempest] [patrole] Service client duplication between Tempest and Tempest plugins

2018-07-24 Thread Ghanshyam Mann
  On Wed, 25 Jul 2018 10:27:26 +0900 MONTEIRO, FELIPE C  
wrote  
 > Please see comments inline.  
 >  
 > >   On Tue, 24 Jul 2018 04:22:47 +0900 MONTEIRO, FELIPE C 
 > >  wrote  
 > >  >   Hi, 
 > >  > 
 > >  >  ** Intention ** 
 > >  >  Intention is to expand Patrole testing to some service clients that 
 > > already 
 > > exist in some Tempest plugins, for core services only. 
 > >  > 
 > >  >  ** Background ** 
 > >  >  Digging through Neutron testing, it seems like there is currently a 
 > > lot of 
 > > test duplication between neutron-tempest-plugin and Tempest [1]. Under 
 > > some circumstances it seems OK to have redundant testing/parallel  
 > > testing: 
 > > “Having potential duplication between testing is not a big deal especially 
 > > compared to the alternative of removing something which is actually 
 > > providing value and is actively catching bugs, or blocking incorrect 
 > > patches 
 > > from landing” [2]. 
 > >  
 > > We really need to minimize the test duplication. If there is test in 
 > > tempest 
 > > plugin for core services then, we do not need to add those in Tempest repo 
 > > until it is interop requirement. This is for new tests so we can avoid the 
 > > duplication in future. I will write this in Tempest reviewer guide. 
 > > For existing duplicate tests, as per bug you mentioned[1] we need to 
 > > cleanup 
 > > the duplicate tests and they should live in their respective repo(either 
 > > in 
 > > neutron tempest plugin or tempest) which is categorized in etherpad[7]. 
 > > How 
 > > many tests are duplicated now? I will plan this as one of cleanup working 
 > > item in stein. 
 > >  
 > >  > 
 > >  >  This leads me to the following question: If API test duplication is 
 > > OK, what 
 > > about service client duplication? Patches like [3] and [4]  promote 
 > > service 
 > > client duplication with neutron-tempest-plugin. As far as I can tell, 
 > > Neutron 
 > > builds out some of its service clients dynamically here: [5]. Which 
 > > includes 
 > > segments service client (proposed as an addition to tempest.lib in [4]) 
 > > here: 
 > > [6]. 
 > >  
 > > Yeah, they are very dynamic in neutron plugins and its because of old 
 > > legacy 
 > > code. That is because when neutron tempest plugin was forked from 
 > > Tempest as it is. These dynamic generation of service clients are really 
 > > hard 
 > > to debug and maintain. This can easily lead to backward incompatible 
 > > changes if we make those service clients stable interface to consume 
 > > outside. For those reason, we did fixed those in Tempest 3 years back [8] 
 > > and 
 > > made them  static and consistent service client methods like other 
 > > services 
 > > clients. 
 > >  
 > >  > 
 > >  >  This leads to a situation where if we want to offer RBAC testing for 
 > > these 
 > > APIs (to validate their policy enforcement), we can’t really do so without 
 > > adding the service client to Tempest, unless  we rely on the 
 > > neutron-tempest- 
 > > plugin (for example) in Patrole’s .zuul.yaml. 
 > >  > 
 > >  >  ** Path Forward ** 
 > >  >  Option #1: For the core services, most service clients should live in 
 > > tempest.lib for standardization/governance around documentation and 
 > > stability for those clients. Service client duplication  should try to be 
 > > minimized as much as possible. API testing related to some service 
 > > clients, 
 > > though, should remain in the Tempest plugins. 
 > >  > 
 > >  >  Option #2: Proceed with service client duplication, either by adding 
 > > the 
 > > service client to Tempest (or as yet another alternative, Patrole). This 
 > > leads 
 > > to maintenance overhead: have to maintain  service clients in the plugins 
 > > and 
 > > Tempest itself. 
 > >  > 
 > >  >  Option #3: Don’t offer RBAC testing in Patrole plugin for those APIs. 
 > >  
 > > We need to share the service clients among Tempest plugins. And each 
 > > service clients which are being shared across repo has to be declared as 
 > > stable interface like Tempest does. Idea here is service clients will live 
 > > in the 
 > > repo where their original tests were added or going to be added. For 
 > > example in case of neutron tempest plugin, if rbac-policy API tests are in 
 > > neutron then its service client needs to be owned by 
 > > neutron-tempest-plugin. 
 > > further rbac-policy service client can be consumed by Patrole. It is same 
 > > case 
 > > for congress tempest plugin, where they consume mistral service client. I 
 > > recommended the same in that thread also of using service client from 
 > > Mistral and Mistral make the service client as stable interface [9]. Which 
 > > is 
 > > being done in congress[10] 
 > >  
 > > Here are the general recommendation for Tempest Plugins for service 
 > > clients 
 > > : 
 > > - Tempest Plugins should make their service clients as stable interface 
 > > which 
 > > gives 2 advantage: 
 >  
 > In this case we should also 

Re: [openstack-dev] [qa] [tempest] [patrole] Service client duplication between Tempest and Tempest plugins

2018-07-23 Thread Ghanshyam Mann
  On Tue, 24 Jul 2018 04:22:47 +0900 MONTEIRO, FELIPE C  
wrote  
 >   Hi,
 >   
 >  ** Intention **
 >  Intention is to expand Patrole testing to some service clients that already 
 > exist in some Tempest plugins, for core services only.
 >   
 >  ** Background **
 >  Digging through Neutron testing, it seems like there is currently a lot of 
 > test duplication between neutron-tempest-plugin and Tempest [1]. Under some 
 > circumstances it seems OK to have redundant testing/parallel  testing: 
 > “Having potential duplication between testing is not a big deal especially 
 > compared to the alternative of removing something which is actually 
 > providing value and is actively catching bugs, or blocking incorrect patches 
 > from landing” [2].

We really need to minimize the test duplication. If a test for a core service 
exists in a tempest plugin then we do not need to add it to the Tempest repo 
until it becomes an interop requirement. That covers new tests, so we can 
avoid duplication in the future; I will write this up in the Tempest reviewer 
guide. For the existing duplicate tests, as per the bug you mentioned [1], we 
need to clean them up so that they live in their respective repo (either in 
the neutron tempest plugin or in Tempest), which is categorized in the 
etherpad [7]. How many tests are duplicated now? I will plan this as one of 
the cleanup work items in Stein. 

 >   
 >  This leads me to the following question: If API test duplication is OK, 
 > what about service client duplication? Patches like [3] and [4]  promote 
 > service client duplication with neutron-tempest-plugin. As far as I can 
 > tell, Neutron builds out some of its service clients dynamically here: [5]. 
 > Which includes segments service client (proposed as an addition to 
 > tempest.lib in [4]) here: [6].

Yeah, they are very dynamic in the neutron plugin, and that is because of old 
legacy code: the neutron tempest plugin was forked from Tempest as-is. This 
dynamic generation of service clients is really hard to debug and maintain, 
and it can easily lead to backward incompatible changes if we make those 
service clients a stable interface for outside consumption. For those reasons, 
we fixed this in Tempest 3 years back [8] and made them static and consistent 
service client methods like the other service clients. 

 >   
 >  This leads to a situation where if we want to offer RBAC testing for these 
 > APIs (to validate their policy enforcement), we can’t really do so without 
 > adding the service client to Tempest, unless  we rely on the 
 > neutron-tempest-plugin (for example) in Patrole’s .zuul.yaml.
 >   
 >  ** Path Forward **
 >  Option #1: For the core services, most service clients should live in 
 > tempest.lib for standardization/governance around documentation and 
 > stability for those clients. Service client duplication  should try to be 
 > minimized as much as possible. API testing related to some service clients, 
 > though, should remain in the Tempest plugins.
 >   
 >  Option #2: Proceed with service client duplication, either by adding the 
 > service client to Tempest (or as yet another alternative, Patrole). This 
 > leads to maintenance overhead: have to maintain  service clients in the 
 > plugins and Tempest itself.
 >   
 >  Option #3: Don’t offer RBAC testing in Patrole plugin for those APIs.

We need to share the service clients among Tempest plugins, and each service 
client which is shared across repos has to be declared a stable interface, 
like Tempest does. The idea here is that service clients live in the repo 
where their original tests were added or are going to be added. For example, 
in the case of the neutron tempest plugin, if the rbac-policy API tests are in 
neutron then the corresponding service client needs to be owned by 
neutron-tempest-plugin; the rbac-policy service client can then be consumed by 
Patrole. It is the same for the congress tempest plugin, where they consume 
the mistral service client. I recommended the same in that thread: consume the 
service client from Mistral and have Mistral make the service client a stable 
interface [9], which is being done in congress [10].

Here are the general recommendations to Tempest plugins for service clients:
- Tempest plugins should make their service clients stable interfaces, which 
gives 2 advantages:
  1. By this you make sure that you are not allowing changes to the API 
calling interface (the service clients), which indirectly means you are not 
allowing changes to the APIs. This makes your tempest plugin testing more reliable.

   2. Your service clients can be used in other Tempest plugins, avoiding 
duplicate code/interfaces. If other plugins use your service clients, they 
also test your project, so it is good to help them by providing the required 
interface as stable.

The initial idea of owning the service clients in their respective plugins was 
to share them among plugins for integrated testing of more than one OpenStack 
service. A sketch of what such a stable, static service client looks like is below.
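
A minimal hypothetical sketch of a plugin service client built on tempest.lib 
(the API path and names are made up for illustration):

    import json

    from tempest.lib.common import rest_client

    class RbacPoliciesClient(rest_client.RestClient):
        """Static, stable client for a hypothetical rbac-policies API."""

        def list_rbac_policies(self):
            # one explicit method per API call; no dynamic generation
            resp, body = self.get('v2.0/rbac-policies')
            self.expected_success(200, resp.status)
            return rest_client.ResponseBody(resp, json.loads(body))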

- Usage of service 

Re: [openstack-dev] [nova][cinder][neutron][qa] Should we add a tempest-slow job?

2018-07-18 Thread Ghanshyam Mann
 > On Sun, May 13, 2018 at 1:20 PM, Ghanshyam Mann  
 > wrote: 
 > > On Fri, May 11, 2018 at 10:45 PM, Matt Riedemann  
 > > wrote: 
 > >> The tempest-full job used to run API and scenario tests concurrently, and 
 > >> if 
 > >> you go back far enough I think it also ran slow tests. 
 > >> 
 > >> Sometime in the last year or so, the full job was changed to run the 
 > >> scenario tests in serial and exclude the slow tests altogether. So the 
 > >> API 
 > >> tests run concurrently first, and then the scenario tests run in serial. 
 > >> During that change, some other tests were identified as 'slow' and marked 
 > >> as 
 > >> such, meaning they don't get run in the normal tempest-full job. 
 > >> 
 > >> There are some valuable scenario tests marked as slow, however, like the 
 > >> only encrypted volume testing we have in tempest is marked slow so it 
 > >> doesn't get run on every change for at least nova. 
 > > 
 > > Yes, basically slow tests were selected based on 
 > > https://ethercalc.openstack.org/nu56u2wrfb2b and there were frequent 
 > > gate failure for heavy tests mainly from ssh checks so we tried to 
 > > mark more tests as slow. 
 > > I agree that some of them are not really slow at least in today situation. 
 > > 
 > >> 
 > >> There is only one job that can be run against nova changes which runs the 
 > >> slow tests but it's in the experimental queue so people forget to run it. 
 > > 
 > > Tempest job 
 > > "legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend" 
 > > run those slow tests including migration and LVM  multibackend tests. 
 > > This job runs on tempest check pipeline and experimental (as you 
 > > mentioned) on nova and cinder [3]. We marked this as n-v to check its 
 > > stability and now it is good to go as voting on tempest. 
 > > 
 > >> 
 > >> As a test, I've proposed a nova-slow job [1] which only runs the slow 
 > >> tests 
 > >> and only the compute API and scenario tests. Since there currently no 
 > >> compute API tests marked as slow, it's really just running slow scenario 
 > >> tests. Results show it runs 37 tests in about 37 minutes [2]. The overall 
 > >> job runtime was 1 hour and 9 minutes, which is on average less than the 
 > >> tempest-full job. The nova-slow job is also running scenarios that nova 
 > >> patches don't actually care about, like the neutron IPv6 scenario tests. 
 > >> 
 > >> My question is, should we make this a generic tempest-slow job which can 
 > >> be 
 > >> run either in the integrated-gate or at least in nova/neutron/cinder 
 > >> consistently (I'm not sure if there are slow tests for just keystone or 
 > >> glance)? I don't know if the other projects already have something like 
 > >> this 
 > >> that they gate on. If so, a nova-specific job for nova changes is fine 
 > >> for 
 > >> me. 
 > > 
 > > +1 on idea. As of now slow marked tests are from nova, cinder and 
 > > neutron scenario tests and 2 API swift tests only [4]. I agree that 
 > > making a generic job in tempest is better for maintainability. We can 
 > > use existing job for that with below modification- 
 > > -  We can migrate 
 > > "legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend" job 
 > > zuulv3 in tempest repo 
 > > -  We can see if we can move migration tests out of it and use 
 > > "nova-live-migration" job (in tempest check pipeline ) which is much 
 > > better in live migration env setup and controlled by nova. 
 > > -  then it can be name something like 
 > > "tempest-scenario-multinode-lvm-multibackend". 
 > > -  run this job in nova, cinder, neutron check pipeline instead of 
 > > experimental. 
 >  
 > Like this - 
 > https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:scenario-tests-job
 >  
 >  
 > That makes scenario job as generic with running all scenario tests 
 > including slow tests with concurrency 2. I made few cleanup and moved 
 > live migration tests out of it which is being run by 
 > 'nova-live-migration' job. Last patch making this job as voting on 
 > tempest side. 
 >  
 > If looks good, we can use this to run on project side pipeline as voting. 

Update on this thread:
The old scenario job 
"legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend" has been 
migrated to Tempest as a new job named "tempest-scenario-all" [1] 

Changes from old job to ne

[openstack-dev] [nova]API update week 12-18

2018-07-18 Thread Ghanshyam Mann
Hi All, 

Please find the Nova API highlights of this week. 

Weekly Office Hour: 
=== 

What we discussed this week: 
- Discussion on priority BP and remaining reviews on those. 
- picked up 3 in-progress bug's patches and reviewed. 

Planned Features : 
== 
Below are the API related features for Rocky cycle. Nova API Sub team will 
start reviewing those to give their regular feedback. If anythings missing 
there feel free to add those in etherpad- 
https://etherpad.openstack.org/p/rocky-nova-priorities-tracking 

1. Servers Ips non-unique network names : 
- 
https://blueprints.launchpad.net/nova/+spec/servers-ips-non-unique-network-names
 
- Spec Merged 
- 
https://review.openstack.org/#/q/topic:bp/servers-ips-non-unique-network-names+(status:open+OR+status:merged)
 
- Weekly Progress: I sent mail to the author but have had no response yet. I 
will push the code update early next week. 

2. Abort live migration in queued state: 
- 
https://blueprints.launchpad.net/nova/+spec/abort-live-migration-in-queued-status
 
- 
https://review.openstack.org/#/q/topic:bp/abort-live-migration-in-queued-status+(status:open+OR+status:merged)
 
- Weekly Progress: The API patch is in the gate to merge. The novaclient patch 
remains to mark this complete (Kevin mentioned he is working on that). 

3. Complex anti-affinity policies: 
- https://blueprints.launchpad.net/nova/+spec/complex-anti-affinity-policies 
- 
https://review.openstack.org/#/q/topic:bp/complex-anti-affinity-policies+(status:open+OR+status:merged)
 
- Weekly Progress: The API patch is merged. The novaclient patch and one 
follow-up patch remain. 

4. Volume multiattach enhancements: 
- https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements 
- 
https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged)
 
- Weekly Progress: No progress. 

5. API Extensions merge work 
- https://blueprints.launchpad.net/nova/+spec/api-extensions-merge-rocky 
- 
https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/api-extensions-merge-rocky
 
- Weekly Progress: I pushed patches for part-2 (the server_create merge). I 
will push the last part, part-3, by early next week at the latest. 

6. Handling a down cell 
- https://blueprints.launchpad.net/nova/+spec/handling-down-cell 
- 
https://review.openstack.org/#/q/topic:bp/handling-down-cell+(status:open+OR+status:merged)
- Weekly Progress: Code is up and Matt has reviewed a few patches. The API 
subteam will target this BP since the other BP work is almost all merged. 

Bugs: 
 
Did reviews on the in-progress bugs' patches. 

This week Bug Progress: 
Critical: 0->0 
High importance: 3->3 
By Status: 
New: 0->0 
Confirmed/Triage: 31-> 29 
In-progress: 36->36 
Incomplete: 4->4 
= 
Total: 70->69

NOTE: there might be some bugs which are not tagged as 'api' or 'api-ref'; 
those are not in the above list. Tag such bugs so that we can keep our eyes on them. 

Ref: https://etherpad.openstack.org/p/nova-api-weekly-bug-report 

-gmann 





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa][ptg] Stein PTG Planning for QA

2018-07-11 Thread Ghanshyam Mann
Hi All,

As we are getting close to the Stein PTG in Denver, I have prepared an 
etherpad [1] to collect PTG topic ideas for QA. Please start adding the 
items/topics you want to discuss at the PTG, or comment on the proposed 
topics. Even if you are not making it to the PTG physically, still add the 
topics you want us to discuss or make progress on.  

[1] https://etherpad.openstack.org/p/qa-stein-ptg

-gmann





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova]API update week 5-11

2018-07-11 Thread Ghanshyam Mann
Hi All, 

Please find the Nova API highlights of this week. 

Weekly Office Hour: 
=== 
We had more attendees in this week's office hour.  

What we discussed this week: 
- Discussion on the API related BPs. Discussion points are embedded inline 
with the BP weekly progress in the next section. 
- Triaged 1 new bug, and Alex reviewed one in-progress patch. 

Planned Features : 
== 
Below are the API related features for the Rocky cycle. The Nova API subteam 
will keep reviewing them to give regular feedback. If anything is missing, 
feel free to add it in the etherpad: 
https://etherpad.openstack.org/p/rocky-nova-priorities-tracking 

1. Servers Ips non-unique network names : 
- 
https://blueprints.launchpad.net/nova/+spec/servers-ips-non-unique-network-names
 
- Spec Merged
- 
https://review.openstack.org/#/q/topic:bp/servers-ips-non-unique-network-names+(status:open+OR+status:merged)
 
- Weekly Progress: The spec is merged. I am in contact with the author about 
the code update (sent email last night). If there is no response this week, I 
will push the code update for this BP.  

2. Abort live migration in queued state: 
- 
https://blueprints.launchpad.net/nova/+spec/abort-live-migration-in-queued-status
 
- 
https://review.openstack.org/#/q/topic:bp/abort-live-migration-in-queued-status+(status:open+OR+status:merged)
 
- Weekly Progress: Review is ongoing and it is in the nova runway this week. 
In the API office hour, we discussed doing the compute service version checks 
on the compute/api.py side rather than on the RPC side. Dan has a point about 
doing it on the RPC side, where the migration status can change to running. We 
decided to discuss it further on the patch. 

3. Complex anti-affinity policies: 
- https://blueprints.launchpad.net/nova/+spec/complex-anti-affinity-policies 
- 
https://review.openstack.org/#/q/topic:bp/complex-anti-affinity-policies+(status:open+OR+status:merged)
 
- Weekly Progress: Good review progress. In the API office hour, we discussed 
2 points:
   1. Whether the request also needs to have a flat format like the response. 
IMO we need to have the flat format in both the request and the response. 
Yikun needs more opinions on that. 

   2. Naming the fields policy_*, as we are moving these new fields to a flat 
format. I would like to have policy_* for a clear understanding of the 
attributes by their names. This is not concluded yet and Alex will give 
feedback on the patch.
   The discussion is on the patch to get consensus on naming things. 

4. Volume multiattach enhancements: 
- https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements 
- 
https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged)
 
- Weekly Progress: mriedem mentioned in last week's status mail that he will 
continue working on this. 

5. API Extensions merge work 
- https://blueprints.launchpad.net/nova/+spec/api-extensions-merge-rocky 
- 
https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/api-extensions-merge-rocky
 
- Weekly Progress: I did not get a chance to push more patches for this. I 
will target it before the next office hour. 

6. Handling a down cell
 - https://blueprints.launchpad.net/nova/+spec/handling-down-cell
 - The spec mriedem mentioned in the previous week's ML has merged: 
https://review.openstack.org/#/c/557369/

Bugs: 
 
Triaged 1 new bug, and Alex reviewed one in-progress patch. I did not do my 
homework of reviewing the in-progress patches (I will accommodate that next week). 

This week's Bug Progress: 
Critical: 0->0 
High importance: 2->3 
By Status:
New: 1->0
Confirmed/Triage: 30->31
In-progress: 36->36
Incomplete: 4->4
=
Total: 70->71

NOTE: There might be some bugs which are not tagged 'api' or 'api-ref'; those 
are not in the above list. Please tag such bugs so that we can keep an eye on 
them. 

Ref: https://etherpad.openstack.org/p/nova-api-weekly-bug-report 

-gmann 





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [nova][api] Novaclient redirect endpoint https into http

2018-07-05 Thread Ghanshyam Mann



  On Fri, 06 Jul 2018 11:30:15 +0900 Alex Xu  wrote  
 > 
 > 
 > 2018-07-06 10:03 GMT+08:00 Alex Xu :
 > 
 > 
 > 2018-07-06 2:55 GMT+08:00 melanie witt :
 > +openstack-dev@
 >  
 >  On Wed, 4 Jul 2018 14:50:26 +, Bogdan Katynski wrote:
 >   But, I can not use nova command, endpoint nova have been redirected from 
 > https to http. Here: http://prntscr.com/k2e8s6  (command: nova --insecure 
 > service list)
 >   First of all, it seems that the nova client is hitting /v2.1 instead of 
 > /v2.1/ URI and this seems to be triggering the redirect.
 >  
 >  Since openstack CLI works, I presume it must be using the correct URL and 
 > hence it’s not getting redirected.
 >  
 > And this is error log: Unable to establish connection 
 > to http://192.168.30.70:8774/v2.1/: ('Connection aborted.', 
 > BadStatusLine("''",))
 >
 >   Looks to me that nova-api does a redirect to an absolute URL. I suspect 
 > SSL is terminated on the HAProxy and nova-api itself is configured without 
 > SSL so it redirects to an http URL.
 >  
 >  In my opinion, nova would be more load-balancer friendly if it used a 
 > relative URI in the redirect but that’s outside of the scope of this 
 > question and since I don’t know the context behind choosing the absolute 
 > URL, I could be wrong on that.
 >   
 >  Thanks for mentioning this. We do have a bug open in python-novaclient 
 > around a similar issue [1]. I've added comments based on this thread and 
 > will consult with the API subteam to see if there's something we can do 
 > about this in nova-api.

We can support both URLs for the version API in that case (/v2.1 and /v2.1/). 
The redirect from the relative to the absolute URL can be replaced by mapping 
'' to the 'GET': [version_controller, 'show'] route, something like [1]. 

[1] https://review.openstack.org/#/c/580544/
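
A minimal WSGI sketch of that idea (illustrative only; nova's real version 
controller and routing are more involved):

    # Hypothetical standalone sketch: serve the version document for
    # both '' and '/' instead of redirecting, so no absolute http://
    # Location header can leak past the SSL terminator.
    def version_app(environ, start_response):
        path = environ.get('PATH_INFO', '')
        if path in ('', '/'):
            # GET '' is routed to the version 'show' action directly.
            body = b'{"version": {"id": "v2.1"}}'
            start_response('200 OK',
                           [('Content-Type', 'application/json')])
            return [body]
        start_response('404 Not Found', [('Content-Type', 'text/plain')])
        return [b'not found']

    if __name__ == '__main__':
        from wsgiref.simple_server import make_server
        make_server('', 8774, version_app).serve_forever()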

-gmann

 >  
 > 
 > Emm...check with the RFC, it said the value of Location header is absolute 
 > URL https://tools.ietf.org/html/rfc2616.html#section-14.30
 > Sorry, correct that. RFC 7231 updated that. The relative URL is ok. 
 > https://tools.ietf.org/html/rfc7231#section-7.1.2   -melanie
 >  
 >  [1] https://bugs.launchpad.net/python-novaclient/+bug/1776928
 >  
 >  
 >  
 >  
 >  __
 >  OpenStack Development Mailing List (not for usage questions)
 >  Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 >  http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 >  
 >  
 >  __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova]API update week 28-4

2018-07-04 Thread Ghanshyam Mann
Hi All,

Please find the Nova API highlights of this week. 

Weekly Office Hour:
===
We have restarted the Nova API discussion in the office hour. I have updated 
the wiki page with more information about office hours: 
https://wiki.openstack.org/wiki/Meetings/NovaAPI 

What we discussed this week:
- This was the first office hour after a long time.
- Collected the API-related BPs on the etherpad (rocky-nova-priorities-tracking) 
for review.
- Created the weekly bug report etherpad; we will track the numbers there. 
- Homework for the API subteam: review at least 3 in-progress bug patches. 
- From next week, we will do some online bug triage/review or discussion 
around ongoing BPs.

Planned Features:
==
Below are the API-related features for the Rocky cycle. The Nova API subteam 
will start reviewing those to give regular feedback. If anything is missing 
there, feel free to add it to the etherpad: 
https://etherpad.openstack.org/p/rocky-nova-priorities-tracking 

1. Servers IPs non-unique network names:
 - 
https://blueprints.launchpad.net/nova/+spec/servers-ips-non-unique-network-names
 - Spec update needs another +2 - https://review.openstack.org/#/c/558125/
 - 
https://review.openstack.org/#/q/topic:bp/servers-ips-non-unique-network-names+(status:open+OR+status:merged)
  
 - Weekly Progress: On hold; waiting for the spec update to merge first. 

2. Abort live migration in queued state:
- 
https://blueprints.launchpad.net/nova/+spec/abort-live-migration-in-queued-status
- 
https://review.openstack.org/#/q/topic:bp/abort-live-migration-in-queued-status+(status:open+OR+status:merged)
   
- Weekly Progress: Code is up for review. No reviews last week. 

3. Complex anti-affinity policies:
- https://blueprints.launchpad.net/nova/+spec/complex-anti-affinity-policies
- 
https://review.openstack.org/#/q/topic:bp/complex-anti-affinity-policies+(status:open+OR+status:merged)
  
- Weekly Progress: Code is up for review. A few reviews done. 

4. Volume multiattach enhancements:
- 
https://blueprints.launchpad.net/nova/+spec/volume-multiattach-enhancements
- 
https://review.openstack.org/#/q/topic:bp/volume-multiattach-enhancements+(status:open+OR+status:merged)
  
- Weekly Progress: Waiting to hear from mriedem about his WIP on the base patch 
- https://review.openstack.org/#/c/569649/3

5. API Extensions merge work
- https://blueprints.launchpad.net/nova/+spec/api-extensions-merge-rocky
- 
https://review.openstack.org/#/q/project:openstack/nova+branch:master+topic:bp/api-extensions-merge-rocky
 
- Weekly Progress: Good progress; 1/3 of the series is merged. 

Bugs:

We discussed in the office hour starting to review the in-progress bugs and 
bringing the number down. From next week, I will show the weekly progress on 
the bug numbers.
 
Current Bug Status:
Critical: 0
High importance: 2
By Status:
New: 0
Confirmed/Triage: 30
In-progress: 36
Incomplete: 4
=
Total: 70

NOTE: There might be some bugs which are not tagged 'api' or 'api-ref'; those 
are not in the above list. Please tag such bugs so that we can keep an eye on 
them. 
 
Ref:  https://etherpad.openstack.org/p/nova-api-weekly-bug-report

-gmann






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins

2018-06-28 Thread Ghanshyam Mann



  On Fri, 29 Jun 2018 00:05:09 +0900 Dmitry Tantsur  
wrote  
 > On 06/27/2018 03:17 AM, Ghanshyam Mann wrote:
 > > 
 > > 
 > > 
 > >    On Tue, 26 Jun 2018 23:12:30 +0900 Doug Hellmann 
 > >  wrote 
 > >   > Excerpts from Matthew Treinish's message of 2018-06-26 09:52:09 -0400:
 > >   > > On Tue, Jun 26, 2018 at 08:53:21AM -0400, Doug Hellmann wrote:
 > >   > > > Excerpts from Andrea Frittoli's message of 2018-06-26 13:35:11 
 > > +0100:
 > >   > > > > On Tue, 26 Jun 2018, 1:08 pm Thierry Carrez, 
 > >  wrote:
 > >   > > > >
 > >   > > > > > Dmitry Tantsur wrote:
 > >   > > > > > > [...]
 > >   > > > > > > My suggestion: tempest has to be compatible with all 
 > > supported releases
 > >   > > > > > > (of both services and plugins) OR be branched.
 > >   > > > > > > [...]
 > >   > > > > > I tend to agree with Dmitry... We have a model for things that 
 > > need
 > >   > > > > > release alignment, and that's the cycle-bound series. The 
 > > reason tempest
 > >   > > > > > is branchless was because there was no compatibility issue. If 
 > > the split
 > >   > > > > > of tempest plugins introduces a potential incompatibility, 
 > > then I would
 > >   > > > > > prefer aligning tempest to the existing model rather than 
 > > introduce a
 > >   > > > > > parallel tempest-specific cycle just so that tempest can stay
 > >   > > > > > release-independent...
 > >   > > > > >
 > >   > > > > > I seem to remember there were drawbacks in branching tempest, 
 > > though...
 > >   > > > > > Can someone with functioning memory brain cells summarize them 
 > > again ?
 > >   > > > > >
 > >   > > > >
 > >   > > > >
 > >   > > > > Branchless Tempest enforces api stability across branches.
 > >   > > >
 > >   > > > I'm sorry, but I'm having a hard time taking this statement 
 > > seriously
 > >   > > > when the current source of tension is that the Tempest API itself
 > >   > > > is breaking for its plugins.
 > >   > > >
 > >   > > > Maybe rather than talking about how to release compatible things
 > >   > > > together, we should go back and talk about why Tempest's API is 
 > > changing
 > >   > > > in a way that can't be made backwards-compatible. Can you give 
 > > some more
 > >   > > > detail about that?
 > >   > > >
 > >   > >
 > >   > > Well it's not, if it did that would violate all the stability 
 > > guarantees
 > >   > > provided by Tempest's library and plugin interface. I've not ever 
 > > heard of
 > >   > > these kind of backwards incompatibilities in those interfaces and we 
 > > go to
 > >   > > all effort to make sure we don't break them. Where did the idea that
 > >   > > backwards incompatible changes where being introduced come from?
 > >   >
 > >   > In his original post, gmann said, "There might be some changes in
 > >   > Tempest which might not work with older version of Tempest Plugins."
 > >   > I was surprised to hear that, but I'm not sure how else to interpret
 > >   > that statement.
 > > 
 > > I did not mean to say that Tempest will introduce the changes in backward 
 > > incompatible way which can break plugins. That cannot happen as all 
 > > plugins and tempest are branchless and they are being tested with master 
 > > Tempest so if we change anything backward incompatible then it break the 
 > > plugins gate. Even we have to remove any deprecated interfaces from 
 > > Tempest, we fix all plugins first like - 
 > > https://review.openstack.org/#/q/topic:remove-support-of-cinder-v1-api+(status:open+OR+status:merged)
 > > 
 > > What I mean to say here is that adding new or removing deprecated 
 > > interface in Tempest might not work with all released version or 
 > > unreleased Plugins. That point is from point of view of using Tempest and 
 > > Plugins in production cloud testing not gate(where we keep the 
 > > compatibility). Production Cloud user use Tempest cycle based version. 
 > > Pike based Cloud will be tested by Tempest 17.0.0 not latest version 
 > > (though latest version might work).
 > > 

Re: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins

2018-06-28 Thread Ghanshyam Mann



  On Thu, 28 Jun 2018 04:08:35 +0900 Sean McGinnis  
wrote  
 > > 
 > > There is no issue of backward incompatibility from Tempest and on Gate. 
 > > GATE
 > > is always good as it is going with mater version or minimum supported 
 > > version
 > > in plugins as you mentioned. We take care of all these things you mentioned
 > > which is our main goal also. 
 > > 
 > > But If we think from Cloud tester perspective where they use older version 
 > > of
 > > tempest for particular OpenStack release but there is no corresponding
 > > tag/version from plugins to use them for that OpenStack release. 
 > > 
 > > Idea is here to have a tag from Plugins also like Tempest does currently 
 > > for
 > > each OpenStack release so that user can pickup those tag and test their
 > > Complete Cloud. 
 > > 
 > 
 > Thanks for the further explanation Ghanshyam. So it's not so much that newer
 > versions of tempest may break the current repo plugins, it's more to the fact
 > that any random plugin that gets pulled in has no way of knowing if it can 
 > take
 > advantage of a potentially older version of tempest that had not yet 
 > introduced
 > something the plugin is relying on.
 > 
 > I think it makes sense for the tempest plugins to be following the
 > cycle-with-intermediary model. This would allow plugins to be released at any
 > point during a given cycle and would then have a way to match up a "release" 
 > of
 > the plugin.
 > 
 > Release repo deliverable placeholders are being proposed for all the tempest
 > plugin repos we could find. Thanks to Doug for pulling this all together:
 > 
 > https://review.openstack.org/#/c/578141/
 > 
 > Please comment there if you see any issues.

Thanks. That's the correct understanding and the goal of this thread, which 
is from the production cloud testing point of view, not just the *gate*. The 
cycle-with-intermediary model fulfills the requirement users asked for at the 
summit. 

Doug's patch LGTM.

-gmann

 > 
 > Sean
 > 
 > __
 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Nova API Office Hour

2018-06-26 Thread Ghanshyam Mann
Hi All,

Starting today, we will be hosting an office hour for Nova API discussions, 
covering Nova API priorities and API bug triage. I have updated the agenda 
and time information on the wiki page [1].

All are welcome to join. We will continue this every Wednesday at 06:00 UTC.

[1] https://wiki.openstack.org/wiki/Meetings/NovaAPI 


-gmann



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins

2018-06-26 Thread Ghanshyam Mann



  On Tue, 26 Jun 2018 19:32:59 +0900 Mehdi Abaakouk  
wrote  
 > Hi, 
 >  
 > I have never understood the branchless tempest thing. Making Tempest 
 > release is a great news for me. 
 >  
 > But about plugins... Tempest already provides a API for plugins. If you 
 > are going to break this API, why not using stable branches and 
 > deprecation process like any other software ? 
 >  
 > If you do that, plugin will be informed that Tempest will soon do a 
 > breaking change. Their can update their plugin code and raise the 
 > minimal tempest version required to work. 
 >  
 > Their can do that when they have times, and not because Tempest want to 
 > release a version soon. 
 >  
 > Also the stable branch/deprecation process is well known by the 
 > whole community. 

There is no issue of backward incompatibility from Tempest or on the gate. 
The gate is always fine, as it runs with the master version or the minimum 
supported version of the plugins, as you mentioned. We take care of all the 
things you mentioned; that is our main goal also. 

But if we think from the cloud tester's perspective: they use an older 
version of Tempest for a particular OpenStack release, but there is no 
corresponding tag/version from the plugins to use for that OpenStack release. 

The idea here is to have a tag from the plugins also, like Tempest currently 
does for each OpenStack release, so that users can pick up those tags and 
test their complete cloud. 
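
As an example of what those tags would give the tester (an illustrative 
sketch; only the Pike/17.0.0 and Rocky/19.0.0 pairings come from this thread, 
the other entry and the plugin name are hypothetical):

    # Map an OpenStack release to the coordinated Tempest/plugin tag.
    COORDINATED_TAGS = {
        'pike': '17.0.0',    # pairing mentioned in this thread
        'queens': '18.0.0',  # hypothetical
        'rocky': '19.0.0',   # the example used in this thread
    }

    def tag_for(release):
        # Under the proposal, the same tag applies to Tempest and to
        # every Tempest plugin.
        return COORDINATED_TAGS[release]

    print('pip install tempest==%s' % tag_for('rocky'))
    print('pip install congress-tempest-plugin==%s' % tag_for('rocky'))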


-gmann
 >  
 > And this will also allow them to release a version when their want. 
 >  
 > So I support making release of Tempest and Plugins, but do not support 
 > a coordinated release. 
 >  
 > Regards, 
 >  
 > On Tue, Jun 26, 2018 at 06:18:52PM +0900, Ghanshyam Mann wrote: 
 > >Hello Everyone, 
 > > 
 > >In Queens cycle,  community goal to split the Tempest Plugin has been 
 > >completed [1] and i think almost all the projects have separate repo for 
 > >tempest plugin [2]. Which means each tempest plugins are being separated 
 > >from their project release model.  Few projects have started the 
 > >independent release model for their plugins like kuryr-tempest-plugin, 
 > >ironic-tempest-plugin etc [3].  I think neutron-tempest-plugin also 
 > >planning as chatted with amotoki. 
 > > 
 > >There might be some changes in Tempest which might not work with older 
 > >version of Tempest Plugins.  For example, If I am testing any production 
 > >cloud which has Nova, Neutron, Cinder, Keystone , Aodh, Congress etc  i 
 > >will be using Tempest and Aodh's , Congress's Tempest plugins. With 
 > >Independent release model of each Tempest Plugins, there might be chance 
 > >that the Aodh's or Congress's Tempest plugin versions are not compatible 
 > >with latest/known Tempest versions. It will become hard to find the 
 > >compatible tag/release of Tempest and Tempest Plugins or in some cases i 
 > >might need to patch up the things. 
 > > 
 > >During QA feedback sessions at Vancouver Summit, there was feedback to 
 > >coordinating the release of all Tempest plugins and Tempest [4] (also 
 > >amotoki talked to me on this as neutron-tempest-plugin is planning their 
 > >first release). Idea is to release/tag all the Tempest plugins and Tempest 
 > >together so that specific release/tag can be identified as compatible 
 > >version of all the Plugins and Tempest for testing the complete stack. That 
 > >way user can get to know what version of Tempest Plugins is compatible with 
 > >what version of Tempest. 
 > > 
 > >For above use case, we need some coordinated release model among Tempest 
 > >and all the Tempest Plugins. There should be single release of all Tempest 
 > >Plugins with well defined tag whenever any Tempest release is happening.  
 > >For Example, Tempest version 19.0.0 is to mark the "support of the Rocky 
 > >release". When releasing the Tempest 19.0, we will release all the Tempest 
 > >plugins also to tag the compatibility of plugins with Tempest for "support 
 > >of the Rocky release". 
 > > 
 > >One way to make this coordinated release (just a initial thought): 
 > >1. Release Each Tempest Plugins whenever there is any major version release 
 > >of Tempest (like marking the support of OpenStack release in Tempest, EOL 
 > >of OpenStack release in Tempest) 
 > >1.1 Each plugin or Tempest can do their intermediate release of minor 
 > > version change which are in backward compatible way. 
 > >1.2 This coordinated Release can be started from latest Tempest Version 
 > > for simple reading.  Like if we start this coordinated release from 
 > > Tempest version 19.0.0 then, 
 > >each 

Re: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins

2018-06-26 Thread Ghanshyam Mann



  On Tue, 26 Jun 2018 19:12:33 +0900 Dmitry Tantsur  
wrote  
 > On 06/26/2018 11:57 AM, Ghanshyam Mann wrote:
 > > 
 > > 
 > > 
 > >    On Tue, 26 Jun 2018 18:37:42 +0900 Dmitry Tantsur 
 > >  wrote 
 > >   > On 06/26/2018 11:18 AM, Ghanshyam Mann wrote:
 > >   > > Hello Everyone,
 > >   > >
 > >   > > In Queens cycle,  community goal to split the Tempest Plugin has 
 > > been completed [1] and i think almost all the projects have separate repo 
 > > for tempest plugin [2]. Which means each tempest plugins are being 
 > > separated from their project release model.  Few projects have started the 
 > > independent release model for their plugins like kuryr-tempest-plugin, 
 > > ironic-tempest-plugin etc [3].  I think neutron-tempest-plugin also 
 > > planning as chatted with amotoki.
 > >   > >
 > >   > > There might be some changes in Tempest which might not work with 
 > > older version of Tempest Plugins.  For example, If I am testing any 
 > > production cloud which has Nova, Neutron, Cinder, Keystone , Aodh, 
 > > Congress etc  i will be using Tempest and Aodh's , Congress's Tempest 
 > > plugins. With Independent release model of each Tempest Plugins, there 
 > > might be chance that the Aodh's or Congress's Tempest plugin versions are 
 > > not compatible with latest/known Tempest versions. It will become hard to 
 > > find the compatible tag/release of Tempest and Tempest Plugins or in some 
 > > cases i might need to patch up the things.
 > >   >
 > >   > FWIW this is solved by stable branches for all other projects. If we 
 > > cannot keep
 > >   > Tempest compatible with all supported branches, we should back off our 
 > > decision
 > >   > to make it branchless. The very nature of being branchless implies 
 > > being
 > >   > compatible with all supported releases.
 > >   >
 > >   > >
 > >   > > During QA feedback sessions at Vancouver Summit, there was feedback 
 > > to coordinating the release of all Tempest plugins and Tempest [4] (also 
 > > amotoki talked to me on this as neutron-tempest-plugin is planning their 
 > > first release). Idea is to release/tag all the Tempest plugins and Tempest 
 > > together so that specific release/tag can be identified as compatible 
 > > version of all the Plugins and Tempest for testing the complete stack. 
 > > That way user can get to know what version of Tempest Plugins is 
 > > compatible with what version of Tempest.
 > >   > >
 > >   > > For above use case, we need some coordinated release model among 
 > > Tempest and all the Tempest Plugins. There should be single release of all 
 > > Tempest Plugins with well defined tag whenever any Tempest release is 
 > > happening.  For Example, Tempest version 19.0.0 is to mark the "support of 
 > > the Rocky release". When releasing the Tempest 19.0, we will release all 
 > > the Tempest plugins also to tag the compatibility of plugins with Tempest 
 > > for "support of the Rocky release".
 > >   > >
 > >   > > One way to make this coordinated release (just a initial thought):
 > >   > > 1. Release Each Tempest Plugins whenever there is any major version 
 > > release of Tempest (like marking the support of OpenStack release in 
 > > Tempest, EOL of OpenStack release in Tempest)
 > >   > >  1.1 Each plugin or Tempest can do their intermediate release of 
 > > minor version change which are in backward compatible way.
 > >   > >  1.2 This coordinated Release can be started from latest Tempest 
 > > Version for simple reading.  Like if we start this coordinated release 
 > > from Tempest version 19.0.0 then,
 > >   > >  each plugins will be released as 19.0.0 and so on.
 > >   > >
 > >   > > Giving the above background and use case of this coordinated release,
 > >   > > A) I would like to ask each plugins owner if you are agree on this 
 > > coordinated release?  If no, please give more feedback or issue we can 
 > > face due to this coordinated release.
 > >   >
 > >   > Disclaimer: I'm not the PTL.
 > >   >
 > >   > Similarly to Luigi, I don't feel well about forcing a plugin release 
 > > at the same
 > >   > time as a tempest release, UNLESS tempest folks are going to 
 > > coordinate their
 > >   > releases with all how-many-do-we-have plugins. What I'd like to avoid 
 > > is cutting
 > >   > a release in the middle of a patch chain or so

Re: [openstack-dev] [Openstack-operators] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins

2018-06-26 Thread Ghanshyam Mann
  On Wed, 27 Jun 2018 10:19:17 +0900 Ghanshyam Mann 
 wrote  
 > ++ operator ML
 > 
 >   On Wed, 27 Jun 2018 10:17:33 +0900 Ghanshyam Mann 
 >  wrote  
 >  >  
 >  >  
 >  >  
 >  >   On Tue, 26 Jun 2018 23:12:30 +0900 Doug Hellmann 
 >  wrote   
 >  >  > Excerpts from Matthew Treinish's message of 2018-06-26 09:52:09 -0400: 
 >  >  > > On Tue, Jun 26, 2018 at 08:53:21AM -0400, Doug Hellmann wrote: 
 >  >  > > > Excerpts from Andrea Frittoli's message of 2018-06-26 13:35:11 
 > +0100: 
 >  >  > > > > On Tue, 26 Jun 2018, 1:08 pm Thierry Carrez, 
 >  wrote: 
 >  >  > > > >  
 >  >  > > > > > Dmitry Tantsur wrote: 
 >  >  > > > > > > [...] 
 >  >  > > > > > > My suggestion: tempest has to be compatible with all 
 > supported releases 
 >  >  > > > > > > (of both services and plugins) OR be branched. 
 >  >  > > > > > > [...] 
 >  >  > > > > > I tend to agree with Dmitry... We have a model for things that 
 > need 
 >  >  > > > > > release alignment, and that's the cycle-bound series. The 
 > reason tempest 
 >  >  > > > > > is branchless was because there was no compatibility issue. If 
 > the split 
 >  >  > > > > > of tempest plugins introduces a potential incompatibility, 
 > then I would 
 >  >  > > > > > prefer aligning tempest to the existing model rather than 
 > introduce a 
 >  >  > > > > > parallel tempest-specific cycle just so that tempest can stay 
 >  >  > > > > > release-independent... 
 >  >  > > > > > 
 >  >  > > > > > I seem to remember there were drawbacks in branching tempest, 
 > though... 
 >  >  > > > > > Can someone with functioning memory brain cells summarize them 
 > again ? 
 >  >  > > > > > 
 >  >  > > > >  
 >  >  > > > >  
 >  >  > > > > Branchless Tempest enforces api stability across branches. 
 >  >  > > >  
 >  >  > > > I'm sorry, but I'm having a hard time taking this statement 
 > seriously 
 >  >  > > > when the current source of tension is that the Tempest API itself 
 >  >  > > > is breaking for its plugins. 
 >  >  > > >  
 >  >  > > > Maybe rather than talking about how to release compatible things 
 >  >  > > > together, we should go back and talk about why Tempest's API is 
 > changing 
 >  >  > > > in a way that can't be made backwards-compatible. Can you give 
 > some more 
 >  >  > > > detail about that? 
 >  >  > > >  
 >  >  > >  
 >  >  > > Well it's not, if it did that would violate all the stability 
 > guarantees 
 >  >  > > provided by Tempest's library and plugin interface. I've not ever 
 > heard of 
 >  >  > > these kind of backwards incompatibilities in those interfaces and we 
 > go to 
 >  >  > > all effort to make sure we don't break them. Where did the idea that 
 >  >  > > backwards incompatible changes where being introduced come from? 
 >  >  >  
 >  >  > In his original post, gmann said, "There might be some changes in 
 >  >  > Tempest which might not work with older version of Tempest Plugins." 
 >  >  > I was surprised to hear that, but I'm not sure how else to interpret 
 >  >  > that statement. 
 >  >  
 >  > I did not mean to say that Tempest will introduce the changes in backward 
 > incompatible way which can break plugins. That cannot happen as all plugins 
 > and tempest are branchless and they are being tested with master Tempest so 
 > if we change anything backward incompatible then it break the plugins gate. 
 > Even we have to remove any deprecated interfaces from Tempest, we fix all 
 > plugins first like - 
 > https://review.openstack.org/#/q/topic:remove-support-of-cinder-v1-api+(status:open+OR+status:merged)
 >   
 >  >  
 >  > What I mean to say here is that adding new or removing deprecated 
 > interface in Tempest might not work with all released version or unreleased 
 > Plugins. That point is from point of view of using Tempest and Plugins in 
 > production cloud testing not gate(where we keep the compatibility). 
 > Production Cloud user use Tempest cycle based version. Pike based Cloud will 
 > be tested by Tempest 17.0.0 not latest version (though latest version might 
 > work).  
 >  >  
 >  >

Re: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins

2018-06-26 Thread Ghanshyam Mann
++ operator ML

  On Wed, 27 Jun 2018 10:17:33 +0900 Ghanshyam Mann 
 wrote  
 >  
 >  
 >  
 >   On Tue, 26 Jun 2018 23:12:30 +0900 Doug Hellmann 
 >  wrote   
 >  > Excerpts from Matthew Treinish's message of 2018-06-26 09:52:09 -0400: 
 >  > > On Tue, Jun 26, 2018 at 08:53:21AM -0400, Doug Hellmann wrote: 
 >  > > > Excerpts from Andrea Frittoli's message of 2018-06-26 13:35:11 +0100: 
 >  > > > > On Tue, 26 Jun 2018, 1:08 pm Thierry Carrez, 
 >  wrote: 
 >  > > > >  
 >  > > > > > Dmitry Tantsur wrote: 
 >  > > > > > > [...] 
 >  > > > > > > My suggestion: tempest has to be compatible with all supported 
 > releases 
 >  > > > > > > (of both services and plugins) OR be branched. 
 >  > > > > > > [...] 
 >  > > > > > I tend to agree with Dmitry... We have a model for things that 
 > need 
 >  > > > > > release alignment, and that's the cycle-bound series. The reason 
 > tempest 
 >  > > > > > is branchless was because there was no compatibility issue. If 
 > the split 
 >  > > > > > of tempest plugins introduces a potential incompatibility, then I 
 > would 
 >  > > > > > prefer aligning tempest to the existing model rather than 
 > introduce a 
 >  > > > > > parallel tempest-specific cycle just so that tempest can stay 
 >  > > > > > release-independent... 
 >  > > > > > 
 >  > > > > > I seem to remember there were drawbacks in branching tempest, 
 > though... 
 >  > > > > > Can someone with functioning memory brain cells summarize them 
 > again ? 
 >  > > > > > 
 >  > > > >  
 >  > > > >  
 >  > > > > Branchless Tempest enforces api stability across branches. 
 >  > > >  
 >  > > > I'm sorry, but I'm having a hard time taking this statement seriously 
 >  > > > when the current source of tension is that the Tempest API itself 
 >  > > > is breaking for its plugins. 
 >  > > >  
 >  > > > Maybe rather than talking about how to release compatible things 
 >  > > > together, we should go back and talk about why Tempest's API is 
 > changing 
 >  > > > in a way that can't be made backwards-compatible. Can you give some 
 > more 
 >  > > > detail about that? 
 >  > > >  
 >  > >  
 >  > > Well it's not, if it did that would violate all the stability 
 > guarantees 
 >  > > provided by Tempest's library and plugin interface. I've not ever heard 
 > of 
 >  > > these kind of backwards incompatibilities in those interfaces and we go 
 > to 
 >  > > all effort to make sure we don't break them. Where did the idea that 
 >  > > backwards incompatible changes where being introduced come from? 
 >  >  
 >  > In his original post, gmann said, "There might be some changes in 
 >  > Tempest which might not work with older version of Tempest Plugins." 
 >  > I was surprised to hear that, but I'm not sure how else to interpret 
 >  > that statement. 
 >  
 > I did not mean to say that Tempest will introduce the changes in backward 
 > incompatible way which can break plugins. That cannot happen as all plugins 
 > and tempest are branchless and they are being tested with master Tempest so 
 > if we change anything backward incompatible then it break the plugins gate. 
 > Even we have to remove any deprecated interfaces from Tempest, we fix all 
 > plugins first like - 
 > https://review.openstack.org/#/q/topic:remove-support-of-cinder-v1-api+(status:open+OR+status:merged)
 >   
 >  
 > What I mean to say here is that adding new or removing deprecated interface 
 > in Tempest might not work with all released version or unreleased Plugins. 
 > That point is from point of view of using Tempest and Plugins in production 
 > cloud testing not gate(where we keep the compatibility). Production Cloud 
 > user use Tempest cycle based version. Pike based Cloud will be tested by 
 > Tempest 17.0.0 not latest version (though latest version might work).  
 >  
 > This thread is not just for gate testing point of view (which seems to be 
 > always interpreted), this is more for user using Tempest and Plugins for 
 > their cloud testing. I am looping  operator mail list also which i forgot in 
 > initial post.  
 >  
 > We do not have any tag/release from plugins to know what version of plugin 
 > can work with what version of tempest. For Example If There is new interface 
 > intro

Re: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins

2018-06-26 Thread Ghanshyam Mann



  On Tue, 26 Jun 2018 23:12:30 +0900 Doug Hellmann  
wrote  
 > Excerpts from Matthew Treinish's message of 2018-06-26 09:52:09 -0400:
 > > On Tue, Jun 26, 2018 at 08:53:21AM -0400, Doug Hellmann wrote:
 > > > Excerpts from Andrea Frittoli's message of 2018-06-26 13:35:11 +0100:
 > > > > On Tue, 26 Jun 2018, 1:08 pm Thierry Carrez,  
 > > > > wrote:
 > > > > 
 > > > > > Dmitry Tantsur wrote:
 > > > > > > [...]
 > > > > > > My suggestion: tempest has to be compatible with all supported 
 > > > > > > releases
 > > > > > > (of both services and plugins) OR be branched.
 > > > > > > [...]
 > > > > > I tend to agree with Dmitry... We have a model for things that need
 > > > > > release alignment, and that's the cycle-bound series. The reason 
 > > > > > tempest
 > > > > > is branchless was because there was no compatibility issue. If the 
 > > > > > split
 > > > > > of tempest plugins introduces a potential incompatibility, then I 
 > > > > > would
 > > > > > prefer aligning tempest to the existing model rather than introduce a
 > > > > > parallel tempest-specific cycle just so that tempest can stay
 > > > > > release-independent...
 > > > > >
 > > > > > I seem to remember there were drawbacks in branching tempest, 
 > > > > > though...
 > > > > > Can someone with functioning memory brain cells summarize them again 
 > > > > > ?
 > > > > >
 > > > > 
 > > > > 
 > > > > Branchless Tempest enforces api stability across branches.
 > > > 
 > > > I'm sorry, but I'm having a hard time taking this statement seriously
 > > > when the current source of tension is that the Tempest API itself
 > > > is breaking for its plugins.
 > > > 
 > > > Maybe rather than talking about how to release compatible things
 > > > together, we should go back and talk about why Tempest's API is changing
 > > > in a way that can't be made backwards-compatible. Can you give some more
 > > > detail about that?
 > > > 
 > > 
 > > Well it's not, if it did that would violate all the stability guarantees
 > > provided by Tempest's library and plugin interface. I've not ever heard of
 > > these kind of backwards incompatibilities in those interfaces and we go to
 > > all effort to make sure we don't break them. Where did the idea that
 > > backwards incompatible changes where being introduced come from?
 > 
 > In his original post, gmann said, "There might be some changes in
 > Tempest which might not work with older version of Tempest Plugins."
 > I was surprised to hear that, but I'm not sure how else to interpret
 > that statement.

I did not mean to say that Tempest will introduce changes in a backward 
incompatible way which can break plugins. That cannot happen: all plugins and 
Tempest are branchless and they are being tested with master Tempest, so if 
we changed anything backward incompatible it would break the plugins' gate. 
Even when we have to remove a deprecated interface from Tempest, we fix all 
plugins first, like - 
https://review.openstack.org/#/q/topic:remove-support-of-cinder-v1-api+(status:open+OR+status:merged)
 

What I meant to say here is that adding a new interface or removing a 
deprecated one in Tempest might not work with all released versions or 
unreleased plugins. That point is from the point of view of using Tempest and 
plugins in production cloud testing, not the gate (where we keep the 
compatibility). Production cloud users use the Tempest cycle-based version: a 
Pike-based cloud will be tested by Tempest 17.0.0, not the latest version 
(though the latest version might work). 
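
As a rough sketch of that deprecate-first workflow (hypothetical names, not 
Tempest's actual interfaces):

    import warnings

    def list_volumes(client_get):
        # current interface
        return client_get('/volumes')

    def list_volumes_v1(client_get):
        # Deprecated alias: it keeps working and emits a warning; it is
        # removed only after every plugin has been fixed to call
        # list_volumes instead.
        warnings.warn('list_volumes_v1 is deprecated; use list_volumes',
                      DeprecationWarning)
        return list_volumes(client_get)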

This thread is not just about the gate testing point of view (which seems to 
be how it is always interpreted); it is more about users using Tempest and 
plugins for their cloud testing. I am looping in the operator mailing list, 
which I forgot in the initial post. 

We do not have any tag/release from the plugins to know what version of a 
plugin can work with what version of Tempest. For example, suppose a new 
interface is introduced by Tempest 19.0.0 and pluginX starts using it. That 
can create issues for pluginX in both release models: 1. plugins with no 
release (I will call this PluginNR), 2. plugins with an independent release 
(I will call it PluginIR). 

Users (not the gate) will face the below issues:
- A user cannot use PluginNR with Tempest <19.0.0 (where that new interface 
was not present), and there is no PluginNR release/tag, as it is unreleased 
and unbranched software. 
- A user cannot find a particular PluginIR tag/release which works with 
Tempest <19.0.0 (where that new interface was not present). The only way for 
the user to make it work is to manually find the PluginIR tag/commit from 
before PluginIR started consuming the new interface (see the illustration 
after the diagram). 

Let me make it more clear via diagram: 

 PluginNR    PluginIR
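
A rough illustration of the PluginIR failure mode above (the helper name is 
hypothetical):

    try:
        # suppose this helper only exists from Tempest 19.0.0 onwards
        from tempest.lib.common import new_helper  # hypothetical module
    except ImportError:
        print('this plugin needs Tempest >= 19.0.0; pick a plugin '
              'tag/commit from before it started using new_helper')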


Re: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins

2018-06-26 Thread Ghanshyam Mann



  On Tue, 26 Jun 2018 18:37:42 +0900 Dmitry Tantsur  
wrote  
 > On 06/26/2018 11:18 AM, Ghanshyam Mann wrote:
 > > Hello Everyone,
 > > 
 > > In Queens cycle,  community goal to split the Tempest Plugin has been 
 > > completed [1] and i think almost all the projects have separate repo for 
 > > tempest plugin [2]. Which means each tempest plugins are being separated 
 > > from their project release model.  Few projects have started the 
 > > independent release model for their plugins like kuryr-tempest-plugin, 
 > > ironic-tempest-plugin etc [3].  I think neutron-tempest-plugin also 
 > > planning as chatted with amotoki.
 > > 
 > > There might be some changes in Tempest which might not work with older 
 > > version of Tempest Plugins.  For example, If I am testing any production 
 > > cloud which has Nova, Neutron, Cinder, Keystone , Aodh, Congress etc  i 
 > > will be using Tempest and Aodh's , Congress's Tempest plugins. With 
 > > Independent release model of each Tempest Plugins, there might be chance 
 > > that the Aodh's or Congress's Tempest plugin versions are not compatible 
 > > with latest/known Tempest versions. It will become hard to find the 
 > > compatible tag/release of Tempest and Tempest Plugins or in some cases i 
 > > might need to patch up the things.
 > 
 > FWIW this is solved by stable branches for all other projects. If we cannot 
 > keep 
 > Tempest compatible with all supported branches, we should back off our 
 > decision 
 > to make it branchless. The very nature of being branchless implies being 
 > compatible with all supported releases.
 > 
 > > 
 > > During QA feedback sessions at Vancouver Summit, there was feedback to 
 > > coordinating the release of all Tempest plugins and Tempest [4] (also 
 > > amotoki talked to me on this as neutron-tempest-plugin is planning their 
 > > first release). Idea is to release/tag all the Tempest plugins and Tempest 
 > > together so that specific release/tag can be identified as compatible 
 > > version of all the Plugins and Tempest for testing the complete stack. 
 > > That way user can get to know what version of Tempest Plugins is 
 > > compatible with what version of Tempest.
 > > 
 > > For above use case, we need some coordinated release model among Tempest 
 > > and all the Tempest Plugins. There should be single release of all Tempest 
 > > Plugins with well defined tag whenever any Tempest release is happening.  
 > > For Example, Tempest version 19.0.0 is to mark the "support of the Rocky 
 > > release". When releasing the Tempest 19.0, we will release all the Tempest 
 > > plugins also to tag the compatibility of plugins with Tempest for "support 
 > > of the Rocky release".
 > > 
 > > One way to make this coordinated release (just a initial thought):
 > > 1. Release Each Tempest Plugins whenever there is any major version 
 > > release of Tempest (like marking the support of OpenStack release in 
 > > Tempest, EOL of OpenStack release in Tempest)
 > >  1.1 Each plugin or Tempest can do their intermediate release of minor 
 > > version change which are in backward compatible way.
 > >  1.2 This coordinated Release can be started from latest Tempest 
 > > Version for simple reading.  Like if we start this coordinated release 
 > > from Tempest version 19.0.0 then,
 > >  each plugins will be released as 19.0.0 and so on.
 > > 
 > > Giving the above background and use case of this coordinated release,
 > > A) I would like to ask each plugins owner if you are agree on this 
 > > coordinated release?  If no, please give more feedback or issue we can 
 > > face due to this coordinated release.
 > 
 > Disclaimer: I'm not the PTL.
 > 
 > Similarly to Luigi, I don't feel well about forcing a plugin release at the 
 > same 
 > time as a tempest release, UNLESS tempest folks are going to coordinate 
 > their 
 > releases with all how-many-do-we-have plugins. What I'd like to avoid is 
 > cutting 
 > a release in the middle of a patch chain or some refactoring just because 
 > tempest happened to have a release right now.

I understand your point. But we can avoid that case if we coordinate on major 
version bumps only. As I mentioned in point 1.2, Tempest and the Tempest 
plugins can do their intermediate releases anytime; those are nothing but 
backward compatible releases. In this proposed model, we would do a 
coordinated release for major version bumps only, which happens only on an 
OpenStack release or the EOL of a stable branch. 
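
In other words, something like this rule (an illustrative sketch):

    # Plugins may cut minor/patch releases anytime; major version bumps
    # are reserved for the coordinated OpenStack release/EOL tags, so
    # the major number stays aligned with Tempest.
    def is_valid_intermediate_release(prev, new):
        # prev/new are (major, minor, patch) tuples
        return new[0] == prev[0] and new[1:] > prev[1:]

    print(is_valid_intermediate_release((19, 0, 0), (19, 1, 0)))  # True
    print(is_valid_intermediate_release((19, 1, 0), (20, 0, 0)))  # False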

Or I am all open to another release model which can be best suited for all 
plugins.

Re: [openstack-dev] [qa][tempest-plugins][release][tc][ptl]: Coordinated Release Model proposal for Tempest & Tempest Plugins

2018-06-26 Thread Ghanshyam Mann



  On Tue, 26 Jun 2018 18:28:03 +0900 Luigi Toscano  
wrote  
 > On Tuesday, 26 June 2018 11:18:52 CEST Ghanshyam Mann wrote: 
 > > Hello Everyone, 
 > >  
 > > In Queens cycle,  community goal to split the Tempest Plugin has been 
 > > completed [1] and i think almost all the projects have separate repo for 
 > > tempest plugin [2]. Which means each tempest plugins are being separated 
 > > from their project release model.  Few projects have started the 
 > > independent release model for their plugins like kuryr-tempest-plugin, 
 > > ironic-tempest-plugin etc [3].  I think neutron-tempest-plugin also 
 > > planning as chatted with amotoki. 
 > >  
 > > There might be some changes in Tempest which might not work with older 
 > > version of Tempest Plugins.  For example, If I am testing any production 
 > > cloud which has Nova, Neutron, Cinder, Keystone , Aodh, Congress etc  i 
 > > will be using Tempest and Aodh's , Congress's Tempest plugins. With 
 > > Independent release model of each Tempest Plugins, there might be chance 
 > > that the Aodh's or Congress's Tempest plugin versions are not compatible 
 > > with latest/known Tempest versions. It will become hard to find the 
 > > compatible tag/release of Tempest and Tempest Plugins or in some cases i 
 > > might need to patch up the things. 
 > >  
 > > During QA feedback sessions at Vancouver Summit, there was feedback to 
 > > coordinating the release of all Tempest plugins and Tempest [4] (also 
 > > amotoki talked to me on this as neutron-tempest-plugin is planning their 
 > > first release). Idea is to release/tag all the Tempest plugins and Tempest 
 > > together so that specific release/tag can be identified as compatible 
 > > version of all the Plugins and Tempest for testing the complete stack. 
 > > That 
 > > way user can get to know what version of Tempest Plugins is compatible 
 > > with 
 > > what version of Tempest. 
 > >  
 > > For above use case, we need some coordinated release model among Tempest 
 > > and 
 > > all the Tempest Plugins. There should be single release of all Tempest 
 > > Plugins with well defined tag whenever any Tempest release is happening.  
 > > For Example, Tempest version 19.0.0 is to mark the "support of the Rocky 
 > > release". When releasing the Tempest 19.0, we will release all the Tempest 
 > > plugins also to tag the compatibility of plugins with Tempest for "support 
 > > of the Rocky release". 
 > >  
 > > One way to make this coordinated release (just a initial thought): 
 > > 1. Release Each Tempest Plugins whenever there is any major version 
 > > release 
 > > of Tempest (like marking the support of OpenStack release in Tempest, EOL 
 > > of OpenStack release in Tempest) 1.1 Each plugin or Tempest can do their 
 > > intermediate release of minor version change which are in backward 
 > > compatible way. 1.2 This coordinated Release can be started from latest 
 > > Tempest Version for simple reading.  Like if we start this coordinated 
 > > release from Tempest version 19.0.0 then, each plugins will be released as 
 > > 19.0.0 and so on. 
 > >  
 > > Giving the above background and use case of this coordinated release, 
 > > A) I would like to ask each plugins owner if you are agree on this 
 > > coordinated release?  If no, please give more feedback or issue we can 
 > > face 
 > > due to this coordinated release. 
 > >  
 >  
 > The Sahara PTL may disagree with me, but I disagree with forcing each team 
 > to  
 > release in a coordinate model. 
 >  
 > I already take care of releasing sahara-tests, which contains both the 
 > tempest  
 > plugin and the scenario tests, when a new major version of OpenStack is  
 > released, keeping the compatibility with the relevant versions of Tempest. 
 >  
 > tl;dr I agree with having Tempest plugins follow the same lifecycle of  
 > Tempest, but please allow me to do so manually. 

But with a coordinated release, we can make sure we have particular tags 
which can be used for complete OpenStack testing. With the independent 
release model, there is no guarantee that all Tempest plugins will be 
compatible with a given Tempest version. 
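
With such tags, a cloud tester could sanity-check an installation with 
something like this (the plugin package names are hypothetical):

    import pkg_resources

    COORDINATED_TAG = '19.0.0'  # e.g. the 'support of Rocky' tag
    for name in ('tempest', 'congress-tempest-plugin',
                 'aodh-tempest-plugin'):
        try:
            have = pkg_resources.get_distribution(name).version
        except pkg_resources.DistributionNotFound:
            have = 'not installed'
        print('%s: want %s, have %s' % (name, COORDINATED_TAG, have))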

-gmann  

 >  
 >  
 > --  
 > Luigi 
 >  
 >  
 > 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

