Re: [openstack-dev] [tricircle]Nominating Victor Morales as Tricircle Core

2017-03-01 Thread joehuang
Thank you all for your voting.

Victor, you are now included in the core team.

Best Regards
Chaoyi Huang (joehuang)



From: Yipei Niu [newy...@gmail.com]
Sent: 01 March 2017 15:06
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tricircle]Nominating Victor Morales as Tricircle 
Core

+1.


From: joehuang
Sent: 01 March 2017 11:44
To: OpenStack Development Mailing List (not for usage questions)
Subject: RE: [openstack-dev] [tricircle]Nominating Victor Morales as Tricircle 
Core

+1 from me.

Best Regards
Chaoyi Huang (joehuang)

From: Vega Cai [luckyveg...@gmail.com]
Sent: 01 March 2017 11:42
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tricircle]Nominating Victor Morales as Tricircle 
Core

+1

Zhiyuan

On Wed, 1 Mar 2017 at 11:41 Devale, Sindhu wrote:
+1

Sindhu

From: joehuang
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, February 28, 2017 at 8:38 PM
To: openstack-dev
Subject: [openstack-dev] [tricircle]Nominating Victor Morales as Tricircle Core

Hi Team,

Victor Morales has made many review contributions[1] to Tricircle since the 
Ocata cycle, and he also created the python-tricircleclient sub-project[2]. I 
would like to nominate him to be a Tricircle core reviewer. I really think his 
experience will help us substantially improve Tricircle.

 It's now time to vote :)

[1] http://stackalytics.com/report/contribution/tricircle/120
[2] https://git.openstack.org/cgit/openstack/python-tricircleclient/


Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
--
BR
Zhiyuan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] PBR 2.0.0 release *may* cause gate failures

2017-03-01 Thread Tony Breeds
On Thu, Mar 02, 2017 at 06:07:47PM +1100, Tony Breeds wrote:
> On Thu, Mar 02, 2017 at 05:01:43PM +1100, Tony Breeds wrote:
> 
> > And yes there are now plenty of pep8 jobs that are failing with PBR 2.0.0
> > 
> > We can't revert the requirements change that landed which means that 
> > projects
> > using old hacking versions are going to need an update :(
> 
> One option may be to release 0.10.3 of hacking that removes the cap on pbr.  I
> think this would fix the issues on master but I *think* it will cause problems
> on, at least, stable/mitaka where 0.10.x is still the preferred version.
> 
> For projects that have constraints support things will be fine as the version
> selected for PBR will come from upper-constraints.txt and hacking from
> requirements so those projects will get pbr == 1.8.1 and hacking 0.10.3 
> #winning.
> 
> For projects that don't have constraints support which is still quite a few[1]
> they'll end up with pbr 2.0.0 which will almost certainly break things.
> Probably not in behaviour but more likely with similar VersionConflicts.

I know I'm talking to myself.

A project on $branch without constraints is going to get pbr 2.0.0 and then hit
version conflicts with projects that have pbr <2.0.0 caps *anyway*, regardless
of what hacking says, right?

So removing the pbr cap in hacking doesn't make things worse for stable
branches but it does make things better for master?

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] PBR 2.0.0 release *may* cause gate failures

2017-03-01 Thread Tony Breeds
On Thu, Mar 02, 2017 at 05:01:43PM +1100, Tony Breeds wrote:

> And yes there are now plenty of pep8 jobs that are failing with PBR 2.0.0
> 
> We can't revert the requirements change that landed which means that projects
> using old hacking versions are going to need an update :(

One option may be to release 0.10.3 of hacking that removes the cap on pbr.  I
think this would fix the issues on master but I *think* it will cause problems
on, at least, stable/mitaka where 0.10.x is still the preferred version.

For projects that have constraints support things will be fine as the version
selected for PBR will come from upper-constraints.txt and hacking from
requirements so those projects will get pbr == 1.8.1 and hacking 0.10.3 
#winning.
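
For reference, the constraints hook-up in a consuming project is roughly the
following tox.ini snippet; the URL is just the usual requirements location and
is shown here for illustration only:

    [testenv]
    install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}

With that in place pip resolves pbr to whatever upper-constraints.txt pins
(1.8.1 in the scenario above) while hacking is still picked up from
requirements.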

For projects that don't have constraints support, which is still quite a few[1],
they'll end up with pbr 2.0.0, which will almost certainly break things.
Probably not in behaviour, but more likely with similar VersionConflicts.

Yours Tony.

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-March/113175.html


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][swg] per-project "Business only" moderated mailing lists

2017-03-01 Thread Mike Perez
On 13:04 Feb 27, Clint Byrum wrote:
> Excerpts from Doug Hellmann's message of 2017-02-27 15:43:12 -0500:
> > 
> > As a person who sends a lot of process-driven email to this list,
> > it is not working for my needs to communicate with others.
> > 
> > Over the past few cycles when I was the release PTL, I always had
> > a couple of PTLs say there was too much email on this list for them
> > to read, and that they had not read my instructions for managing
> > releases. That resulted in us having to train folks at the last
> > minute, remind them of deadlines, deal with them missing deadlines,
> > and otherwise increased the release team's workload.
> > 
> > It is possible the situation will improve now that the automation
> > work is mostly complete and we expect to see fewer significant
> > changes in the release workflow. That still leaves quite a few
> > people regularly surprised by deadlines, though.
> > 
> 
> The problem above is really the crux of it. Whether or not you can keep
> up with the mailing list can be an unknown unknown. Even now, those
> who can't actually handle the mailing list traffic are in fact likely
> missing this thread about whether or not people can handle the mailing
> list traffic (credit fungi for pointing out this irony to me on IRC).

I feel like this subject comes up in different forms.

FWIW, the dev digest does cover information like release deadlines, elections,
etc. Here's an example:

https://www.openstack.org/blog/2017/01/openstack-developer-mailing-list-digest-20170120/

For anyone that feels like the current ML system is not working and you're
missing important information, do yourself a favor and at least double check
the digest. Suggestions are welcome!


-- 
Mike Perez


pgpPPZaNst4PF.pgp
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Proposing Michal Gershenzon to the core team

2017-03-01 Thread Renat Akhmerov
Thank you guys!

Michal, you’re now included in the core team.

Renat Akhmerov
@Nokia

> On 2 Mar 2017, at 03:50, Lingxian Kong  wrote:
> 
> +1, she indeed has been making great contributions to Mistral, welcome, Michal 
> :-)
> 
> 
> Cheers,
> Lingxian Kong (Larry)
> 
> On Thu, Mar 2, 2017 at 5:47 AM, Renat Akhmerov  > wrote:
> Hi,
> 
> Based on the stats of Michal Gershenzon in the Ocata cycle I’d like to promote 
> her to the core team.
> Michal works at Nokia CloudBand, and being a CloudBand engineer she knows 
> Mistral very well as a user. Behind the scenes she has helped find a lot of 
> bugs and make countless improvements, especially in performance.
> 
> Overall, she is a deep thinker, cares about details, always has an unusual 
> angle of view on any
> technical problem. She is one of a few people that I’m aware of who I could 
> call a Mistral expert.
> She also participates in almost every community meeting in IRC.
> 
> In Ocata she improved her statistics pretty significantly (e.g. ~60 reviews 
> although the cycle was
> very short) and is keeping up the good pace now. Also, Michal is officially 
> planning to allocate
> more time for upstream development in Pike
> 
> I believe Michal would be a great addition for the Mistral core team.
> 
> Please let me know if you agree with that.
> 
> Thanks
> 
> [1] 
> http://stackalytics.com/?module=mistral-group=ocata_id=michal-gershenzon
>  
> 
> 
> Renat Akhmerov
> @Nokia
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] PBR 2.0.0 release *may* cause gate failures

2017-03-01 Thread Tony Breeds
On Thu, Mar 02, 2017 at 04:41:14PM +1100, Tony Breeds wrote:
> On Thu, Mar 02, 2017 at 03:48:05PM +1100, Ian Wienand wrote:
> > On 03/02/2017 01:13 AM, Doug Hellmann wrote:
> > > Tony identified caps in 5 OpenStack community projects (see [1]) as well
> > > as powervm and python-jsonpath-rw-ext. Pull requests to those other
> > > projects are linked from the bug [2].
> > 
> > > [1] https://review.openstack.org/#/q/topic:bug/1668848
> > 
> > Am I nuts or was pbr itself the only one forgotten?
> 
> You're quite right.  That points out that there are probably lots of repos still
> with an old hacking that need to be updated.

And yes there are now plenty of pep8 jobs that are failing with PBR 2.0.0

We can't revert the requirements change that landed which means that projects
using old hacking versions are going to need an update :(

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] PBR 2.0.0 release *may* cause gate failures

2017-03-01 Thread Tony Breeds
On Thu, Mar 02, 2017 at 03:48:05PM +1100, Ian Wienand wrote:
> On 03/02/2017 01:13 AM, Doug Hellmann wrote:
> > Tony identified caps in 5 OpenStack community projects (see [1]) as well
> > as powervm and python-jsonpath-rw-ext. Pull requests to those other
> > projects are linked from the bug [2].
> 
> > [1] https://review.openstack.org/#/q/topic:bug/1668848
> 
> Am I nuts or was pbr itself the only one forgotten?

You're quite right.  That points out that there are probably lots of repos still
with an old hacking that need to be updated.

/me goes to write a script that checks that thing.

> I filed
> 
>  https://review.openstack.org/#/c/440010
> 
> under the same topic

Thanks

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs] [docs][release][ptl] Adding docs to the release schedule

2017-03-01 Thread Darren Chan



On 2/3/17 5:07 am, Alexandra Settle wrote:


On 3/1/17, 5:58 PM, "John Dickinson"  wrote:

 
 
 On 1 Mar 2017, at 9:52, Alexandra Settle wrote:
 
 > Hi everyone,

 >
 > I would like to propose that we introduce a “Review documentation” 
period on the release schedule.
 >
 > We would formulate it as a deadline, so that it fits in the schedule and 
making it coincide with the RC1 deadline.
 >
 > For projects that are not following the milestones, we would translate 
this new inclusion literally, so if you would like your project to be documented 
at docs.o.o, then doc must be introduced and reviewed one month before the branch 
is cut.
 
 Which docs are these? There are several different sets of docs that are hosted on docs.o.o that are managed within a project repo. Are you saying those won't get pushed to docs.o.o if they are patched within a month of the cycle release?

The only sets of docs that are published on the docs.o.o site and managed 
in project-specific repos are the project-specific installation guides. That 
management is entirely up to the teams themselves, but I would like to push for 
the integration of a “documentation review” period to ensure that those teams 
are reviewing their docs in their own tree.

This is a preferential suggestion, not a demand. I cannot make you review your 
documentation at any given period.

The ‘month before’ that I refer to would be for introduction of documentation 
and a review period. I will not stop any documentation being pushed to the repo 
unless, of course, it is untested and breaks the installation process.
 
 
 >

 > In the last week since we released Ocata, it has become increasingly 
apparent that the documentation was not updated from the development side. We were 
not aware of a lot of new enhancements, features, or major bug fixes for certain 
projects. This means we have released with incorrect/out-of-date documentation. 
This is not only an unfortunately bad reflection on our team, but on the project 
teams themselves.
 >
FYI, there are a few bugs for the Configuration Reference mentioning that 
options for some services require updating. I've gone through the doc 
and created additional bugs and included the relevant PTL and docs liaison.

 > The new inclusion to the schedule may seem unnecessary, but a lot of 
people rely on this and the PTL drives milestones from this schedule.
 >
 > From our side, I endeavor to ensure our release managers are working 
harder to ping and remind doc liaisons and PTLs to ensure the documentation is 
appropriately updated and working to ensure this does not happen in the future.
 >
 > Thanks,
 >
 > Alex
 
 
 > __

 > OpenStack Development Mailing List (not for usage questions)
 > Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


___
OpenStack-docs mailing list
openstack-d...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment][TripleO][kolla][ansible][fuel] Next steps for cross project collaboration

2017-03-01 Thread Brandon B. Jozsa
+1 for the monthly meetings and a long standing, cross-team, collaborative 
etherpad.

It’s challenging for some projects that already collaborate heavily with 
communities outside of OpenStack to take on additional heavy meeting cycles.

The PTG deployment cross-team collaboration was really awesome. I can’t wait to 
see what this team is able to do together! Very happy to be a part of this 
effort!

Brandon



On March 1, 2017 at 11:30:24 PM, Steven Dake (stdake) 
(std...@cisco.com) wrote:
Andy,

A monthly meeting as I have voiced before in this thread is not sufficient to 
enable good cross-project collaboration.  To rectify this, when Steve Hardy 
started this thread, I started the process of creating an IRC channel in 
#openstack-deployment.

I understand there has been one voice on this thread that wants #openstack-dev 
to be used for that communication.  There have been several others that do want 
a specific IRC channel.  Generally, there is consensus to use the 
#openstack-deployment channel as that is the name of the tag used in the 
mailing list.

Anyone is welcome.  Hope to see folks there.

Regards
-steve
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] PBR 2.0.0 release *may* cause gate failures

2017-03-01 Thread Ian Wienand

On 03/02/2017 01:13 AM, Doug Hellmann wrote:

Tony identified caps in 5 OpenStack community projects (see [1]) as well
as powervm and python-jsonpath-rw-ext. Pull requests to those other
projects are linked from the bug [2].



[1] https://review.openstack.org/#/q/topic:bug/1668848


Am I nuts or was pbr itself the only one forgotten?

I filed

 https://review.openstack.org/#/c/440010

under the same topic

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment][TripleO][kolla][ansible][fuel] Next steps for cross project collaboration

2017-03-01 Thread Steven Dake (stdake)
Andy,

A monthly meeting as I have voiced before in this thread is not sufficient to 
enable good cross-project collaboration.  To rectify this, when Steve Hardy 
started this thread, I started the process of creating an IRC channel in 
#openstack-deployment.

I understand there has been one voice on this thread that wants #openstack-dev 
to be used for that communication.  There have been several others that do want 
a specific IRC channel.  Generally, there is consensus to use the 
#openstack-deployment channel as that is the name of the tag used in the 
mailing list.

Anyone is welcome.  Hope to see folks there.

Regards
-steve


From: Andy McCrae 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, March 1, 2017 at 8:19 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [deployment][TripleO][kolla][ansible][fuel] Next 
steps for cross project collaboration

On 28 February 2017 at 08:25, Flavio Percoco 
> wrote:
On 28/02/17 08:01 +, Jesse Pretorius wrote:
On 2/28/17, 12:52 AM, "Michał Jastrzębski" 
> wrote:

   I think instead of adding yet-another-irc-channel how about create
   weekly meetings? We can rant in scheduled time and it probably will
   get more attention

Happy to meet, in fact I think it’ll be important for keeping things on track – 
however weekly is too often. I think once a month at most is perfectly fine.

Yes, monthly is probably better than weekly in this case (if we ever decide to have 
these meetings).

Agreed - monthly sounds like a good start, we can always see how it goes and 
change up as required.
I think if the only change is that we have a WG added to the Wiki and a 
[deployments] tag added for the ML,
then we can't really expect to change much or have a large impact.

I'd love to see this build some momentum and come up with useful outcomes, 
which I think the PTG session
started really nicely. There are clearly quite a few common issues that we can 
address better as a collective.

Andy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Significant update to ARA (Ansible Run Analysis)

2017-03-01 Thread David Moreau Simard
Hi,

Just a heads up for projects that are currently using ARA [1]: I've
tagged and extensively tested a release candidate which contains a
serious UI overhaul.
At first glance, projects like devstack-gate [2], openstack-ansible
[3], openstack-ansible-tests [4] and kolla-ansible [5] seem to be
working well with this new version.

I figured I'd send a notification prior to tagging the release due to
the breadth of the UI changes; it's completely different, and I wanted
to hear if there were any concerns or issues before I went ahead.

Please let me know if you have any questions or notice anything odd in
the linked reports !

[1]: https://github.com/openstack/ara
[2]: 
http://logs.openstack.org/66/439966/1/check/gate-grenade-dsvm-neutron-multinode-ubuntu-xenial/f7b078e/logs/ara/reports/index.html
[3]: 
http://logs.openstack.org/24/396324/9/check/gate-openstack-ansible-openstack-ansible-ceph-centos-7-nv/f811e1a/logs/ara/reports/index.html
[4]: 
http://logs.openstack.org/62/439962/1/check/gate-openstack-ansible-tests-ansible-func-centos-7/f9fc8c2/logs/ara/reports/index.html
[5]: 
http://logs.openstack.org/63/439963/1/check/gate-kolla-ansible-dsvm-deploy-centos-binary-centos-7-nv/a7243b8/logs/playbook_reports/reports/index.html

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Zuul v3 - What's Coming: What to expect with the Zuul v3 Rollout

2017-03-01 Thread Joshua Hesketh
Thanks for the great write-up Monty :-). Last week was great fun and zuulv3
is making excellent progress. I'm excited for the switch-over.

Cheers,
Josh

On Wed, Mar 1, 2017 at 10:26 AM, Monty Taylor  wrote:

> Hi everybody!
>
> This content can also be found at
> http://inaugust.com/posts/whats-coming-zuulv3.html - but I've pasted it
> in here directly because I know that some folks don't like clicking links.
>
> tl;dr - At last week's OpenStack PTG, the OpenStack Infra team ran the
> first Zuul v3 job, so it's time to start getting everybody ready for
> what's coming
>
> **Don't Panic!** Awesome changes are coming, but you are NOT on the hook
> for rewriting all of your project's gate jobs or anything crazy like
> that. Now grab a seat by the fire, pour yourself a drink while I spin a
> yarn about days gone by and days yet to come.
>
> First, some background
>
> The OpenStack Infra team has been hard at work for quite a while on a
> new version of Zuul (where by 'quite some time' I mean that Jim Blair
> and I had our first Zuul v3 design whiteboarding session in 2014). As
> you might be able to guess given the amount of time, there are some big
> things coming that will have a real and visible impact on the OpenStack
> community and beyond. Since we have a running Zuul v3 now [1], it seemed
> like the time to start getting folks up to speed on what to expect.
>
> There is other deep-dive information on architecture and rationale if
> you're interested[2], but for now we'll focus on what's relevant for end
> users. We're also going to start sending out a bi-weekly "Status of Zuul
> v3" email to the openstack-dev@lists.openstack.org mailing list ... so
> stay tuned!
>
> **Important Note** This post includes some code snippets - but v3 is
> still a work in progress. We know of at least one breaking change that
> is coming to the config format, so please treat this not as a tutorial,
> but as a conceptual overview. Syntax is subject to change.
>
> The Big Ticket Items
>
> While there are a bunch of changes behind the scenes, there are a
> reasonably tractable number of user-facing differences.
>
> * Self-testing In-Repo Job Config
> * Ansible Job Content
> * First-class Multi-node Jobs
> * Improved Job Reuse
> * Support for non-OpenStack Code and Node Systems
> * and Much, Much More
>
> Self-testing In-Repo Job Config
>
> This is probably the biggest deal. There are a lot of OpenStack Devs
> (around 2k in Ocata) and a lot of repositories (1689). There are a lot fewer
> folks on the project-config-core team who are the ones who review all of
> the job config changes (please everyone thank Andreas Jaeger next time
> you see him). That's not awesome.
>
> Self-testing in-repo job config is awesome.
>
> Many systems out there these days have an in-repo job config system.
> Travis CI has had it since day one, and Jenkins has recently added
> support for a Jenkinsfile inside of git repos. With Zuul v3, we'll have
> it too.
>
> Once we roll out v3 to everyone, as a supplement to jobs defined in our
> central config repositories, each project will be able to add a
> zuul.yaml file to their own repo:
>
>
> - job:
>     name: my_awesome_job
>     nodes:
>       - name: controller
>         label: centos-7
>
> - project:
>     name: openstack/awesome_project
>     check:
>       jobs:
>         - my_awesome_job
>
> It's a small file, but there is a lot going on, so let's unpack it.
>
> First we define a job to run. It's named my_awesome_job and it needs one
> node. That node will be named controller and will be based on the
> centos-7 base node in nodepool.
>
> In the next section, we say that we want to run that job in the check
> pipeline, which in OpenStack is defined as the jobs that run when
> patchsets are proposed.
>
> And it's also self-testing!
>
> Everyone knows the fun game of writing a patch to the test jobs, getting
> it approved, then hoping it works once it starts running. With Zuul v3
> in-repo jobs, if there is a change to job definitions in a proposed
> patch, that patch will be tested with those changes applied. And since
> it's Zuul, Depends-On footers are honored as well - so iteration on
> getting a test job right becomes just like iterating on any other patch
> or sequence of patches.
>
> Ansible Job Content
>
> The job my_awesome_job isn't very useful if it doesn't define any
> content. That's done in the repo as well, in playbooks/my_awesome_job.yaml:
>
>
> - hosts: controller
>   tasks:
>     - name: Run make tests
>       shell: make distcheck
>
> As previously mentioned, the job content is now defined in Ansible
> rather than using our Jenkins Job Builder tool. This playbook is going
> to run a tasks on a host called controller which you may remember we
> requested in the job definition. On that host, it will run make
> distcheck. Pretty much anything you can do in Ansible, you can do in a
> Zuul job now, and the playbooks should also be re-usable outside of a
> testing 

[openstack-dev] [all] Cloud-init VM instance not coming up in a multi-node DevStack envionment

2017-03-01 Thread Anil Rao
Hi,

I recently created a multi-node DevStack environment (based on stable/ocata) 
made up of the following nodes:


-1 Controller Node

-1 Network Node

-2 Compute Nodes

All VM instances are only deployed on the 2 compute nodes. Neutron network 
services are provided by the network node.

I am able to create VMs and have them communicate with each other and also with 
external (outside the DevStack environment) endpoints.

However, I find that I am unable to successfully deploy a VM instance that is 
based on cloud-init. As the following console log snippet shows, cloud-init 
running inside a VM instance is unable to get the necessary meta-data and hangs 
during VM instance startup.


91.811361] cloud-init[977]: ci-info: ++Net 
device info+++
[   91.818689] cloud-init[977]: ci-info: 
++--+--+---+---+---+
[   91.825274] cloud-init[977]: ci-info: | Device |  Up  |   Address
|  Mask | Scope | Hw-Address|
[   91.832763] cloud-init[977]: ci-info: 
++--+--+---+---+---+
[   91.839288] cloud-init[977]: ci-info: |   lo   | True |  127.0.0.1   
|   255.0.0.0   |   .   | . |
[   91.857827] cloud-init[977]: ci-info: |   lo   | True |   ::1/128
|   .   |  host | . |
[   91.864806] cloud-init[977]: ci-info: |  eth0  | True | 192.168.1.10 
| 255.255.255.0 |   .   | fa:16:3e:cf:a8:d8 |
[   91.871433] cloud-init[977]: ci-info: |  eth0  | True | 
fe80::f816:3eff:fecf:a8d8/64 |   .   |  link | fa:16:3e:cf:a8:d8 |
[   91.878237] cloud-init[977]: ci-info: 
++--+--+---+---+---+
[   91.896344] cloud-init[977]: ci-info: Route 
IPv4 info
[   91.903652] cloud-init[977]: ci-info: 
+---+-+-+-+---+---+
[   91.912523] cloud-init[977]: ci-info: | Route |   Destination   |   Gateway  
 | Genmask | Interface | Flags |
[   91.930131] cloud-init[977]: ci-info: 
+---+-+-+-+---+---+
[   91.936482] cloud-init[977]: ci-info: |   0   | 0.0.0.0 | 
192.168.1.1 | 0.0.0.0 |eth0   |   UG  |
[   91.942651] cloud-init[977]: ci-info: |   1   | 169.254.169.254 | 
192.168.1.1 | 255.255.255.255 |eth0   |  UGH  |
[   91.948624] cloud-init[977]: ci-info: |   2   |   192.168.1.0   |   0.0.0.0  
 |  255.255.255.0  |eth0   |   U   |
[   91.954798] cloud-init[977]: ci-info: 
+---+-+-+-+---+---+
[   91.961102] cloud-init[977]: 2017-03-01 17:46:38,723 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]: bad 
status code [404]
[   92.997374] cloud-init[977]: 2017-03-01 17:46:39,917 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [1/120s]: bad 
status code [404]
[   94.320985] cloud-init[977]: 2017-03-01 17:46:41,240 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [2/120s]: bad 
status code [404]
[   95.480615] cloud-init[977]: 2017-03-01 17:46:42,400 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [3/120s]: bad 
status code [404]
[
...

[  118.589843] cloud-init[977]: 2017-03-01 17:47:05,509 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [27/120s]: bad 
status code [404]
[  121.796946] cloud-init[977]: 2017-03-01 17:47:08,716 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [30/120s]: bad 
status code [404]
[  124.918111] cloud-init[977]: 2017-03-01 17:47:11,837 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [33/120s]: bad 
status code [404]
[  129.195778] cloud-init[977]: 2017-03-01 17:47:16,115 - 
url_helper.py[WARNING]: Calling 
'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [37/120s]: bad 
status code [404]


I am not sure what needs to be done to ensure that the metadata endpoint is 
accessible from inside the VM instance and was looking for some assistance. Any 
pointers and/or suggestions would be most appreciated.

Kind regards,
Anil
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Horizon][stable][requirements] Modifying global-requirements to cap xstatic package versions

2017-03-01 Thread Richard Jones
Hi folks,

We've run into some issues with various folks installing Horizon and
its dependencies using just requirements.txt, which doesn't limit the
versions of xstatic packages beyond some minimum version. This is a
particular problem for releases prior to Ocata, since those are not
compatible with the latest versions of some of the xstatic packages.
So, we believe what's necessary is to:

1. Update current global-requirements.txt to pin the current released
version of each xstatic package. We don't update xstatic packages very
often, so keeping g-r in lock-step with upper-constraints.txt is
reasonable, I think.
2. Update stable versions of global-requirements.txt to restrict them
to the versions we know are compatible based on the versions in
upper-constraints for the particular stable release.
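
To illustrate the first item, the shape of the change in global-requirements.txt
would be roughly the following; the package name and version numbers are made up
for the example:

    # before: only a floor, so newer (incompatible) xstatic releases get picked up
    XStatic-Angular>=1.3.7
    # after: pinned to the currently released version, kept in lock-step with
    # upper-constraints.txt
    XStatic-Angular==1.5.8.0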


Thoughts?

 Richard

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][docs] Is anyone interested in being the docs liaison for Nova?

2017-03-01 Thread Zhenyu Zheng
Hi,

I'm not a native English speaker but I would like to have a try if possible
:)

On Wed, Mar 1, 2017 at 11:45 PM, Matt Riedemann  wrote:

> There is a need for a liaison from Nova for the docs team to help with
> compute-specific docs in the install guide and various manuals.
>
> For example, we documented placement and cells v2 in the nova devref in
> Ocata but instructions on those aren't in the install guide, so the docs
> team is adding that here [1].
>
> I'm not entirely sure what the docs liaison role consists of, but I assume
> it at least means attending docs meetings, helping to review docs patches
> that are related to nova, helping to alert the docs team of big changes
> coming in a release that will impact the install guide, etc.
>
> From my point of view, I've historically pushed nova developers to be
> documenting new features within the nova devref since it was "closer to
> home" and could be tied to landing said feature in the nova tree, so there
> was more oversight on the docs actually happening *somewhere* rather than a
> promise to work them in the non-nova manuals, which a lot of the time was
> lip service and didn't actually happen once the feature was in. But there
> is still the need for the install guide as the first step to deploying nova
> so we need to balance both things.
>
> If no one else steps up for the docs liaison role, by default it lands on
> me, so I'd appreciate any help here.
>
> [1] https://review.openstack.org/#/c/438328/
>
> --
>
> Thanks,
>
> Matt Riedemann
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] 2nd day recording for the Virtual Team Gathering

2017-03-01 Thread Antoni Segura Puimedon
Hi Kuryrs!

Thanks again for joining the Virtual Team Gathering sessions. Here you
have the links to the recordings:

https://youtu.be/o1RKNOAhqho

https://youtu.be/ovbK5kk5AZ0

See you today on the last day of this first VTG!

Toni

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] state of the stable/mitaka branches

2017-03-01 Thread Tony Breeds
On Wed, Mar 01, 2017 at 09:29:14PM +, Jeremy Stanley wrote:
> On 2017-03-01 13:24:09 -0800 (-0800), Ihar Hrachyshka wrote:
> [...]
> > Other projects spent some time upfront and adopted constraints
> > quite a while ago. I am surprised that there are still stable
> > branches that don't do that.
> [...]
> 
> Yep, I had to backport it for some oslo.middleware stable branches
> recently so we could get a security fix through. There are likely
> some still lurking out there we just haven't spotted because they
> receive new changes on those branches infrequently (or never).

So I did a quick grep and this is what I get:

BRANCH                 ALL  MERGED  OPEN  HELP  SKIP
origin/master          252     231    15     2     4
origin/stable/ocata    202     184     0    14     4
origin/stable/newton   193     104     0    85     4
origin/stable/mitaka   168      71     0    93     4

This is based on repos that have opted into being managed by the
requirements team.

This shows that of the 252 repos that have an origin/master branch, 231 have
merged constraints support, 15 have open reviews to do so, 2 need some form of
help because they should support constraints but they're difficult, and 4
shouldn't support constraints.

The 4 projects in SKIP are:
openstack/requirements     SKIP !
openstack/tacker-horizon   SKIP !
openstack/tempest          SKIP !
openstack/tempest-lib      SKIP !

and there's certainly scope to re-evaluate that.  I suspect that
openstack/tacker-horizon should really be moved to the help section but this
was a quick "see how we're traveling" script.

You can see that the number of projects that need "HELP" gets higher as we get
to older releases.  For most of them it should be a simple matter of
cherry-picking the patch from the nearest branch and then just using the correct
branch to get the file from the requirements repo.
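
The backport itself is usually just something like the following; the branch
names and the sha placeholder are illustrative:

    git checkout -b constraints-support origin/stable/mitaka
    git cherry-pick <sha of the constraints-support patch from stable/newton>
    # then point tox at the matching requirements branch, e.g. append
    # ?h=stable/mitaka to the upper-constraints.txt URL used in install_command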

So clearly there's scope for project teams and the requirements team to do
work here, but right now it isn't on the plan for this cycle.

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] state of the stable/mitaka branches

2017-03-01 Thread Tony Breeds
On Wed, Mar 01, 2017 at 01:24:09PM -0800, Ihar Hrachyshka wrote:
> I am surprised that there are still
> stable branches that don't do that. It's so much easier to maintain
> them with constraints in place!

Taking a tangent: at the beginning of the Ocata cycle we only had 55%
constraints usage on *master*. That's up much closer to 90% these days.

So master and ocata are looking better but there is still plenty of backport
potential to newton (and possibly mitaka).

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cross-project][nova][cinder][designate][neutron] Common support-matrix.py

2017-03-01 Thread Mike Perez
Hey all,

I kicked off a thread [1] to start talking about approaches to improving
vendor discoverability by improving our Marketplace [2]. In order to improve
our marketplace, having the projects be more a part of the process would allow
the information to be more accurate about which vendors have good support in the
respective service.

It was discovered that we have a common solution of using INI files and parsing
that with a common support-matrix.py script that originated out of nova [3].
I would like to propose we push this into some common sphinx extension project.
Are there any suggestions of where that could live?
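
For anyone who hasn't seen it, the INI layout is roughly the sketch below;
section and key names here are approximations, not copied from any of the
files linked further down:

    [driver.libvirt-kvm-x86]
    title=Libvirt KVM (x86)

    [operation.attach-volume]
    title=Attach block volume to instance
    status=optional
    notes=Whether a block volume can be attached to a running server.
    driver.libvirt-kvm-x86=complete

The sphinx extension parses this and renders the feature-by-driver table in
the built docs.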

I've looked at how Nova [3][4], Neutron [5][6] and Designate [7][8] are doing
this today. Nova and Neutron are pretty close, and Designate is a much more
simplified version. Cinder [9][10] is not using INI files, but instead going
off the driver classes themselves. Are there any other projects I'm missing?

Cinder and Designate have drivers per row, as opposed to Nova and Neutron,
which have features per row. This makes sense given the difference in drivers
versus features?

I'm assuming the Designate matrix is saying every driver supports every feature
in its API? If so, that's so awesome and makes me happy.

I would like to start brainstorming on how we can converge on a common matrix
table design so things are a bit more consistent and easier for a common
parsing tool.


[1] - 
http://lists.openstack.org/pipermail/openstack-dev/2017-January/110151.html
[2] - https://www.openstack.org/marketplace/drivers/
[3] - 
https://docs.openstack.org/developer/nova/support-matrix.html#operation_maintenance_mode
[4] - 
http://git.openstack.org/cgit/openstack/nova/tree/doc/ext/support_matrix.py
[5] - https://review.openstack.org/#/c/318192/76
[6] - 
http://docs-draft.openstack.org/92/318192/76/check/gate-neutron-docs-ubuntu-xenial/48cdeb7//doc/build/html/feature_classification/general_feature_support_matrix.html
[7] - 
https://git.openstack.org/cgit/openstack/designate/tree/doc/ext/support_matrix.py
[8] - https://docs.openstack.org/developer/designate/support-matrix.html
[9] - https://review.openstack.org/#/c/371169/15
[10] - 
http://docs-draft.openstack.org/69/371169/15/check/gate-cinder-docs-ubuntu-xenial/aa1bdb1//doc/build/html/driver_support_matrix.html

-- 
Mike Perez


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][release][ptl] Adding docs to the release schedule

2017-03-01 Thread John Dickinson


On 1 Mar 2017, at 10:07, Alexandra Settle wrote:

> On 3/1/17, 5:58 PM, "John Dickinson"  wrote:
>
>
>
> On 1 Mar 2017, at 9:52, Alexandra Settle wrote:
>
> > Hi everyone,
> >
> > I would like to propose that we introduce a “Review documentation” 
> period on the release schedule.
> >
> > We would formulate it as a deadline, so that it fits in the schedule 
> and making it coincide with the RC1 deadline.
> >
> > For projects that are not following the milestones, we would translate 
> this new inclusion literally, so if you would like your project to be 
> documented at docs.o.o, then doc must be introduced and reviewed one month 
> before the branch is cut.
>
> Which docs are these? There are several different sets of docs that are 
> hosted on docs.o.o that are managed within a project repo. Are you saying 
> those won't get pushed to
> docs.o.o if they are patched within a month of the cycle release?
>
> The only sets of docs that are published on the docs.o.o site that are 
> managed in project-specific repos is the project-specific installation 
> guides. That management is entirely up to the team themselves, but I would 
> like to push for the integration of a “documentation review” period to ensure 
> that those teams are reviewing their docs in their own tree.
>
> This is a preferential suggestion, not a demand. I cannot make you review 
> your documentation at any given period.
>
> The ‘month before’ that I refer to would be for introduction of documentation 
> and a review period. I will not stop any documentation being pushed to the 
> repo unless, of course, it is untested and breaks the installation process.

There are the dev docs, the install guide, and the api reference. Each of these 
is published at docs.o.o, and each has elements that need to be up-to-date 
with a release.

>
>
> >
> > In the last week since we released Ocata, it has become increasingly 
> apparent that the documentation was not updated from the development side. We 
> were not aware of a lot of new enhancements, features, or major bug fixes for 
> certain projects. This means we have released with incorrect/out-of-date 
> documentation. This is not only an unfortunately bad reflection on our team, 
> but on the project teams themselves.
> >
> > The new inclusion to the schedule may seem unnecessary, but a lot of 
> people rely on this and the PTL drives milestones from this schedule.
> >
> > From our side, I endeavor to ensure our release managers are working 
> harder to ping and remind doc liaisons and PTLs to ensure the documentation 
> is appropriately updated and working to ensure this does not happen in the 
> future.

Overall, I really like the general concept here. It's very important to have 
good docs. Good docs start with the patch, and we should be encouraging the 
idea of "patch must have both tests and docs before landing".

On a personal note, though, I think I'll find this pretty tough. First, it's 
really hard for me to define when docs are "done", so it's hard to know that 
the docs are "right" at the time of release. Second, docs are built and 
published at each commit, so updating the docs "later, in a follow-on patch" is 
a simple thing to hope for and gives fast feedback, even after a release. (Of 
course the challenge is actually *doing* the patch later--see my previous 
paragraph.)

> >
> > Thanks,
> >
> > Alex
>
>
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Pike PTG recap - API

2017-03-01 Thread Matt Riedemann
On Thursday afternoon at the PTG we talked about various API-related 
topics. The full etherpad is here [1]. These are the highlights.


Policy
--

There was a separate etherpad for this [2]. We talked through a few 
proposals that John Garbutt has for dealing with policy in nova. One was 
for simply documenting the various policies so that you get a 
description of the policy and what API path(s) it touches. There are a 
couple of other specs in [2] dealing with fixing the misuse of the 
admin_or_owner role and another spec proposing to add more granular 
roles and make them the defaults. During the session there was mostly 
some discussion about the ideas and those will be fed back into the 
specs. If you're an operator that hates dealing with policy, you should 
read those specs.


Deploy nova-api under WSGI
--

It's been possible to run nova-api under WSGI for several releases now 
but it's always been considered experimental by the Nova team, and in 
Ocata we found out that there is actually a pretty serious bug [3] when 
running that way, which affects how Nova deals with rolling upgrades. 
Since running the APIs under WSGI is a cross-project goal for Pike, we 
talked a bit about the plan to make this happen, which involves fixing 
that bug and deploying nova-api under httpd in the 
gate-tempest-dsvm-neutron-nova-next-full-ubuntu-xenial-nv job. EmilienM 
was already looking at making nova-api run under apache in devstack. 
There were also some wrinkles that have to be sorted out when fixing the 
bug, and sdague and cdent said they'd work on fixing that. Finally we 
talked about when we can drop support for running nova-api with 
eventlet, and thought that if we can get the wsgi support in early 
enough in Pike, we can deprecate running nova-api under eventlet in Pike 
and plan to remove that support in the R release.
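
For context, running nova-api under httpd with mod_wsgi looks roughly like the
vhost below; the WSGI script path, port and process settings are illustrative
assumptions, not what the devstack change will necessarily use:

    Listen 8774
    <VirtualHost *:8774>
        WSGIDaemonProcess nova-api processes=2 threads=1 user=nova
        WSGIProcessGroup nova-api
        WSGIScriptAlias / /usr/local/bin/nova-api-wsgi
        WSGIApplicationGroup %{GLOBAL}
        WSGIPassAuthorization On
        ErrorLog /var/log/apache2/nova-api.log
    </VirtualHost>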


Deprecating file injection
--

We talked about an older thread I brought up in Ocata [4] related to the 
issues with "personality" files in the compute API and agreed to:


(1) Drop the VFSLocalFS fallback for libvirt when guestfs isn't 
available. There will not be a deprecation period for this as it was a 
hack for older ubuntu versions anyway and is a known security issue. 
Sean Dague is working on this.
(2) Deprecate personality files from the compute API with a 
microversion. We'll continue to honor personality files in the API 
before the deprecation microversion. I'll be working on this.


We also made a note to reach out to people that were interested in using 
dynamic vendordata [5]. I did that and the feedback has been positive, 
which further supports deprecating file injection and hooks.


Return instance flavor in server body
-

We talked about a spec [6] which proposes to return the flavor 
information used to create a server as part of the server GET response 
body, since the original flavor used to create the instance could have 
been deleted or changed so getting that information via the flavors API 
later may not be accurate. There was some debate about the flavor info 
to return (like id or name) which was settled, at least for now, and 
those updates are being made in the spec which Chris Friesen is now driving.


API improvements for cold migration
---

Takashi Natsume brought up some previously approved specs related to 
being able to list/show cold migrations for a server, force the host for 
a cold migration and abort an in-progress cold migration. There was 
general agreement on doing these but the abort cold migration spec 
needed to be updated a bit to be less libvirt-specific since we expect 
other virt drivers should be able to implement that one.


Add scheduler hints to the server GET response
--

This was proposed by Balazs Gibizer (gibi) and is the same idea as the 
flavors one above. It would be useful to expose the original scheduler 
hints used when creating a server so that another server could be 
created with the same scheduler hints. The action on this item was gibi 
needs to write a spec but there was general agreement it seemed OK.


Service-locked servers
--

This was a short discussion about a feature that projects that use 
service instances, like Trove, would like to have, where the service 
(trove in this case) has a lock on a server in Nova and only the service 
(or the admin) can perform actions on the server. John Garbutt was going 
to write up a nova spec for this, and Amrith Kumar was going to write up 
a Trove spec for using the Nova feature.


[1] https://etherpad.openstack.org/p/nova-ptg-pike-api
[2] https://etherpad.openstack.org/p/pike-ptg-keystone-policy
[3] https://bugs.launchpad.net/nova/+bug/1661360
[4] 
http://lists.openstack.org/pipermail/openstack-dev/2016-November/107195.html

[5] https://docs.openstack.org/developer/nova/vendordata.html
[6] 

Re: [openstack-dev] Zuul v3 - What's Coming: What to expect with the Zuul v3 Rollout

2017-03-01 Thread Emilien Macchi
On Wed, Mar 1, 2017 at 12:31 PM, Kosnik, Lubosz  wrote:
> So did I understand that properly? There will be the possibility to create real
> multi-node tests, like with 3-4 nodes?

You can already do that, openstack-infra/nodepool proposes some
2-nodes, 3-nodes and even 4-nodes jobs. See jenkins/jobs/projects.yaml
to see which projects are using it.

> Cheers,
> Lubosz
>
> On Feb 28, 2017, at 7:13 PM, joehuang  wrote:
>
> So cool! Look forward to multi-node jobs as first class
>
> Best Regards
> Chaoyi Huang (joehuang)
>
> 
> From: Monty Taylor [mord...@inaugust.com]
> Sent: 01 March 2017 7:26
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] Zuul v3 - What's Coming: What to expect with the
> Zuul v3 Rollout
>
> Hi everybody!
>
> This content can also be found at
> http://inaugust.com/posts/whats-coming-zuulv3.html - but I've pasted it
> in here directly because I know that some folks don't like clicking links.
>
> tl;dr - At last week's OpenStack PTG, the OpenStack Infra team ran the
> first Zuul v3 job, so it's time to start getting everybody ready for
> what's coming
>
> **Don't Panic!** Awesome changes are coming, but you are NOT on the hook
> for rewriting all of your project's gate jobs or anything crazy like
> that. Now grab a seat by the fire, pour yourself a drink while I spin a
> yarn about days gone by and days yet to come.
>
> First, some background
>
> The OpenStack Infra team has been hard at work for quite a while on a
> new version of Zuul (where by 'quite some time' I mean that Jim Blair
> and I had our first Zuul v3 design whiteboarding session in 2014). As
> you might be able to guess given the amount of time, there are some big
> things coming that will have a real and visible impact on the OpenStack
> community and beyond. Since we have a running Zuul v3 now [1], it seemed
> like the time to start getting folks up to speed on what to expect.
>
> There is other deep-dive information on architecture and rationale if
> you're interested[2], but for now we'll focus on what's relevant for end
> users. We're also going to start sending out a bi-weekly "Status of Zuul
> v3" email to the openstack-dev@lists.openstack.org mailing list ... so
> stay tuned!
>
> **Important Note** This post includes some code snippets - but v3 is
> still a work in progress. We know of at least one breaking change that
> is coming to the config format, so please treat this not as a tutorial,
> but as a conceptual overview. Syntax is subject to change.
>
> The Big Ticket Items
>
> While there are a bunch of changes behind the scenes, there are a
> reasonably tractable number of user-facing differences.
>
> * Self-testing In-Repo Job Config
> * Ansible Job Content
> * First-class Multi-node Jobs
> * Improved Job Reuse
> * Support for non-OpenStack Code and Node Systems
> * and Much, Much More
>
> Self-testing In-Repo Job Config
>
> This is probably the biggest deal. There are a lot of OpenStack Devs
> (around 2k in Ocata) and a lot of repositories (1689). There are a lot fewer
> folks on the project-config-core team who are the ones who review all of
> the job config changes (please everyone thank Andreas Jaeger next time
> you see him). That's not awesome.
>
> Self-testing in-repo job config is awesome.
>
> Many systems out there these days have an in-repo job config system.
> Travis CI has had it since day one, and Jenkins has recently added
> support for a Jenkinsfile inside of git repos. With Zuul v3, we'll have
> it too.
>
> Once we roll out v3 to everyone, as a supplement to jobs defined in our
> central config repositories, each project will be able to add a
> zuul.yaml file to their own repo:
>
>
> - job:
>     name: my_awesome_job
>     nodes:
>       - name: controller
>         label: centos-7
>
> - project:
>     name: openstack/awesome_project
>     check:
>       jobs:
>         - my_awesome_job
>
> It's a small file, but there is a lot going on, so let's unpack it.
>
> First we define a job to run. It's named my_awesome_job and it needs one
> node. That node will be named controller and will be based on the
> centos-7 base node in nodepool.
>
> In the next section, we say that we want to run that job in the check
> pipeline, which in OpenStack is defined as the jobs that run when
> patchsets are proposed.
>
> And it's also self-testing!
>
> Everyone knows the fun game of writing a patch to the test jobs, getting
> it approved, then hoping it works once it starts running. With Zuul v3
> in-repo jobs, if there is a change to job definitions in a proposed
> patch, that patch will be tested with those changes applied. And since
> it's Zuul, Depends-On footers are honored as well - so iteration on
> getting a test job right becomes just like iterating on any other patch
> or sequence of patches.
>
> Ansible Job Content
>
> The job my_awesome_job isn't very useful if it doesn't define any
> 

Re: [openstack-dev] [keystone][defcore][refstack] Removal of the v2.0 API

2017-03-01 Thread Rodrigo Duarte
On Wed, Mar 1, 2017 at 7:10 PM, Lance Bragstad  wrote:

> During the PTG, Morgan mentioned that there was the possibility of
> keystone removing the v2.0 API [0]. This thread is a follow up from that
> discussion to make sure we loop in the right people and do everything by
> the books.
>
> The result of the session [1] listed the following work items:
> - Figure out how we can test the removal and make the job voting (does the
> v3-only job count for this)?
>

We have two v3-only jobs: one only runs keystone's tempest plugin tests -
which are specific to federation (it configures a federated environment
using mod_shib) - and another one (non-voting) that runs tempest. I believe
the latter can be a good way to initially validate the v2.0 removal.


> - Reach out to defcore and refstack communities about removing v2.0 (which
> is partially what this thread is doing)
>
> Outside of this thread, what else do we have to do from a defcore
> perspective to make this happen?
>
> Thanks for the time!
>
> [0] https://review.openstack.org/#/c/437667/
> [1] https://etherpad.openstack.org/p/pike-ptg-keystone-deprecations
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Rodrigo Duarte Sousa
Senior Quality Engineer @ Red Hat
MSc in Computer Science
http://rodrigods.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Tap-as-a-Service] Problem in visualizing port-mirroring information

2017-03-01 Thread Anil Rao
Hi Manik,

If the br-tap bridge is not present, it is an indication that the TaaS Agent is 
not running. For a single-node setup, you will need to add both of the 
following lines:

  enable_service taas
  enable_service taas_openvswitch_agent

to the local.conf file.

You can follow the instructions in the DevStack section of the TaaS GIT repo.

Regards,
Anil

From: Manik Bindlish [mailto:manik.bindl...@nectechnologies.in]
Sent: Wednesday, March 01, 2017 2:58 AM
To: OpenStack Development Mailing List (not for usage questions); 
yamam...@midokura.com
Subject: Re: [openstack-dev] [Tap-as-a-Service] Problem in visualizing 
port-mirroring information

Hi All,
I verified and found that with my devstack setup (on master), I could not 
see the br-tap bridge (as shown in the attached image) on my all-in-one deployment.

Please let me know if this is an issue or if this is expected. If it is expected, 
then how can we monitor the mirrored packets without br-tap on OVS?

Regards,
Manik

From: Manik Bindlish [mailto:manik.bindl...@nectechnologies.in]
Sent: Tuesday, February 28, 2017 5:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Problem in visualizing port-mirroring information


Hi,



>  have you tried to disable port security?

I have disabled the security groups on both ports. After disabling them, I am not 
able to ping the instances.
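
For reference, port security is usually disabled per port with something along
these lines (the port ID used here is the one from the output below; exact
flags may vary with the client version):

  neutron port-update dde6689e-4f0f-4fe2-9d28-1182ef03e113 \
      --no-security-groups --port-security-enabled=False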



===



neutron CLI is deprecated and will be removed in the future. Use openstack CLI 
instead.

+------------------------+--------------------------------------------------------------------------------
| Field                  | Value
+------------------------+--------------------------------------------------------------------------------
| admin_state_up         | True
| allowed_address_pairs  |
| binding:host_id        | openstack-nti-11
| binding:profile        | {}
| binding:vif_details    | {"port_filter": true, "ovs_hybrid_plug": true}
| binding:vif_type       | ovs
| binding:vnic_type      | normal
| created_at             | 2017-02-27T10:13:51Z
| description            |
| device_id              | 1582911f-ed59-43d6-9711-0e505db5413b
| device_owner           | compute:None
| extra_dhcp_opts        |
| fixed_ips              | {"subnet_id": "398b77d2-49aa-4d29-ac4d-f0fd14943120", "ip_address": "10.0.0.12"}
|                        | {"subnet_id": "401e3dde-1504-4620-8bbc-0ad6e54b0570", "ip_address": "fdaa:99df:2497:0:f816:3eff:fe58:4d36"}
| id                     | dde6689e-4f0f-4fe2-9d28-1182ef03e113
| mac_address            | fa:16:3e:58:4d:36
| name                   |
| network_id             | 7fae0053-aeeb-44ac-9336-41b9cae8a9ba
| port_security_enabled  | True
| project_id             | 9c342a60b7cd4e8cacc8c5f6f92e80e6
| revision_number        | 13
| security_groups        |

Re: [openstack-dev] [all] PBR 2.0.0 release *may* cause gate failures

2017-03-01 Thread Matt Riedemann

On 3/1/2017 8:13 AM, Doug Hellmann wrote:

Excerpts from Sean Dague's message of 2017-03-01 07:57:31 -0500:

On 03/01/2017 12:26 AM, Tony Breeds wrote:

Hi All,
Earlier today the release team tagged PBR 2.0.0.  The reason for the major
version bump is because warnerrors has been removed in favor of
warning-is-error from sphinx >= 1.5.0.

It seems that several projects both inside and outside OpenStack have
capped pbr <2.0.0 so we can't actually use this release yet.  The requirements
team will work with all projects to remove the cap on pbr in those projects.

The good news is that projects using upper-constraints.txt are insulated from
this and shouldn't be affected[1].  However upper-constraints.txt isn't being 
used by all projects, and *those* projects will start seeing
ContextualVersionConflict errors for pbr 2.0.0 in gate logs.  It's recommended that
those projects add a local ban for pbr and associate it with:
https://bugs.launchpad.net/openstack-requirements/+bug/1668848

Then once the situation is resolved we can unwind and remove the temporary caps.

Yours Tony.

[1] There is at least 1 corner case where the coverage job installed directly
from a git URL and therefore that wasn't protected.


So, I feel like we hit a similar issue around Vancouver with a pbr bump.
Can we stop capping pbr per rule now?


Tony identified caps in 5 OpenStack community projects (see [1]) as well
as powervm and python-jsonpath-rw-ext. Pull requests to those other
projects are linked from the bug [2].

The sqlalchemy-migrate 0.11.0 release should fix that library. The
release team will prioritize releases for the other dependencies today
as they come in.

Doug

[1] https://review.openstack.org/#/q/topic:bug/1668848
[2] https://bugs.launchpad.net/openstack-requirements/+bug/1668848

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



nova-specs was also broken, the fix is here for anyone that needs to 
recheck or rebase nova-specs changes:


https://review.openstack.org/#/c/439878/
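
For projects hitting the same ContextualVersionConflict, the temporary local
ban Tony mentions above is a one-line requirements.txt change along these lines
(the lower bound shown is illustrative; keep whatever your project already
requires):

  pbr>=1.8,!=2.0.0  # Temporary ban, see https://bugs.launchpad.net/openstack-requirements/+bug/1668848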

--

Thanks,

Matt Riedemann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][defcore][refstack] Removal of the v2.0 API

2017-03-01 Thread Lance Bragstad
During the PTG, Morgan mentioned that there was the possibility of keystone
removing the v2.0 API [0]. This thread is a follow up from that discussion
to make sure we loop in the right people and do everything by the books.

The result of the session [1] listed the following work items:
- Figure out how we can test the removal and make the job voting (does the
v3-only job count for this)?
- Reach out to defcore and refstack communities about removing v2.0 (which
is partially what this thread is doing)

Outside of this thread, what else do we have to do from a defcore
perspective to make this happen?

Thanks for the time!

[0] https://review.openstack.org/#/c/437667/
[1] https://etherpad.openstack.org/p/pike-ptg-keystone-deprecations
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] state of the stable/mitaka branches

2017-03-01 Thread Jeremy Stanley
On 2017-03-01 13:24:09 -0800 (-0800), Ihar Hrachyshka wrote:
[...]
> Other projects spent some time upfront and adopted constraints
> quite a while ago. I am surprised that there are still stable
> branches that don't do that.
[...]

Yep, I had to backport it for some oslo.middleware stable branches
recently so we could get a security fix through. There are likely
some still lurking out there we just haven't spotted because they
receive new changes on those branches infrequently (or never).
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] state of the stable/mitaka branches

2017-03-01 Thread Ihar Hrachyshka
Agreed. All I am saying is that as long as there was no change in the
policy, projects are expected to keep up.

I see that upper-constraints.txt is mentioned in the email several times.
I believe it's the least that the project could do to fix the branch,
and lack of the fix doesn't seem like a good enough reason to drop the
ball in the middle (for the project as a whole, not for any specific
contributor). Other projects spent some time upfront and adopted
constraints quite a while ago. I am surprised that there are still
stable branches that don't do that. It's so much easier to maintain
them with constraints in place!

Ihar

On Wed, Mar 1, 2017 at 12:34 PM, Jeremy Stanley  wrote:
> On 2017-03-01 11:36:47 -0800 (-0800), Ihar Hrachyshka wrote:
>> On Wed, Mar 1, 2017 at 11:15 AM, Pavlo Shchelokovskyy
>>  wrote:
>> > With all the above, the question is should we really fix the gates for the
>> > mitaka branch now? According to OpenStack release page [1] the Mitaka
>> > release will reach end-of-life on April 10, 2017.
>>
>> Yes we should. It's part of the contract with consumers that rely on
>> follows:stable-policy tag owned by Ironic and other projects.
>
> It's a two-way street though. As a community we agreed to extend
> stable support timelines on the promise that the people consuming
> those would step up to keep them testable. If key projects are
> having trouble testing stable/mitaka now and the people relying on
> that aren't helping fix the situation, then it's time to again
> reevaluate our earlier choices for support duration.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment][TripleO][kolla][ansible][fuel] Next steps for cross project collaboration

2017-03-01 Thread Michał Jastrzębski
We can increase the cadence as needed (for example closer to release when
we all have to deal with project changes). We could also do some PTG/Forum
scheduling for sessions like that, so they won't be taken from our
projects' usual tracks. (Thank you, TripleO community, for your precious
session time!)

On 1 March 2017 at 07:19, Andy McCrae  wrote:
> On 28 February 2017 at 08:25, Flavio Percoco  wrote:
>>
>> On 28/02/17 08:01 +, Jesse Pretorius wrote:
>>>
>>> On 2/28/17, 12:52 AM, "Michał Jastrzębski"  wrote:
>>>
>>>I think instead of adding yet-another-irc-channel how about create
>>>weekly meetings? We can rant in scheduled time and it probably will
>>>get more attention
>>>
>>> Happy to meet, in fact I think it’ll be important for keeping things on
>>> track – however weekly is too often. I think once a month at most is
>>> perfectly fine.
>>
>>
>> Yes, monthly prolly better than weekly in this cas (if we ever decide to
>> have these meetings).
>
>
> Agreed - monthly sounds like a good start, we can always see how it goes and
> change up as required.
> I think if the only change is that we have a WG added to the Wiki and a
> [deployments] tag added for the ML,
> then we can't really expect to change much or have a large impact.
>
> I'd love to see this build some momentum and come up with useful outcomes,
> which I think the PTG session
> started really nicely. There are clearly quite a few common issues that we
> can address better as a collective.
>
> Andy
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Monitoring script framework PoC

2017-03-01 Thread Carter, Kevin
I think this is a great idea. It's fairly straightforward to
contribute to, and with the aim of supporting multiple output formats I
can see this benefiting lots of folks without being bound to a single
system.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Boot from Volume meeting?

2017-03-01 Thread Julia Kreger
On Wed, Mar 1, 2017 at 3:10 PM, Jim Rollenhagen  wrote:
> On Wed, Mar 1, 2017 at 9:07 AM, Dmitry Tantsur  wrote:
[trim]
>> Speaking of which, I propose the networking subteam to start (continue?)
>> having their own meeting as well. There is a lot of stuff going on that is
>> hard to catch up with.
>
>
> We stopped that meeting because it had turned into a status update. We just
> rolled it back up into the usual subteam updates.
>
> I'd like to leave it to the folks working on that to decide if they need a
> meeting to keep up with everything.

In retrospect, it feels like the networking integration work will be
something that we never completely close out as a topic of
regular discussion. Part of me wishes we could efficiently share a
larger flex time block, but I feel that it would quickly become a bit
of a logistical headache at the beginning of a cycle.

That being said, I'll propose a new meeting/meeting time.
Accordingly, I've created a doodle poll [0], with options leaning
toward Thursday/Friday.

-Julia

[0] http://doodle.com/poll/qwhnpqazmf7fn5ik

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Proposing Michal Gershenzon to the core team

2017-03-01 Thread Lingxian Kong
+1, she has indeed been making great contributions to Mistral. Welcome,
Michal :-)


Cheers,
Lingxian Kong (Larry)

On Thu, Mar 2, 2017 at 5:47 AM, Renat Akhmerov 
wrote:

> Hi,
>
> Based on the stats of Michal Gershenzon in Ocata cycle I’d like to promote
> her to the core team.
> Michal works at Nokia CloudBand and being a CloudBand engineer she knows
> Mistral very well
> as a user and behind the scenes helped find a lot of bugs and make countless
> improvements, especially in performance.
>
> Overall, she is a deep thinker, cares about details, always has an unusual
> angle of view on any
> technical problem. She is one of a few people that I’m aware of who I
> could call a Mistral expert.
> She also participates in almost every community meeting in IRC.
>
> In Ocata she improved her statistics pretty significantly (e.g. ~60
> reviews although the cycle was
> very short) and is keeping up the good pace now. Also, Michal is
> officially planning to allocate
> more time for upstream development in Pike
>
> I believe Michal would be a great addition for the Mistral core team.
>
> Please let me know if you agree with that.
>
> Thanks
>
> [1] http://stackalytics.com/?module=mistral-group=
> ocata_id=michal-gershenzon
>
> Renat Akhmerov
> @Nokia
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] state of the stable/mitaka branches

2017-03-01 Thread Jeremy Stanley
On 2017-03-01 11:36:47 -0800 (-0800), Ihar Hrachyshka wrote:
> On Wed, Mar 1, 2017 at 11:15 AM, Pavlo Shchelokovskyy
>  wrote:
> > With all the above, the question is should we really fix the gates for the
> > mitaka branch now? According to OpenStack release page [1] the Mitaka
> > release will reach end-of-life on April 10, 2017.
> 
> Yes we should. It's part of the contract with consumers that rely on
> follows:stable-policy tag owned by Ironic and other projects.

It's a two-way street though. As a community we agreed to extend
stable support timelines on the promise that the people consuming
those would step up to keep them testable. If key projects are
having trouble testing stable/mitaka now and the people relying on
that aren't helping fix the situation, then it's time to again
reevaluate our earlier choices for support duration.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Boot from Volume meeting?

2017-03-01 Thread Jim Rollenhagen
On Wed, Mar 1, 2017 at 9:07 AM, Dmitry Tantsur  wrote:

> Thanks for writing this!
>
> On 02/28/2017 03:42 PM, Julia Kreger wrote:
>
>> Greetings fellow ironic humanoids!
>>
>> As many have known, I've been largely attempting to drive Boot from
>> Volume functionality in ironic over the past two years.  Largely, in a
>> slow incremental approach, which is in part due to how I perceived it
>> to best fit into the existing priorities when the discussions started.
>>
>> During PTG there was quite the interest by multiple people to become
>> involved and attempt to further Boot from Volume forward this cycle. I
>> would like to move to having a weekly meeting with the focus of
>> integrating this functionality into ironic, much like we did with the
>> tighter neutron integration.
>>
>
> +1
>
>
>> I have two reasons for proposing a new meeting:
>>
>> * Detailed technical status updates and planning/co-ordination would
>> need to take place. This would functionally be noise to a large number
>> of contributors in the ironic community.
>>
>> * Many of these details would need to be worked out prior to the
>> first part of the existing ironic meeting for the weekly status
>> update. The update being a summary of the status of each sub team.
>>
>> With that having been said, I'm curious if we could re-use the
>> ironic-neutron meeting time slot [0] for this effort.  That meeting
>> was cancelled just after the first of this year [1].  In its place I
>> think we should have a general purpose integration meeting, that could
>> be used as a standing meeting, specifically reserved at this time for
>> Boot from Volume work, but could also be used by any integration effort
>> that needs time to sync-up in advance of the existing meeting.
>>
>
> I'm in favor of a generic meeting. I'm not sure it's worth taking the
> whole 1 hour slot though. I think that the BFV one might only take half of
> slot, where the second half maybe taken by another integration subteam.
>
> Speaking of which, I propose the networking subteam to start (continue?)
> having their own meeting as well. There is a lot of stuff going on that is
> hard to catch up with.
>

We stopped that meeting because it had turned into a status update. We just
rolled it back up into the usual subteam updates.

I'd like to leave it to the folks working on that to decide if they need a
meeting to keep up with everything.

// jim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] PBR 2.0.0 release *may* cause gate failures

2017-03-01 Thread Tim Burke
On Mar 1, 2017, at 4:57 AM, Sean Dague  wrote:

> I also wonder if we can grant the release team +2 permissions on
> everything in OpenStack so that fixes like this can be gotten in quickly
> without having to go chase a bunch of teams.

I would be very much opposed to this. Isn't this why we have cross-project 
liaisons, defaulting back to the PTLs? Ultimately, project teams need to own 
the change and its repercussions. As we've seen with things like eventlet, 
requirements changes have ripple effects, and the project teams are best-suited 
to foresee those consequences within their own domain.

Tim
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] state of the stable/mitaka branches

2017-03-01 Thread Ihar Hrachyshka
On Wed, Mar 1, 2017 at 11:15 AM, Pavlo Shchelokovskyy
 wrote:
> With all the above, the question is should we really fix the gates for the
> mitaka branch now? According to OpenStack release page [1] the Mitaka
> release will reach end-of-life on April 10, 2017.

Yes we should. It's part of the contract with consumers that rely on
follows:stable-policy tag owned by Ironic and other projects.

Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] PBR 2.0.0 release *may* cause gate failures

2017-03-01 Thread Davanum Srinivas
On Wed, Mar 1, 2017 at 2:12 PM, Doug Hellmann  wrote:
> Excerpts from Doug Hellmann's message of 2017-03-01 14:04:19 -0500:
>> Excerpts from Andreas Jaeger's message of 2017-03-01 19:50:51 +0100:
>> > On 2017-03-01 17:13, Doug Hellmann  wrote:
>> > > Excerpts from Andreas Jaeger's message of 2017-03-01 16:22:24 +0100:
>> > >> On 2017-03-01 06:26, Tony Breeds  wrote:
>> > >>> Hi All,
>> > >>> Earlier today the release team tagged PBR 2.0.0.  The reason for 
>> > >>> the major
>> > >>> version bump is because warnerrors has been removed in favor of
>> > >>> warning-is-error from sphinx >= 1.5.0.
>> > >>
>> > >> Can we change the sphinx==1.3.6 line in upper-constraints now?
>> > >
>> > > We're currently running into failures updating requirements because of
>> > > the pbr cap in several libraries.
>> > >
>> > > I see requestsexceptions, sqlalchemy-migrate, yaql, fairy-slipper,
>> > > pypowervm, gnocchiclient, reno, and aodhclient mentioned in
>> > > http://logs.openstack.org/88/439588/2/check/gate-requirements-tox-py27-check-uc-ubuntu-xenial/97625f3/console.html
>> > > for example.
>> > >
>> > > I'm working on reno right now.
>> >
>> > thanks
>> >
>> > fairy-slipper is retired, let's remove it -
>> > https://review.openstack.org/439769
>> >
>> > Andreas
>>
>> Based on the nature of the failures, I think we're going to have to get
>> releases of all of the other projects up, and then prepare one patch in
>> the requirements repo to update constraints all at one time.
>>
>> Doug
>
> Most of the projects involved are trying to take action, but I haven't
> seen any updates on the pypowervm pull request in
> https://github.com/powervm/pypowervm/pull/1
>
> Can someone contact that team and ask them to merge it and push a new
> release?

Done. over on #openstack-nova

[11:40:49]  thorst : Anyone around to merge this PR and cut a
release for pypowervm? https://github.com/powervm/pypowervm/pull/1
[11:41:20]  dims: Yeah, we're working on getting a new rev of
that.  It'd be pypowervm 1.0.0.4.1 to take in the new req.  adreznec
is working on it.
[11:41:32]  great thanks thorst and adreznec
[11:41:42]  dims: thorst Yep, just working on that now
[11:41:44]  thx for letting us know!


>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][swg] per-project "Business only" moderated mailing lists

2017-03-01 Thread Doug Hellmann
Excerpts from Clint Byrum's message of 2017-03-01 10:03:34 -0800:
> Excerpts from Jonathan Bryce's message of 2017-03-01 11:49:38 -0600:
> > 
> > > On Feb 28, 2017, at 4:25 AM, Thierry Carrez  wrote:
> > > 
> > > Clint Byrum wrote:
> >  So, I'll ask more generally: do you believe that the single 
> >  openstack-dev
> >  mailing list is working fine and we should change nothing? If not, what
> >  problems has it created for you? 
> > >>> 
> > >>> As a person who sends a lot of process-driven email to this list,
> > >>> it is not working for my needs to communicate with others.
> > >>> 
> > >>> Over the past few cycles when I was the release PTL, I always had
> > >>> a couple of PTLs say there was too much email on this list for them
> > >>> to read, and that they had not read my instructions for managing
> > >>> releases. That resulted in us having to train folks at the last
> > >>> minute, remind them of deadlines, deal with them missing deadlines,
> > >>> and otherwise increased the release team's workload.
> > >>> 
> > >>> It is possible the situation will improve now that the automation
> > >>> work is mostly complete and we expect to see fewer significant
> > >>> changes in the release workflow. That still leaves quite a few
> > >>> people regularly surprised by deadlines, though.
> > >> 
> > >> The problem above is really the crux of it. Whether or not you can keep
> > >> up with the mailing list can be an unknown, unknown. Even now, those
> > >> who can't actually handle the mailing list traffic are in fact likely
> > >> missing this thread about whether or not people can handle the mailing
> > >> list traffic (credit fungi for pointing out this irony to me on IRC).
> > > 
> > > Right, the main issue (for me) is that there is no unique way to reach
> > > out to people that you're 100% sure they will read. For some the miracle
> > > solution will be a personal email, for some it will be an IRC ping, for
> > > some it will be a Twitter private message. There is no 100% sure
> > > solution, and everyone prioritizes differently. The burden of reaching
> > > out and making sure the message was acknowledged is on the person who
> > > sends the message, and that just doesn't scale past 50 teams. That
> > > includes release team communications to PTLs, but also things like
> > > election nomination deadlines and plenty of other things.
> > 
> > Clint asked if there were specific issues in the workflow, and one item 
> > both Thierry and Doug have identified is reaching ALL project leaders 
> > consistently with important notifications or requests. I have also seen 
> > some working group leaders and Foundation staff experience similar 
> > difficulties. Perhaps creating a business-oriented list for PTLs similar to 
> > docs/infra that could help with that particular problem.
> 
> Agreed. I think I may have even missed the crux of the reason for
> the business lists, which was more "how do we get an important signal
> through".
> 
> IMO this is where the announcement list would be useful. But that has
> become something else entirely with release notifications (or it hasn't,
> I don't know, I dropped it). But generally projects do have a low
> traffic higher-priority list for announcements.
> 

Release announcements have moved to a separate list (creatively named
"release-announce" --
http://lists.openstack.org/cgi-bin/mailman/listinfo/release-announce).

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] state of the stable/mitaka branches

2017-03-01 Thread Jay Faulkner

> On Mar 1, 2017, at 11:15 AM, Pavlo Shchelokovskyy 
>  wrote:
> 
> Greetings ironicers,
> 
> I'd like to discuss the state of the gates in ironic and other related 
> projects for stable/mitaka branch.
> 
> Today while making some test patches to old branches I discovered the 
> following problems:
> 
> python-ironicclient/stable/mitaka
> All unit-test-like jobs are broken due to not handling upper constraints. 
> Because of it a newer than actually supported python-openstackclient is 
> installed, which already lacks some modules python-ironicclient tries to 
> import (these were moved to osc-lib).
> I've proposed a patch that copies current way of dealing with upper 
> constraints in tox envs [0], gates are passing.
> 
> ironic/stable/mitaka
> While not actually being gated on, using virtualbmc+ipmitool drivers is 
> broken. The reason is again related to upper constraints as what happens is 
> old enough version of pyghmi (from mitaka upper constraints) is installed 
> with most recent virtualbmc (not in upper constraints), and those versions 
> are incompatible.
> This highlights a question whether we should propose virtualbmc to upper 
> constraints too to avoid such problems in the future.
> Meanwhile a quick fix would be to hard-code the supported virtualbmc version 
> in the ironic's devstack plugin for mitaka release.
> Although not strictly supported for Mitaka release, I'd like that 
> functionality to be working on stable/mitaka gates to test for upcoming 
> removal of *_ssh drivers.
> 
> I did not test other projects yet.
> 

I can attest jobs are broken for stable/mitaka on ironic-lib as well — our jobs 
build docs unconditionally, and ironic-lib had no docs in Mitaka.

-
Jay Faulkner
OSIC

> With all the above, the question is should we really fix the gates for the 
> mitaka branch now? According to OpenStack release page [1] the Mitaka release 
> will reach end-of-life on April 10, 2017.
> 
> [0] https://review.openstack.org/#/c/439742/
> [1] https://releases.openstack.org/#release-series
> 
> Cheers,
> Dr. Pavlo Shchelokovskyy
> Senior Software Engineer
> Mirantis Inc
> www.mirantis.com
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] state of the stable/mitaka branches

2017-03-01 Thread Pavlo Shchelokovskyy
Greetings ironicers,

I'd like to discuss the state of the gates in ironic and other related
projects for stable/mitaka branch.

Today while making some test patches to old branches I discovered the
following problems:

python-ironicclient/stable/mitaka
All unit-test-like jobs are broken due to not handling upper constraints.
Because of this, a newer python-openstackclient than is actually supported gets
installed, and it already lacks some modules python-ironicclient tries to
import (these were moved to osc-lib).
I've proposed a patch that copies the current way of dealing with upper
constraints in tox envs [0]; the gates are passing.
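
For anyone curious, the pattern being copied is essentially an install_command
override in tox.ini along these lines (a sketch; the exact URL and branch
handling should follow whatever the requirements repo documents for mitaka):

  [testenv]
  install_command =
      pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/mitaka} {opts} {packages}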

ironic/stable/mitaka
While not actually being gated on, using the virtualbmc+ipmitool drivers is
broken. The reason is again related to upper constraints: an old version of
pyghmi (from the mitaka upper constraints) is installed alongside the most
recent virtualbmc (which is not in upper constraints), and those versions
are incompatible.
This raises the question of whether we should propose virtualbmc for upper
constraints too, to avoid such problems in the future.
Meanwhile, a quick fix would be to hard-code the supported virtualbmc
version in ironic's devstack plugin for the mitaka release.
Although not strictly supported for the Mitaka release, I'd like that
functionality to be working on the stable/mitaka gates to test the upcoming
removal of the *_ssh drivers.
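
A quick fix of that sort would presumably be a one-line pin where the plugin
installs virtualbmc (a sketch; pip_install is devstack's helper function, but
the exact location and the version number here are purely illustrative and
would need to match what mitaka's pyghmi actually works with):

  # in ironic's devstack plugin, where virtualbmc gets installed
  pip_install "virtualbmc<1.0"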

I did not test other projects yet.

With all the above, the question is should we really fix the gates for the
mitaka branch now? According to OpenStack release page [1] the Mitaka
release will reach end-of-life on April 10, 2017.

[0] https://review.openstack.org/#/c/439742/
[1] https://releases.openstack.org/#release-series

Cheers,
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] PBR 2.0.0 release *may* cause gate failures

2017-03-01 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2017-03-01 14:04:19 -0500:
> Excerpts from Andreas Jaeger's message of 2017-03-01 19:50:51 +0100:
> > On 2017-03-01 17:13, Doug Hellmann  wrote:
> > > Excerpts from Andreas Jaeger's message of 2017-03-01 16:22:24 +0100:
> > >> On 2017-03-01 06:26, Tony Breeds  wrote:
> > >>> Hi All,
> > >>> Earlier today the release team tagged PBR 2.0.0.  The reason for 
> > >>> the major
> > >>> version bump is because warnerrors has been removed in favor of
> > >>> warning-is-error from sphinx >= 1.5.0.
> > >>
> > >> Can we change the sphinx==1.3.6 line in upper-constraints now?
> > > 
> > > We're currently running into failures updating requirements because of
> > > the pbr cap in several libraries.
> > > 
> > > I see requestsexceptions, sqlalchemy-migrate, yaql, fairy-slipper,
> > > pypowervm, gnocchiclient, reno, and aodhclient mentioned in
> > > http://logs.openstack.org/88/439588/2/check/gate-requirements-tox-py27-check-uc-ubuntu-xenial/97625f3/console.html
> > > for example.
> > > 
> > > I'm working on reno right now.
> > 
> > thanks
> > 
> > fairy-slipper is retired, let's remove it -
> > https://review.openstack.org/439769
> > 
> > Andreas
> 
> Based on the nature of the failures, I think we're going to have to get
> releases of all of the other projects up, and then prepare one patch in
> the requirements repo to update constraints all at one time.
> 
> Doug

Most of the projects involved are trying to take action, but I haven't
seen any updates on the pypowervm pull request in
https://github.com/powervm/pypowervm/pull/1

Can someone contact that team and ask them to merge it and push a new
release?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] PBR 2.0.0 release *may* cause gate failures

2017-03-01 Thread Doug Hellmann
Excerpts from Andreas Jaeger's message of 2017-03-01 19:50:51 +0100:
> On 2017-03-01 17:13, Doug Hellmann  wrote:
> > Excerpts from Andreas Jaeger's message of 2017-03-01 16:22:24 +0100:
> >> On 2017-03-01 06:26, Tony Breeds  wrote:
> >>> Hi All,
> >>> Earlier today the release team tagged PBR 2.0.0.  The reason for the 
> >>> major
> >>> version bump is because warnerrors has been removed in favor of
> >>> warning-is-error from sphinx >= 1.5.0.
> >>
> >> Can we change the sphinx==1.3.6 line in upper-constraints now?
> > 
> > We're currently running into failures updating requirements because of
> > the pbr cap in several libraries.
> > 
> > I see requestsexceptions, sqlalchemy-migrate, yaql, fairy-slipper,
> > pypowervm, gnocchiclient, reno, and aodhclient mentioned in
> > http://logs.openstack.org/88/439588/2/check/gate-requirements-tox-py27-check-uc-ubuntu-xenial/97625f3/console.html
> > for example.
> > 
> > I'm working on reno right now.
> 
> thanks
> 
> fairy-slipper is retired, let's remove it -
> https://review.openstack.org/439769
> 
> Andreas

Based on the nature of the failures, I think we're going to have to get
releases of all of the other projects up, and then prepare one patch in
the requirements repo to update constraints all at one time.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] PBR 2.0.0 release *may* cause gate failures

2017-03-01 Thread Andreas Jaeger
On 2017-03-01 17:13, Doug Hellmann  wrote:
> Excerpts from Andreas Jaeger's message of 2017-03-01 16:22:24 +0100:
>> On 2017-03-01 06:26, Tony Breeds  wrote:
>>> Hi All,
>>> Earlier today the release team tagged PBR 2.0.0.  The reason for the 
>>> major
>>> version bump is because warnerrors has been removed in favor of
>>> warning-is-error from sphinx >= 1.5.0.
>>
>> Can we change the sphinx==1.3.6 line in upper-constraints now?
> 
> We're currently running into failures updating requirements because of
> the pbr cap in several libraries.
> 
> I see requestsexceptions, sqlalchemy-migrate, yaql, fairy-slipper,
> pypowervm, gnocchiclient, reno, and aodhclient mentioned in
> http://logs.openstack.org/88/439588/2/check/gate-requirements-tox-py27-check-uc-ubuntu-xenial/97625f3/console.html
> for example.
> 
> I'm working on reno right now.

thanks

fairy-slipper is retired, let's remove it -
https://review.openstack.org/439769

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Status of Zuul v3

2017-03-01 Thread Robyn Bergeron
Greetings!

Welcome to the first-ever Zuul v3 status update. :)

This periodic update is primarily intended as a way to keep
contributors to the OpenStack community apprised of Zuul v3 project
status, including future changes and milestones on our way to use in
production. Additionally, the numerous existing and future users of
Zuul outside of the OpenStack community may find this update useful as
a way to track Zuul v3 development status. This status update email is
anticipated to be sent on an approximately bi-weekly basis, though
more frequent updates may occur as we approach in-production dates.

== Wait, what’s going on? ==

Over the coming months, OpenStack’s infrastructure team will be
migrating Zuul, OpenStack’s project gating and automation system, from
Zuul v2.5 to Zuul v3. There are significant changes associated with
this migration, particularly with regards to how jobs are written, as
one of those changes is the move from JJB to Ansible. Detailed
information about Zuul v3, including specs, motivation, and community
processes, is listed below the “project status and updates” portion of
this mail.

Significant dates, milestones, and changes directly affecting
contributors in the OpenStack community will be announced on the
appropriate mailing lists for OpenStack (primarily openstack-dev and
openstack-infra), as well as in future versions of this project status
update.

And for those wondering:
* Zuul v3 is NOT YET USED IN PRODUCTION. By anyone. OpenStack’s
infrastructure team has a test deployment up and running for use in
during development (http://zuulv3-dev.openstack.org). OpenStack’s
production version of Zuul currently uses Zuul 2.5.
* No, you will not have to rewrite all your jobs. We will be
automagically converting existing jobs into Ansible jobs, and
collaborating directly with project teams in OpenStack in an orderly
fashion as we approach the move to in-production status. More on this
below.
* Yes, this email will be less verbose in the future. :)

== Zuul v3 project status and updates ==

** It’s alive! **

Last week on Tuesday at the inaugural OpenStack PTG (Feb. 20-24,
2017), for the very first time, Zuul v3 was brought online in a test
environment, and a simple hello world job was successfully executed on
a single node. That job was immediately followed by a successful
“hello worlds” job on multiple nodes. That’s right: Zuul v3 went from
single-node to multi-node functionality in a matter of minutes! There
was much rejoicing! A super exciting time for the folks who have been
working towards this moment. (By the way: if you, fine reader, know
anyone who has been hard at work on Zuul v3, send them your
congratulations. It’s a Big Deal!)

Even though the Infra team’s official days at the PTG were Monday and
Tuesday, folks continued hacking on Zuul throughout the rest of the
week, getting general tox-based jobs running, and enabling the ability
to collect and publish logs.

Monty Taylor, known to many of us as mordred, detailed these
accomplishments, and outlined the bigger picture of the future of
Zuul, in a blog post this week. For the extra-curious, since folks
have asked, "What does a job look when we are using Ansible?" -- he
has also graciously provided some example code snippets. Read more
here:  http://inaugust.com/posts/whats-coming-zuulv3.html

Also,

** FULL DISCLAIMER **

This may be obvious to some readers, given that this is a progress and
status update email, but for all the aspirational, daring folks out
there:

At this point, Zuul v3 is NOT yet secure or stable. You should not,
not, please, DO NOT ATTEMPT TO RUN IT IN PRODUCTION YOURSELF yet. It
is still under heavy development, and v3 is not currently used in
production for OpenStack. Zuul v3 content (jobs, etc.) should not
undergo heavy creation as syntax is still under development and
expected to change between now and release date.

We know you’re excited. We are too. Just be patient. The time will come. :)

Speaking of when that time will come, many folks have asked: When is
that time actually coming?

Commitment is hard, you see, especially when making it more awesome
for people to commit code :) The infra team expects this migration to
happen in the next few months. In the meantime: please know that the
infrastructure team intends to communicate milestones, information,
and significant dates of change to the openstack-dev mailing list, as
they would any other significant project infrastructure-related
changes. As we move closer to rolling out Zuul v3, significant dates
to be aware of, as well as reminders, will be announced broadly, and
we will be working with individual project teams to assist them in
migration, including the scripted conversion of existing jobs from JJB
to Ansible.  The vast quantity of documentation in OpenStack that
directly or indirectly relates to Zuul, including creating guides for
writing future jobs, will also be updated.

Upcoming tasks and focus:
* Config syntax, including reporting 

Re: [openstack-dev] [docs][release][ptl] Adding docs to the release schedule

2017-03-01 Thread Alexandra Settle


On 3/1/17, 5:58 PM, "John Dickinson"  wrote:



On 1 Mar 2017, at 9:52, Alexandra Settle wrote:

> Hi everyone,
>
> I would like to propose that we introduce a “Review documentation” period 
on the release schedule.
>
> We would formulate it as a deadline, so that it fits in the schedule and 
making it coincide with the RC1 deadline.
>
> For projects that are not following the milestones, we would translate 
this new inclusion literally, so if you would like your project to be 
documented at docs.o.o, then doc must be introduced and reviewed one month 
before the branch is cut.

Which docs are these? There are several different sets of docs that are 
hosted on docs.o.o that are managed within a project repo. Are you saying those 
won't get pushed to
docs.o.o if they are patched within a month of the cycle release?

The only docs published on the docs.o.o site that are managed 
in project-specific repos are the project-specific installation guides. That 
management is entirely up to the teams themselves, but I would like to push for 
the integration of a “documentation review” period to ensure that those teams 
are reviewing their docs in their own tree. 

This is a preferential suggestion, not a demand. I cannot make you review your 
documentation at any given period.

The ‘month before’ that I refer to would be for introduction of documentation 
and a review period. I will not stop any documentation being pushed to the repo 
unless, of course, it is untested and breaks the installation process. 


>
> In the last week since we released Ocata, it has become increasingly 
apparent that the documentation was not updated from the development side. We 
were not aware of a lot of new enhancements, features, or major bug fixes for 
certain projects. This means we have released with incorrect/out-of-date 
documentation. This is not only an unfortunately bad reflection on our team, 
but on the project teams themselves.
>
> The new inclusion to the schedule may seem unnecessary, but a lot of 
people rely on this and the PTL drives milestones from this schedule.
>
> From our side, I endeavor to ensure our release managers are working 
harder to ping and remind doc liaisons and PTLs to ensure the documentation is 
appropriately updated and working to ensure this does not happen in the future.
>
> Thanks,
>
> Alex


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][swg] per-project "Business only" moderated mailing lists

2017-03-01 Thread Clint Byrum
Excerpts from Jonathan Bryce's message of 2017-03-01 11:49:38 -0600:
> 
> > On Feb 28, 2017, at 4:25 AM, Thierry Carrez  wrote:
> > 
> > Clint Byrum wrote:
>  So, I'll ask more generally: do you believe that the single openstack-dev
>  mailing list is working fine and we should change nothing? If not, what
>  problems has it created for you? 
> >>> 
> >>> As a person who sends a lot of process-driven email to this list,
> >>> it is not working for my needs to communicate with others.
> >>> 
> >>> Over the past few cycles when I was the release PTL, I always had
> >>> a couple of PTLs say there was too much email on this list for them
> >>> to read, and that they had not read my instructions for managing
> >>> releases. That resulted in us having to train folks at the last
> >>> minute, remind them of deadlines, deal with them missing deadlines,
> >>> and otherwise increased the release team's workload.
> >>> 
> >>> It is possible the situation will improve now that the automation
> >>> work is mostly complete and we expect to see fewer significant
> >>> changes in the release workflow. That still leaves quite a few
> >>> people regularly surprised by deadlines, though.
> >> 
> >> The problem above is really the crux of it. Whether or not you can keep
> >> up with the mailing list can be an unknown, unknown. Even now, those
> >> who can't actually handle the mailing list traffic are in fact likely
> >> missing this thread about whether or not people can handle the mailing
> >> list traffic (credit fungi for pointing out this irony to me on IRC).
> > 
> > Right, the main issue (for me) is that there is no unique way to reach
> > out to people that you're 100% sure they will read. For some the miracle
> > solution will be a personal email, for some it will be an IRC ping, for
> > some it will be a Twitter private message. There is no 100% sure
> > solution, and everyone prioritizes differently. The burden of reaching
> > out and making sure the message was acknowledged is on the person who
> > sends the message, and that just doesn't scale past 50 teams. That
> > includes release team communications to PTLs, but also things like
> > election nomination deadlines and plenty of other things.
> 
> Clint asked if there were specific issues in the workflow, and one item both 
> Thierry and Doug have identified is reaching ALL project leaders consistently 
> with important notifications or requests. I have also seen some working group 
> leaders and Foundation staff experience similar difficulties. Perhaps 
> creating a business-oriented list for PTLs similar to docs/infra that could 
> help with that particular problem.

Agreed. I think I may have even missed the crux of the reason for
the business lists, which was more "how do we get an important signal
through".

IMO this is where the announcement list would be useful. But that has
become something else entirely with release notifications (or it hasn't,
I don't know, I dropped it). But generally projects do have a low
traffic higher-priority list for announcements.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [docs][release][ptl] Adding docs to the release schedule

2017-03-01 Thread John Dickinson


On 1 Mar 2017, at 9:52, Alexandra Settle wrote:

> Hi everyone,
>
> I would like to propose that we introduce a “Review documentation” period on 
> the release schedule.
>
> We would formulate it as a deadline, so that it fits in the schedule and 
> making it coincide with the RC1 deadline.
>
> For projects that are not following the milestones, we would translate this 
> new inclusion literally, so if you would like your project to be documented 
> at docs.o.o, then doc must be introduced and reviewed one month before the 
> branch is cut.

Which docs are these? There are several different sets of docs that are hosted 
on docs.o.o that are managed within a project repo. Are you saying those won't 
get pushed to docs.o.o if they are patched within a month of the cycle release?


>
> In the last week since we released Ocata, it has become increasingly apparent 
> that the documentation was not updated from the development side. We were not 
> aware of a lot of new enhancements, features, or major bug fixes for certain 
> projects. This means we have released with incorrect/out-of-date 
> documentation. This is not only an unfortunately bad reflection on our team, 
> but on the project teams themselves.
>
> The new inclusion to the schedule may seem unnecessary, but a lot of people 
> rely on this and the PTL drives milestones from this schedule.
>
> From our side, I endeavor to ensure our release managers are working harder 
> to ping and remind doc liaisons and PTLs to ensure the documentation is 
> appropriately updated and working to ensure this does not happen in the 
> future.
>
> Thanks,
>
> Alex


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [docs][release][ptl] Adding docs to the release schedule

2017-03-01 Thread Alexandra Settle
Hi everyone,

I would like to propose that we introduce a “Review documentation” period on 
the release schedule.

We would formulate it as a deadline, so that it fits in the schedule, and make 
it coincide with the RC1 deadline.

For projects that are not following the milestones, we would translate this new 
inclusion literally, so if you would like your project to be documented at 
docs.o.o, then doc must be introduced and reviewed one month before the branch 
is cut.

In the last week since we released Ocata, it has become increasingly apparent 
that the documentation was not updated from the development side. We were not 
aware of a lot of new enhancements, features, or major bug fixes for certain 
projects. This means we have released with incorrect/out-of-date documentation. 
This is not only an unfortunately bad reflection on our team, but also on the 
project teams themselves.

The new inclusion to the schedule may seem unnecessary, but a lot of people 
rely on this and the PTL drives milestones from this schedule.

From our side, I will endeavor to have our release managers work harder to 
ping and remind doc liaisons and PTLs so that the documentation is 
appropriately updated, and to make sure this does not happen in the future.

Thanks,

Alex

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][swg] per-project "Business only" moderated mailing lists

2017-03-01 Thread Jonathan Bryce

> On Feb 28, 2017, at 4:25 AM, Thierry Carrez  wrote:
> 
> Clint Byrum wrote:
 So, I'll ask more generally: do you believe that the single openstack-dev
 mailing list is working fine and we should change nothing? If not, what
 problems has it created for you? 
>>> 
>>> As a person who sends a lot of process-driven email to this list,
>>> it is not working for my needs to communicate with others.
>>> 
>>> Over the past few cycles when I was the release PTL, I always had
>>> a couple of PTLs say there was too much email on this list for them
>>> to read, and that they had not read my instructions for managing
>>> releases. That resulted in us having to train folks at the last
>>> minute, remind them of deadlines, deal with them missing deadlines,
>>> and otherwise increased the release team's workload.
>>> 
>>> It is possible the situation will improve now that the automation
>>> work is mostly complete and we expect to see fewer significant
>>> changes in the release workflow. That still leaves quite a few
>>> people regularly surprised by deadlines, though.
>> 
>> The problem above is really the crux of it. Whether or not you can keep
>> up with the mailing list can be an unknown, unknown. Even now, those
>> who can't actually handle the mailing list traffic are in fact likely
>> missing this thread about whether or not people can handle the mailing
>> list traffic (credit fungi for pointing out this irony to me on IRC).
> 
> Right, the main issue (for me) is that there is no unique way to reach
> out to people that you're 100% sure they will read. For some the miracle
> solution will be a personal email, for some it will be an IRC ping, for
> some it will be a Twitter private message. There is no 100% sure
> solution, and everyone prioritizes differently. The burden of reaching
> out and making sure the message was acknowledged is on the person who
> sends the message, and that just doesn't scale past 50 teams. That
> includes release team communications to PTLs, but also things like
> election nomination deadlines and plenty of other things.

Clint asked if there were specific issues in the workflow, and one item both 
Thierry and Doug have identified is reaching ALL project leaders consistently 
with important notifications or requests. I have also seen some working group 
leaders and Foundation staff experience similar difficulties. Perhaps creating 
a business-oriented list for PTLs, similar to docs/infra, could help with 
that particular problem.

Jonathan



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Proposing Michal Gershenzon to the core team

2017-03-01 Thread Sharat Sharma
+1. Although I am not a core member, I’ve known her from IRC, got a chance 
to meet her in person recently, and I would like to see her in the core team.

Regards,
sharatss

From: lương hữu tuấn [mailto:tuantulu...@gmail.com]
Sent: Wednesday, March 01, 2017 10:38 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [mistral] Proposing Michal Gershenzon to the core 
team

+1 for me. Sorry, Michal, if you read this thread; I thought you were a man 
and called you Mike :d.

Tuan/Nokia

On Mar 1, 2017 6:04 PM, "Dougal Matthews" 
> wrote:


On 1 March 2017 at 16:47, Renat Akhmerov 
> wrote:
Hi,

Based on the stats of Michal Gershenzon in Ocata cycle I’d like to promote her 
to the core team.
Michal works at Nokia CloudBand and being a CloudBand engineer she knows 
Mistral very well
as a user and behind the scenes helped find a lot of bugs and make countless 
improvements, especially in performance.

Overall, she is a deep thinker, cares about details, always has an unusual 
angle of view on any
technical problem. She is one of a few people that I’m aware of who I could 
call a Mistral expert.
She also participates in almost every community meeting in IRC.

In Ocata she improved her statistics pretty significantly (e.g. ~60 reviews 
although the cycle was
very short) and is keeping up the good pace now. Also, Michal is officially 
planning to allocate
more time for upstream development in Pike

I believe Michal would be a great addition for the Mistral core team.

Please let me know if you agree with that.

+1, I think Michal would be a great addition to the core team.


Thanks

[1] 
http://stackalytics.com/?module=mistral-group=ocata_id=michal-gershenzon

Renat Akhmerov
@Nokia


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] The OpenStack Summit is returning to Vancouver in May 2018

2017-03-01 Thread Allison Price
Hi everyone, 

Back by popular demand, the OpenStack Summit is returning to Vancouver, BC 
from May 21-24, 2018. 
Registration, sponsorship opportunities and more information for the 17th 
OpenStack Summit will be available in the upcoming months. 

Can’t wait until 2018? Brush up on your OpenStack skills in 2017 by registering 
to attend the OpenStack Summit Boston, May 8-11 and marking your 
calendar for the OpenStack Summit Sydney, November 6-8.

For news on upcoming OpenStack Summits, visit openstack.org/summit. 


Cheers,
Allison

Allison Price
OpenStack Foundation
alli...@openstack.org __
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Zuul v3 - What's Coming: What to expect with the Zuul v3 Rollout

2017-03-01 Thread Clark Boylan
Yes. It is also worth noting you can do this today too and some
projects/tests do.

Clark

On Wed, Mar 1, 2017, at 09:31 AM, Kosnik, Lubosz wrote:
> So did I understand that properly. There will be possibility to create
> real multi-node tests like with 3-4 nodes?
> 
> Cheers,
> Lubosz
> 
> On Feb 28, 2017, at 7:13 PM, joehuang
> > wrote:
> 
> So cool! Look forward to multi-node jobs as first class
> 
> Best Regards
> Chaoyi Huang (joehuang)
> 
> 
> From: Monty Taylor [mord...@inaugust.com]
> Sent: 01 March 2017 7:26
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] Zuul v3 - What's Coming: What to expect with the
>   Zuul v3 Rollout
> 
> Hi everybody!
> 
> This content can also be found at
> http://inaugust.com/posts/whats-coming-zuulv3.html - but I've pasted it
> in here directly because I know that some folks don't like clicking
> links.
> 
> tl;dr - At last week's OpenStack PTG, the OpenStack Infra team ran the
> first Zuul v3 job, so it's time to start getting everybody ready for
> what's coming
> 
> **Don't Panic!** Awesome changes are coming, but you are NOT on the hook
> for rewriting all of your project's gate jobs or anything crazy like
> that. Now grab a seat by the fire, pour yourself a drink while I spin a
> yarn about days gone by and days yet to come.
> 
> First, some background
> 
> The OpenStack Infra team has been hard at work for quite a while on a
> new version of Zuul (where by 'quite some time' I mean that Jim Blair
> and I had our first Zuul v3 design whiteboarding session in 2014). As
> you might be able to guess given the amount of time, there are some big
> things coming that will have a real and visible impact on the OpenStack
> community and beyond. Since we have a running Zuul v3 now [1], it seemed
> like the time to start getting folks up to speed on what to expect.
> 
> There is other deep-dive information on architecture and rationale if
> you're interested[2], but for now we'll focus on what's relevant for end
> users. We're also going to start sending out a bi-weekly "Status of Zuul
> v3" email to the
> openstack-dev@lists.openstack.org
> mailing list ... so
> stay tuned!
> 
> **Important Note** This post includes some code snippets - but v3 is
> still a work in progress. We know of at least one breaking change that
> is coming to the config format, so please treat this not as a tutorial,
> but as a conceptual overview. Syntax is subject to change.
> 
> The Big Ticket Items
> 
> While there are a bunch of changes behind the scenes, there are a
> reasonably tractable number of user-facing differences.
> 
> * Self-testing In-Repo Job Config
> * Ansible Job Content
> * First-class Multi-node Jobs
> * Improved Job Reuse
> * Support for non-OpenStack Code and Node Systems
> * and Much, Much More
> 
> Self-testing In-Repo Job Config
> 
> This is probably the biggest deal. There are a lot of OpenStack Devs
> (around 2k in Ocata) and a lot of repositories (1689) There a lot fewer
> folks on the project-config-core team who are the ones who review all of
> the job config changes (please everyone thank Andreas Jaeger next time
> you see him). That's not awesome.
> 
> Self-testing in-repo job config is awesome.
> 
> Many systems out there these days have an in-repo job config system.
> Travis CI has had it since day one, and Jenkins has recently added
> support for a Jenkinsfile inside of git repos. With Zuul v3, we'll have
> it too.
> 
> Once we roll out v3 to everyone, as a supplement to jobs defined in our
> central config repositories, each project will be able to add a
> zuul.yaml file to their own repo:
> 
> 
> - job:
>name: my_awesome_job
>nodes:
>  - name: controller
>label: centos-7
> 
> - project:
>name: openstack/awesome_project
>check:
>  jobs:
>- my_awesome_job
> 
> It's a small file, but there is a lot going on, so let's unpack it.
> 
> First we define a job to run. It's named my_awesome_job and it needs one
> node. That node will be named controller and will be based on the
> centos-7 base node in nodepool.
> 
> In the next section, we say that we want to run that job in the check
> pipeline, which in OpenStack is defined as the jobs that run when
> patchsets are proposed.
> 
> And it's also self-testing!
> 
> Everyone knows the fun game of writing a patch to the test jobs, getting
> it approved, then hoping it works once it starts running. With Zuul v3
> in-repo jobs, if there is a change to job definitions in a proposed
> patch, that patch will be tested with those changes applied. And since
> it's Zuul, Depends-On footers are honored as well - so iteration on
> getting a test job right becomes just like iterating on any other patch
> or sequence of patches.
> 
> Ansible Job Content
> 
> The job my_awesome_job isn't very 

[openstack-dev] Newton: not able to login via public key

2017-03-01 Thread Amit Uniyal
Hi all,

I have installed a Newton OpenStack and am not able to log in to machines via
private keys.

I followed this guide: https://docs.openstack.org/newton/install-guide-ubuntu/

Configure the metadata agent

The metadata agent provides configuration information such as credentials to
instances.

Edit the /etc/neutron/metadata_agent.ini file and complete the following
actions:

  In the [DEFAULT] section, configure the metadata host and shared secret:

  [DEFAULT]
  ...
  nova_metadata_ip = controller
  metadata_proxy_shared_secret = METADATA_SECRET

  Replace METADATA_SECRET with a suitable secret for the metadata proxy.




I think the region name should also be included here, so I tried adding

RegionName = RegionOne

and then even restarted the whole controller node (since it does not work by
only restarting the neutron metadata-agent service).
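For reference, the individual service restarts from the Ubuntu guide would be
roughly the following (assuming the standard Ubuntu service names):

service neutron-metadata-agent restart
service nova-api restart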


Another thing: when checking the output of 'neutron agent-list', I am not
getting any availability zone for the metadata agent. Is that fine?


Regards
Amit
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Zuul v3 - What's Coming: What to expect with the Zuul v3 Rollout

2017-03-01 Thread Kosnik, Lubosz
So did I understand that properly? There will be a possibility to create real 
multi-node tests, e.g. with 3-4 nodes?

Cheers,
Lubosz

On Feb 28, 2017, at 7:13 PM, joehuang 
> wrote:

So cool! Look forward to multi-node jobs as first class

Best Regards
Chaoyi Huang (joehuang)


From: Monty Taylor [mord...@inaugust.com]
Sent: 01 March 2017 7:26
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] Zuul v3 - What's Coming: What to expect with the   
Zuul v3 Rollout

Hi everybody!

This content can also be found at
http://inaugust.com/posts/whats-coming-zuulv3.html - but I've pasted it
in here directly because I know that some folks don't like clicking links.

tl;dr - At last week's OpenStack PTG, the OpenStack Infra team ran the
first Zuul v3 job, so it's time to start getting everybody ready for
what's coming

**Don't Panic!** Awesome changes are coming, but you are NOT on the hook
for rewriting all of your project's gate jobs or anything crazy like
that. Now grab a seat by the fire, pour yourself a drink while I spin a
yarn about days gone by and days yet to come.

First, some background

The OpenStack Infra team has been hard at work for quite a while on a
new version of Zuul (where by 'quite some time' I mean that Jim Blair
and I had our first Zuul v3 design whiteboarding session in 2014). As
you might be able to guess given the amount of time, there are some big
things coming that will have a real and visible impact on the OpenStack
community and beyond. Since we have a running Zuul v3 now [1], it seemed
like the time to start getting folks up to speed on what to expect.

There is other deep-dive information on architecture and rationale if
you're interested[2], but for now we'll focus on what's relevant for end
users. We're also going to start sending out a bi-weekly "Status of Zuul
v3" email to the 
openstack-dev@lists.openstack.org 
mailing list ... so
stay tuned!

**Important Note** This post includes some code snippets - but v3 is
still a work in progress. We know of at least one breaking change that
is coming to the config format, so please treat this not as a tutorial,
but as a conceptual overview. Syntax is subject to change.

The Big Ticket Items

While there are a bunch of changes behind the scenes, there are a
reasonably tractable number of user-facing differences.

* Self-testing In-Repo Job Config
* Ansible Job Content
* First-class Multi-node Jobs
* Improved Job Reuse
* Support for non-OpenStack Code and Node Systems
* and Much, Much More

Self-testing In-Repo Job Config

This is probably the biggest deal. There are a lot of OpenStack Devs
(around 2k in Ocata) and a lot of repositories (1689). There are a lot fewer
folks on the project-config-core team, who are the ones who review all of
the job config changes (please, everyone, thank Andreas Jaeger next time
you see him). That's not awesome.

Self-testing in-repo job config is awesome.

Many systems out there these days have an in-repo job config system.
Travis CI has had it since day one, and Jenkins has recently added
support for a Jenkinsfile inside of git repos. With Zuul v3, we'll have
it too.

Once we roll out v3 to everyone, as a supplement to jobs defined in our
central config repositories, each project will be able to add a
zuul.yaml file to their own repo:


- job:
   name: my_awesome_job
   nodes:
 - name: controller
   label: centos-7

- project:
   name: openstack/awesome_project
   check:
 jobs:
   - my_awesome_job

It's a small file, but there is a lot going on, so let's unpack it.

First we define a job to run. It's named my_awesome_job and it needs one
node. That node will be named controller and will be based on the
centos-7 base node in nodepool.

In the next section, we say that we want to run that job in the check
pipeline, which in OpenStack is defined as the jobs that run when
patchsets are proposed.

And it's also self-testing!

Everyone knows the fun game of writing a patch to the test jobs, getting
it approved, then hoping it works once it starts running. With Zuul v3
in-repo jobs, if there is a change to job definitions in a proposed
patch, that patch will be tested with those changes applied. And since
it's Zuul, Depends-On footers are honored as well - so iteration on
getting a test job right becomes just like iterating on any other patch
or sequence of patches.
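For example, a commit message footer that ties a job-config change to another
proposed change might look like this (the Change-Id below is made up):

Add my_awesome_job to the check pipeline

Depends-On: I0123456789abcdef0123456789abcdef01234567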

Ansible Job Content

The job my_awesome_job isn't very useful if it doesn't define any
content. That's done in the repo as well, in playbooks/my_awesome_job.yaml:


- hosts: controller
 tasks:
   - name: Run make tests
 shell: make distcheck

As previously mentioned, the job content is now defined in Ansible
rather than using our Jenkins Job Builder tool. This playbook is going
to run a task on a host called controller, which you may remember we

Re: [openstack-dev] [mistral] Proposing Michal Gershenzon to the core team

2017-03-01 Thread lương hữu tuấn
+1 for me. Sorry, Michal, if you read this thread: I thought you were a
man, which is why I called you Mike :d.

Tuan/Nokia

On Mar 1, 2017 6:04 PM, "Dougal Matthews"  wrote:

>
>
> On 1 March 2017 at 16:47, Renat Akhmerov  wrote:
>
>> Hi,
>>
>> Based on the stats of Michal Gershenzon in Ocata cycle I’d like to
>> promote her to the core team.
>> Michal works at Nokia CloudBand and being a CloudBand engineer she knows
>> Mistral very well
>> as a user and behind the scenes helped find a lot of bugs and make
>> countless number of
>> improvements, especially in performance.
>>
>> Overall, she is a deep thinker, cares about details, always has an
>> unusual angle of view on any
>> technical problem. She is one of a few people that I’m aware of who I
>> could call a Mistral expert.
>> She also participates in almost every community meeting in IRC.
>>
>> In Ocata she improved her statistics pretty significantly (e.g. ~60
>> reviews although the cycle was
>> very short) and is keeping up the good pace now. Also, Michal is
>> officially planning to allocate
>> more time for upstream development in Pike
>>
>> I believe Michal would be a great addition for the Mistral core team.
>>
>> Please let me know if you agree with that.
>>
>
> +1, I think Michal would be a great addition to the core team.
>
>
>>
>> Thanks
>>
>> [1] http://stackalytics.com/?module=mistral-group=oc
>> ata_id=michal-gershenzon
>>
>> Renat Akhmerov
>> @Nokia
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Proposing Michal Gershenzon to the core team

2017-03-01 Thread Dougal Matthews
On 1 March 2017 at 16:47, Renat Akhmerov  wrote:

> Hi,
>
> Based on the stats of Michal Gershenzon in Ocata cycle I’d like to promote
> her to the core team.
> Michal works at Nokia CloudBand and being a CloudBand engineer she knows
> Mistral very well
> as a user and behind the scenes helped find a lot of bugs and make
> countless number of
> improvements, especially in performance.
>
> Overall, she is a deep thinker, cares about details, always has an unusual
> angle of view on any
> technical problem. She is one of a few people that I’m aware of who I
> could call a Mistral expert.
> She also participates in almost every community meeting in IRC.
>
> In Ocata she improved her statistics pretty significantly (e.g. ~60
> reviews although the cycle was
> very short) and is keeping up the good pace now. Also, Michal is
> officially planning to allocate
> more time for upstream development in Pike
>
> I believe Michal would be a great addition for the Mistral core team.
>
> Please let me know if you agree with that.
>

+1, I think Michal would be a great addition to the core team.


>
> Thanks
>
> [1] http://stackalytics.com/?module=mistral-group=
> ocata_id=michal-gershenzon
>
> Renat Akhmerov
> @Nokia
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral] Proposing Michal Gershenzon to the core team

2017-03-01 Thread Deja, Dawid
+1

Michal, remember that with great power comes great responsibility :)

Thanks,
Dawid

On Wed, 2017-03-01 at 19:47 +0300, Renat Akhmerov wrote:
Hi,

Based on the stats of Michal Gershenzon in Ocata cycle I’d like to promote her 
to the core team.
Michal works at Nokia CloudBand and being a CloudBand engineer she knows 
Mistral very well
as a user and behind the scenes helped find a lot of bugs and make countless 
number of
improvements, especially in performance.

Overall, she is a deep thinker, cares about details, always has an unusual 
angle of view on any
technical problem. She is one of a few people that I’m aware of who I could 
call a Mistral expert.
She also participates in almost every community meeting in IRC.

In Ocata she improved her statistics pretty significantly (e.g. ~60 reviews 
although the cycle was
very short) and is keeping up the good pace now. Also, Michal is officially 
planning to allocate
more time for upstream development in Pike

I believe Michal would be a great addition for the Mistral core team.

Please let me know if you agree with that.

Thanks

[1] 
http://stackalytics.com/?module=mistral-group=ocata_id=michal-gershenzon

Renat Akhmerov
@Nokia


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Proposing Michal Gershenzon to the core team

2017-03-01 Thread Renat Akhmerov
Hi,

Based on the stats of Michal Gershenzon in the Ocata cycle [1], I’d like to
promote her to the core team. Michal works at Nokia CloudBand, and as a
CloudBand engineer she knows Mistral very well as a user; behind the scenes
she has helped find a lot of bugs and make countless improvements, especially
in performance.

Overall, she is a deep thinker, cares about details, and always has an unusual
angle of view on any technical problem. She is one of the few people I’m aware
of whom I could call a Mistral expert. She also participates in almost every
community meeting in IRC.

In Ocata she improved her statistics pretty significantly (e.g. ~60 reviews,
although the cycle was very short) and is keeping up the good pace now. Also,
Michal is officially planning to allocate more time for upstream development
in Pike.

I believe Michal would be a great addition to the Mistral core team.

Please let me know if you agree with that.

Thanks

[1] 
http://stackalytics.com/?module=mistral-group=ocata_id=michal-gershenzon
 


Renat Akhmerov
@Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] PBR 2.0.0 release *may* cause gate failures

2017-03-01 Thread Doug Hellmann
Excerpts from Andreas Jaeger's message of 2017-03-01 16:22:24 +0100:
> On 2017-03-01 06:26, Tony Breeds  wrote:
> > Hi All,
> > Earlier today the release team tagged PBR 2.0.0.  The reason for the 
> > major
> > version bump is because warnerrors has been removed in favor of
> > warning-is-error from sphinx >= 1.5.0.
> 
> Can we change the sphinx==1.3.6 line in upper-constraints now?

We're currently running into failures updating requirements because of
the pbr cap in several libraries.

I see requestsexceptions, sqlalchemy-migrate, yaql, fairy-slipper,
pypowervm, gnocchiclient, reno, and aodhclient mentioned in
http://logs.openstack.org/88/439588/2/check/gate-requirements-tox-py27-check-uc-ubuntu-xenial/97625f3/console.html
for example.

I'm working on reno right now.
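For anyone following along, the fix in each of these projects is typically a
one-line requirements.txt change that drops the upper bound; the exact version
bounds vary per project, so the lines below are only an illustration:

pbr>=1.6,<2.0   # before: capped
pbr>=1.8        # after: cap removed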

Doug

> 
> Andreas
> 
> > It seems that several projects both inside and outside OpenStack have
> > capped pbr <2.0.0 so we can't actually use this release yet.  The 
> > requirements
> > team will work with all projects to remove the cap of pbr in those projects.
> > 
> > The good news is that projects using upper-constraints.txt are insulated 
> > from
> > this and shouldn't be affected[1].  However upper-constraints.txt isn't 
> > being used
> > by all projects and *those* projects will start seeing
> > 
> > ContextualVersionConflicts: (pbr 2.0.0 in gate logs.  It's recommended that
> > those projects add a local ban for pbr and associate it with:
> > https://bugs.launchpad.net/openstack-requirements/+bug/1668848
> > 
> > Then once the situation is resolved we can unwind and remove the temporary 
> > caps.
> > 
> > Yours Tony.
> > 
> > [1] There is at least 1 corner case where the coverage job installed 
> > directly
> > from a git URL and therefore that wasn't protected.
> > 
> > 
> > 
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible] Monitoring script framework PoC

2017-03-01 Thread Major Hayden
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

Hey there,

One of the items discussed at the PTG[0] was the creation of some monitoring 
scripts in a framework that could monitor various OpenStack services, support 
services (RabbitMQ/Galera/etc), and some system basics. There's a spec 
proposed[1] that discusses the work in detail.

I've created a PoC repository[2] to demonstrate what the framework might look 
like.  It uses the 'click' Python module to automatically discover new plugins 
and integrate them into the framework.  This has some nice benefits:

  1) Minimal code to write to add a new check
  2) Minimal tests to write to add a new check
  3) All of the output formatting is handled/tested in the framework
  4) Argument/option handling is done in the framework

Kevin already dropped by and made a PR to improve some of the dynamic importing 
that makes the code easier to read.  I'd really love to get some feedback on it 
and see if it's useful for others.
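To give a rough idea of the shape of a check, here is an illustrative sketch
(not the actual monitorstack plugin API; the names are made up) of how small a
click-based plugin can be:

import click


@click.command(name='uptime')
def cli():
    """Report system uptime as a simple measurement."""
    with open('/proc/uptime') as handle:
        uptime_seconds = float(handle.read().split()[0])
    # The real framework would handle output formatting centrally.
    click.echo('uptime: {} seconds'.format(uptime_seconds))


if __name__ == '__main__':
    cli()

The framework itself would take care of discovering the command and rendering
its output in the requested format.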

[0] https://etherpad.openstack.org/p/osa-ptg-pike-monitoring
[1] https://review.openstack.org/#/c/436498/
[2] https://github.com/major/monitorstack

- --
Major Hayden
-BEGIN PGP SIGNATURE-
Version: GnuPG v2

iQIcBAEBCAAGBQJYtu8/AAoJEHNwUeDBAR+x3FMP/0f38v8zcBVvKFfo9AtkZVUY
tSDyXyb+3zelFq8U07DnkzvOA7nFNB2DY8SXyGxCIgzSXGfs/fzSlncKc485p02I
1B9ak462trvrX6nwL9CNYWhnmuGo4+6yVNtPpIf13YOfsVPqCf3ikc401WlVkpHY
DDQQLC3TzzYWJCkNMgV4dZhiO1yRKNLbHVL2hEc/oMWxRTau4CS5tmLESC/b4AzX
pIC6xkPN1CRNJCsxqg1dihzAMG49fDhBqsh+Ej2EUfsf2opI4Rzc92Nw74rj2F4y
baFDDm0tYfkPqekiuKHLHi1BZlZDxf36FHqpck3civW+RbUZxE3uyNilg7akPAyX
rlwVddx6kPInWiU5e4beZ7s43MZffIdcieKVsTh069OdB6Ls81S8ciKkRM/4Vd5z
coFAwnhofzur+uEvhb9HHbudv1rYFPLTA+ZmzRzGcxF/zC50664HCvNyNYhod61T
ZsuDruYEtaDjTQ2jyTXQncBAzZVJPilp9TuZEan4eb3bI8t1WpXb1ayjTkdxbw9P
CnxRmjlC7HgBF7K4BEZiM6eEEOl34iXEhkPPLrKy0oGMUssFupHgmRerQKpzUL4G
1Z1Qfm9WDMhJu1aZhsK5beHeizJyRsBMmq8YnTSJPfzPN78rKDz4AfcJeS73Yo4r
406anxjIB40AP+80zQ2E
=+8CG
-END PGP SIGNATURE-

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][docs] Is anyone interested in being the docs liaison for Nova?

2017-03-01 Thread Matt Riedemann
There is a need for a liaison from Nova for the docs team to help with 
compute-specific docs in the install guide and various manuals.


For example, we documented placement and cells v2 in the nova devref in 
Ocata but instructions on those aren't in the install guide, so the docs 
team is adding that here [1].


I'm not entirely sure what the docs liaison role consists of, but I 
assume it at least means attending docs meetings, helping to review docs 
patches that are related to nova, helping to alert the docs team of big 
changes coming in a release that will impact the install guide, etc.


From my point of view, I've historically pushed nova developers to document 
new features within the nova devref, since it was "closer to home" and could 
be tied to landing said feature in the nova tree; that way there was more 
oversight on the docs actually happening *somewhere*, rather than a promise 
to work them into the non-nova manuals, which a lot of the time was lip 
service and didn't actually happen once the feature was in. But there is 
still the need for the install guide as the first step to deploying nova, so 
we need to balance both things.


If no one else steps up for the docs liaison role, by default it lands 
on me, so I'd appreciate any help here.


[1] https://review.openstack.org/#/c/438328/

--

Thanks,

Matt Riedemann

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [designate] Re:add new features in designate-dashboard

2017-03-01 Thread Hayes, Graham
> From: *Saju M* >
> Date: Tue, Feb 28, 2017 at 11:43 PM
> Subject: add new features in designate-dashboard
> To: gra...@hayes.ie , gra...@managedit.ie
> , graham.ha...@hp.com
> 
> Cc: anu sree >
>
>
> Hi,
>
> We would like to start working on following blueprints. Is it ok to
> create blueprint for this task ?.
> Please approve it, So we can start working on it.
>
> https://blueprints.launchpad.net/designate/+spec/zone-export-support-in-designate-dashboard
> 

Sure, but I would personally like to see a spec about how you intend to 
implement them. We use the specs process in
github.com/openstack/designate-specs - it does not need to be very detailed,
just an overview of where the panels will be and what will be on each
form.

I also re-targeted them against the designate-dashboard [0] project.

> https://blueprints.launchpad.net/designate/+spec/zone-import-support-in-designate-dashboard
> 
>
> We are also planning to implement blacklists, TSIG keys, etc. in the dashboard.
> Can I create blueprints for these tasks?
> Blueprint or bug, which is suitable for these tasks?

These would be blueprints, again in the designate-dashboard project.

Thanks!

Graham

In the future, you will get more responses if you email the mailing 
list, and add '[designate]' to the subject - most of the developers have
filters that make sure we see these emails.

0 - https://blueprints.launchpad.net/designate-dashboard

> Thanks,
>
> Regards
> Saju Madhavan
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] PBR 2.0.0 release *may* cause gate failures

2017-03-01 Thread Andreas Jaeger
On 2017-03-01 06:26, Tony Breeds  wrote:
> Hi All,
> Earlier today the release team tagged PBR 2.0.0.  The reason for the major
> version bump is because warnerrors has been removed in favor of
> warning-is-error from sphinx >= 1.5.0.

Can we change the sphinx==1.3.6 line in upper-constraints now?

Andreas

> It seems that several projects both inside and outside OpenStack have
> capped pbr <2.0.0 so we can't actually use this release yet.  The requirements
> team will work with all projects to remove the cap of pbr in those projects.
> 
> The good news is that projects using upper-constraints.txt are insulated from
> this and shouldn't be affected[1].  However upper-constraints.txt isn't being 
> used
> by all projects and *those* projects will start seeing
> 
> ContextualVersionConflicts: (pbr 2.0.0 in gate logs.  It's recommended that
> those projects add a local ban for pbr and associate it with:
> https://bugs.launchpad.net/openstack-requirements/+bug/1668848
> 
> Then once the situation is resolved we can unwind and remove the temporary 
> caps.
> 
> Yours Tony.
> 
> [1] There is at least 1 corner case where the coverage job installed directly
> from a git URL and therefore that wasn't protected.
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][address-scope] Questions about l3 address scope

2017-03-01 Thread zhi
Hi, all.

I have some questions about l3 address scopes in neutron. I hope that someone
could give me some answers.

I set up a devstack environment and enabled the l3 address scope feature by
following the document [1]. After doing those steps, I can find some iptables
rules in the router namespace, like this:

root@devstack:~# iptables-save |grep neutron-l3-agent-scope
:neutron-l3-agent-scope - [0:0]
-A neutron-l3-agent-PREROUTING -j neutron-l3-agent-scope
-A neutron-l3-agent-scope -i qr-6d393225-2e -j MARK --set-xmark
0x401/0x
-A neutron-l3-agent-scope -i qr-d257abb8-e1 -j MARK --set-xmark
0x400/0x
-A neutron-l3-agent-scope -i qg-f64c7892-1d -j MARK --set-xmark
0x401/0x
:neutron-l3-agent-scope - [0:0]
-A neutron-l3-agent-FORWARD -j neutron-l3-agent-scope
-A neutron-l3-agent-scope -o qr-6d393225-2e -m mark ! --mark
0x401/0x -j DROP
-A neutron-l3-agent-scope -o qr-d257abb8-e1 -m mark ! --mark
0x400/0x -j DROP

What are these iptables rules used for? In my opinion, by reading these
rules I can get some information: any incoming traffic (on the qr and qg
devices) will be marked, and only traffic with the matching mark is accepted,
isn't it?

What is the purpose of the l3 address scope?

What benefit can we get from l3 address scopes?


Thanks
Zhi Chang

[1]:
https://docs.openstack.org/draft/networking-guide/config-address-scopes.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment][TripleO][kolla][ansible][fuel] Next steps for cross project collaboration

2017-03-01 Thread Andy McCrae
On 28 February 2017 at 08:25, Flavio Percoco  wrote:

> On 28/02/17 08:01 +, Jesse Pretorius wrote:
>
>> On 2/28/17, 12:52 AM, "Michał Jastrzębski"  wrote:
>>
>>I think instead of adding yet-another-irc-channel how about create
>>weekly meetings? We can rant in scheduled time and it probably will
>>get more attention
>>
>> Happy to meet, in fact I think it’ll be important for keeping things on
>> track – however weekly is too often. I think once a month at most is
>> perfectly fine.
>>
>
> Yes, monthly prolly better than weekly in this case (if we ever decide to
> have these meetings).


Agreed - monthly sounds like a good start, we can always see how it goes
and change up as required.
I think if the only change is that we have a WG added to the Wiki and a
[deployments] tag added for the ML,
then we can't really expect to change much or have a large impact.

I'd love to see this build some momentum and come up with useful outcomes,
which I think the PTG session
started really nicely. There are clearly quite a few common issues that we
can address better as a collective.

Andy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove] Weekly meeting today canceled

2017-03-01 Thread Amrith Kumar
My apologies for the short notice but I have a conflict at 1pm and since we
had the mid-cycle last week, things are still relatively fresh in everyone's
mind. So, I'm going to cancel this week's meeting and resume regular weekly
meetings next week.

In the interim, if you need something Trove related, the #openstack-trove
channel is your best bet.

-amrith

--
Amrith Kumar
amrith.ku...@gmail.com
+1-978-563-9590
GPG: 0x5e48849a9d21a29b



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Sahara] Pike PTG summary

2017-03-01 Thread Telles Nobrega
Hello Saharans and other interested parties, here I'm going to try to
summarize our discussions at the PTG and our main actions to continue
improving Sahara in this next cycle.

The whole etherpad can be found here: sahara-ptg-pike


Python 3.5 Migration
---
This is a community goal, and as such the Sahara team is working on meeting
it. We already have Python 3 support in our tests, and for the Pike release we
aim to have tempest and scenario tests in conformity with Python 3.5, but CI
may take a little longer to get working.

Control Plane API endpoints deployment via WSGI
-
This is also a community goal that we currently support, but we still need
to add scenario tests to validate it. Once the tests are done and passing, we
can announce our support.

API v2 Improvements
-
API v2 is a WIP that has been on Sahara's todo list for about two cycles now.
We aim to deliver as much as possible in the Pike release, so we made it
one of our High Priority features. This work is split into several
major features and some smaller changes. Our goal is to deliver at least 3
of the major features in Pike and have a number of the smaller changes
delivered. We can't guarantee that it will all be available in Pike, but we
want to have it at a stage that allows us to finish it up, test it, and
deliver it in Queens.

Plugins Updates
-
Keeping plugins up to date is our major priority, to allow users to have the
best services running. In Pike we will continue this work and will update
Storm, Spark and CDH to their newest versions, and also deprecate CDH 5.5.0,
CDH 5.7.0 and Storm 0.9.2.

Sahara Tests
---
The creation of an API for sahara-scenario was proposed in order to
allow integration with other frameworks. This work is not high priority, but
our idea is to implement a base class that allows sahara-scenario to be
instantiated and run.

It was also suggested that we auto-upload images to a CDN. This is low
priority work, but a very important piece that will allow users to have easier
access to fully working Sahara images. The goal is to have a monthly job to
create images and upload them.

Another proposed topic was integration of Manila tests at the gate. We found
that, since the gate doesn't work with real plugins, this might be an issue,
and suggested that it could be tested in the Jenkins gate using the Spark or
Vanilla plugin. Also, we need to take a look at how multinode devstack can be
set up at the gate in order to have more resources available.

Other Topics
-
S3 Datasource integration
---
We are going through a major refactoring of datasources to make them
more pluggable, and once this work is done we intend to add integration
with S3 datasources.

Allow admin to use Sahara API to query/manage all projects

The first step on this topic is to understand what powers an admin role should
have on an OpenStack cloud; we intend to allow it to query all clusters at
first and work on management later. We also intend to update our policy
implementation to use the policy-in-code feature.

Force delete cluster for Sahara database
---
We have some issues where a cluster sometimes gets into a limbo state: it
won't be deleted and it is just useless. We want to allow a forced removal
of it from the Sahara database, alongside a call to nova to remove its
instances.
This issue can be related to trusts, so we need to check that first and
decide what the best action to take is.
Also, a new state was suggested, DELETE_FAILED, to tell the user the real
state of the cluster.

Refactoring CDH plugin
---
The CDH plugin is currently one of our high-maintenance plugins and is a very
important one. Updating it today takes a lot of copy/paste work. We intend
to refactor the CDH plugin code to allow easier update/deprecation work in
the future. We are removing the code for versions 5.0.0, 5.3.0 and 5.4.0.

Here is our prioritization of goals for Pike:

   - High:
     - Keep plugins up to date
     - Refactoring of CDH
     - Land pluggability refactoring
     - Testing:
     - Python 3.5
     - WSGI goal
     - API v2
     - S3 datasource integration

   - Medium:
     - Manila testing and integration
     - [tosky] this may require uploading the images to a CDN?
     - Allow admin to query all projects (listing part)
     - API for sahara-scenario for framework integration

   - Low:
     - Uploading images to CDN



Thanks all,
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [ptls] PTG Team Photos!

2017-03-01 Thread Kendall Nelson
Hello :)

You should be able to download it from here:
https://m.flickr.com/#/photos/152419717@N06/sets/72157680602754246/

Let me know if there is anything else you need.

-Kendall

On Wed, Mar 1, 2017, 7:30 AM Telles Nobrega  wrote:

> Hi Kendall,
>
> can you send me the Sahara team photo?
>
> Thanks
>
> On Tue, Feb 21, 2017 at 11:43 AM Kendall Nelson 
> wrote:
>
> To be a little more specific about the location. It's just outside the
> Grand ballroom A. Close to the top of the staircase.
>
> - Kendall Nelson (diablo_rojo)
>
>
> On Wed, Feb 15, 2017, 1:24 PM Kendall Nelson 
> wrote:
>
> Hello All!
>
> We are excited to see you next week at the PTG and wanted to share
> that we will be taking team photos! Provided is a google sheet signup for
> the available time slots [1]. We will be providing time on Tuesday Morning/
> Afternoon and Thursday Morning/Afternoon to come as a team to get your
> photo taken. Slots are only ten minutes so its *important that everyone
> be on time*! If you are unable to view/edit the spreadsheet let me know
> and I will try to get you access or can fill in a slot for you.
>
> The location where we are taking the photos is on the 3rd floor, in the prefunction
> space in front of the Grand Ballroom (across the hall from Fandangles).
>
> See you next week!
>
> Thanks,
>
> -Kendall Nelson (diablo_rojo)
>
> [1]
> https://docs.google.com/spreadsheets/d/1bgwMDsUm37JgpksUJszoDWcoBMHciufXcGV3OYe5A-4/edit?usp=sharing
>
>
> __
>
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [picasso] First meeting on 7th of March

2017-03-01 Thread Emilien Macchi
On Tue, Feb 28, 2017 at 12:30 PM, Derek Schultz  wrote:
> Hello all,
>
> The Picasso team will be running our first meeting next Tuesday. All those
> interested in the project are welcome!
>
> For those of you not familiar with Picasso, it provides a platform for
> Functions as a Service (FaaS) on OpenStack [1].
>
> Tuesday, March 7th, 2017 Meeting Agenda:
> Starting at UTC 18:00
>
> 1. From Python to Go. (What Picasso needs from IronFunctions to implement
> multi-tenancy)
> 2. Blueprints [2]
> 3. Figure out best time slot for future meetings.
> 4. Roadmap discussion.
>
> How to join:
> http://slack.iron.io in the #openstack channel

I would recommend using IRC for consistency with other projects.
Nothing forces you to do so, unless you plan to apply to the Big Tent,
but using IRC would give you more visibility, since most OpenStack
folks are on IRC (and not always on Slack).

Good luck for your first meeting!

> Etherpad:  https://etherpad.openstack.org/p/picasso-first-meeting
>
> [1] https://wiki.openstack.org/wiki/Picasso
> [2] https://blueprints.launchpad.net/picasso
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Boot from Volume meeting?

2017-03-01 Thread Loo, Ruby
Hi Julia,

Thanks for asking!

I'm agnostic about whether the BFV meeting should use the same day/time slot 
the ironic-neutron meeting used. As long as the people who are/will be attending this 
meeting are fine with the date/time, I'm fine too :)

I would actually prefer that the date/times be chosen to accommodate the folks 
who intend to attend those meetings. I'm concerned about keeping/having a 
'generic' meeting time that doesn't work that well for the same folks all the 
time. If we 'set this in stone', new people may not feel like they could ask to 
change the day/time :-(

I did not take any poll so I may be bringing something up that no one cares 
about. E.g., I'm totally fine with the proposed date/time but I don't know if 
I'll be attending this or future meetings :)

--ruby

From: Julia Kreger 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, February 28, 2017 at 9:42 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [ironic] Boot from Volume meeting?

Greetings fellow ironic humanoids!

As many have known, I've been largely attempting to drive Boot from
Volume functionality in ironic over the past two years.  Largely, in a
slow incremental approach, which is in part due to how I perceived it
to best fit into the existing priorities when the discussions started.

During PTG there was quite the interest by multiple people to become
involved and attempt to further Boot from Volume forward this cycle. I
would like to move to having a weekly meeting with the focus of
integrating this functionality into ironic, much like we did with the
tighter neutron integration.

I have two reasons for proposing a new meeting:

* Detailed technical status updates and planning/co-ordination would
need to take place. This would functionally be noise to a large number
of contributors in the ironic community.

> * Many of these details would need to be worked out prior to the
first part of the existing ironic meeting for the weekly status
update. The update being a summary of the status of each sub team.

With that having been said, I'm curious if we could re-use the
ironic-neutron meeting time slot [0] for this effort.  That meeting
> was cancelled just after the first of this year [1].  In its place I
> think we should have a general purpose integration meeting that could
> be used as a standing meeting, specifically reserved at this time for
> Boot from Volume work, but which could also be used by any integration effort
> that needs time to sync up in advance of the existing meeting.

-Julia

[0] 
http://git.openstack.org/cgit/openstack-infra/irc-meetings/tree/meetings/ironic-neutron-integration-meeting.yaml
[1] http://lists.openstack.org/pipermail/openstack-dev/2017-January/109536.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [relmgt] Pike PTG recap

2017-03-01 Thread Thierry Carrez
Hi everyone,

The release management team gathered at the PTG on Monday afternoon,
then together with the stable and requirements teams on Tuesday
afternoon. Most of the time was spent reviewing the Pike plan and
priorities. In case you missed it, the Pike schedule was made official
and published at:

https://releases.openstack.org/pike/schedule.html

Release process changes
---

* The high priority for Pike is to fix the aclmanager tool (which we use
to set pre-release ACLs on stable/$series branches) so that it supports
editing the existing ACL in place rather than just adding a new one.

* Another Pike priority is to adjust deadlines for cycle-trailing
deliverables, so that their Feature Freeze is around RC1 time, and their
RC1 around release. The current setup (where only the final release is
moved two weeks after) prevents them from integrating late changes.

* We plan to create a "new project checklist" for things projects need
to do to align with release management when they join the big tent.

* In terms of release tracking, we decided we want the releases.o.o
website to cover all cycle-based deliverables, which means that they
should all go through the releases repository (that should already be
the case). For "independent" projects, we'd like the site to contain
accurate information about a deliverable, or no information at all.
Therefore projects that don't keep that information up to date will be
removed in order to avoid publishing wrong data.

* Minor changes in the process will be introduced to reduce the risk of
projects and libraries having no release come milestone-3, to encourage
libraries with versions < 1.0 to move to 1.0+, and better document which
changes are acceptable in pre-release branches.

Automation changes
--

* Release automation is now mostly in place. The priority here is to
port all release tools and automation to work under Python 3.

* Other changes include fixing the step in the branch script that tries
to fix the upper constraints url (and does not work on every repo), not
proposing constraint updates for pre-release versions, improving
validation for NPM projects, and allowing stable branches for tagless
projects like devstack or grenade.

Other
-

* The priority is to consolidate release tools that use the releases
repository data into the releases repository itself (rather than spreading
them over two repos).

* Beyond the upcoming 2.1 release, Reno will be mostly in maintenance
mode. The main missing feature would be to find a way to include release
notes in tarballs, but the exact approach is still being defined (and
may be solved at CI level rather than in-reno).

* We'll also investigate more deeply what "releasing" would look like
for Go artifacts (beyond distributing signed source code tarballs).


That is all I remember and could extract from the notes. Other items
looked like they belonged to the stable and requirements teams. If I
missed anything significant let me know :)

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][keystone] Pike PTG recap - quotas

2017-03-01 Thread Lance Bragstad
FWIW - There was a lengthy discussion in #openstack-dev yesterday regarding
this [0].


[0]
http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2017-02-28.log.html#t2017-02-28T17:39:48

On Wed, Mar 1, 2017 at 5:42 AM, John Garbutt  wrote:

> On 27 February 2017 at 21:18, Matt Riedemann  wrote:
> > We talked about a few things related to quotas at the PTG, some in
> > cross-project sessions earlier in the week and then some on Wednesday
> > morning in the Nova room. The full etherpad is here [1].
> >
> > Counting quotas
> > ---
> >
> > Melanie hit a problem with the counting quotas work in Ocata with
> respect to
> > how to handle quotas when the cell that an instance is running in is
> down.
> > The proposed solution is to track project/user ID information in the
> > "allocations" table in the Placement service so that we can get
> allocation
> > information for quota usage from Placement rather than the cell. That
> should
> > be a relatively simple change to move this forward and hopefully get the
> > counting quotas patches merged by p-1 so we have plenty of burn-in time
> for
> > the new quotas code.
> >
> > Centralizing limits in Keystone
> > ---
> >
> > This actually came up mostly during the hierarchical quotas discussion on
> > Tuesday which was a cross-project session. The etherpad for that is here
> > [2]. The idea here is that Keystone already knows about the project
> > hierarchy and can be a central location for resource limits so that the
> > various projects, like nova and cinder, don't have to have a similar data
> > model and API for limits, we can just make that common in Keystone. The
> > other projects would still track resource usage and calculate when a
> request
> > is over the limit, but the hope is that the calculation and enforcement
> can
> > be generalized so we don't have to implement the same thing in all of the
> > projects for calculating when something is over quota.
> >
> > There is quite a bit of detail in the nova etherpad [1] about overbooking
> > and enforcement modes, which will need to be brought up as options in a
> spec
> > and then projects can sort out what makes the most sense (there might be
> > multiple enforcement models available).
> >
> > We still have to figure out the data migration plan to get limits data
> from
> > each project into Keystone, and what the API in Keystone is going to look
> > like, including what this looks like when you have multiple compute
> > endpoints in the service catalog, or regions, for example.
> >
> > Sean Dague was going to start working on the spec for this.
> >
> > Hierarchical quota support
> > --
> >
> > The notes on hierarchical quota support are already in [1] and [2]. We
> > agreed to not try and support hierarchical quotas in Nova until we were
> > using limits from Keystone so that we can avoid the complexity of both
> > systems (limits from Nova and limits from Keystone) in the same API
> code. We
> > also agreed to not block the counting quotas work that melwitt is doing
> > since that's already valuable on its own. It's also fair to say that
> > hierarchical quota support in Nova is a Queens item at the earliest
> given we
> > have to get limits stored in Keystone in Pike first.
> >
> > Dealing with the os-qouta-class-sets API
> > 
> >
> > I had a spec [3] proposing to cleanup some issues with the
> > os-quota-class-sets API in Nova. We agreed that rather than spend time
> > fixing the latent issues in that API, we'd just invest that time in
> storing
> > and getting limits from Keystone, after which we'll revisit deprecating
> the
> > quota classes API in Nova.
> >
> > [1] https://etherpad.openstack.org/p/nova-ptg-pike-quotas
> > [2] https://etherpad.openstack.org/p/ptg-hierarchical-quotas
> > [3] https://review.openstack.org/#/c/411035/
>
> I started a quota backlog spec before the PTG to collect my thoughts here:
> https://review.openstack.org/#/c/429678
>
> I have updated that post summit to include updated details on
> hierarchy (ln134) when using keystone to store the limits. This mostly
> came from some side discussions in the API-WG room with morgan and
> melwitt.
>
> It includes a small discussion on how the idea behind quota-class-sets
> could be turned into something usable, although that is now a problem
> for keystone's limits API.
>
> There were some side discussion around the move to placement meaning
> ironic quotas move from vCPU and RAM to custom resource classes. Its
> worth noting this largely supersedes the ideas we discussed here in
> flavor classes:
> http://specs.openstack.org/openstack/nova-specs/specs/
> backlog/approved/flavor-class.html
>
> I don't currently plan on taking that backlog spec further, as sdague
> is going to take moving this all forward.
>
> Thanks,
> John
>
> 

Re: [openstack-dev] [all] PBR 2.0.0 release *may* cause gate failures

2017-03-01 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2017-03-01 07:57:31 -0500:
> On 03/01/2017 12:26 AM, Tony Breeds wrote:
> > Hi All,
> > Earlier today the release team tagged PBR 2.0.0.  The reason for the 
> > major
> > version bump is because warnerrors has been removed in favor of
> > warning-is-error from sphinx >= 1.5.0.
> > 
> > It seems that several projects both inside and outside OpenStack have
> > capped pbr <2.0.0 so we can't actually use this release yet.  The 
> > requirements
> > team will work with all projects to remove the cap of pbr in those projects.
> > 
> > The good news is that projects using upper-constraints.txt are insulated 
> > from
> > this and shouldn't be affected[1].  However upper-constraints.txt isn't 
> > being used
> > by all projects and *those* projects will start seeing
> > 
> > ContextualVersionConflicts: (pbr 2.0.0 in gate logs.  It's recommended that
> > those projects add a local ban for pbr and associate it with:
> > https://bugs.launchpad.net/openstack-requirements/+bug/1668848
> > 
> > Then once the situation is resolved we can unwind and remove the temporary 
> > caps.
> > 
> > Yours Tony.
> > 
> > [1] There is at least 1 corner case where the coverage job installed 
> > directly
> > from a git URL and therefore that wasn't protected.
> 
> So, I feel like we hit a similar issue around Vancouver with a pbr bump.
> Can we stop capping pbr per rule now?

Tony identified caps in 5 OpenStack community projects (see [1]) as well
as powervm and python-jsonpath-rw-ext. Pull requests to those other
projects are linked from the bug [2].

The sqlalchemy-migrate 0.11.0 release should fix that library. The
release team will prioritize releases for the other dependencies today
as they come in.

Doug

[1] https://review.openstack.org/#/q/topic:bug/1668848
[2] https://bugs.launchpad.net/openstack-requirements/+bug/1668848

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Boot from Volume meeting?

2017-03-01 Thread Dmitry Tantsur

Thanks for writing this!

On 02/28/2017 03:42 PM, Julia Kreger wrote:

Greetings fellow ironic humanoids!

As many have known, I've been largely attempting to drive Boot from
Volume functionality in ironic over the past two years.  Largely, in a
slow incremental approach, which is in part due to how I perceived it
to best fit into the existing priorities when the discussions started.

During PTG there was quite the interest by multiple people to become
involved and attempt to further Boot from Volume forward this cycle. I
would like to move to having a weekly meeting with the focus of
integrating this functionality into ironic, much like we did with the
tighter neutron integration.


+1



I have two reasons for proposing a new meeting:

* Detailed technical status updates and planning/co-ordination would
need to take place. This would functionally be noise to a large number
of contributors in the ironic community.

* Many of these details would need to be worked out prior to the
first part of the existing ironic meeting for the weekly status
update. The update being a summary of the status of each sub team.

With that having been said, I'm curious if we could re-use the
ironic-neutron meeting time slot [0] for this effort.  That meeting
was cancelled just after the first of this year [1].  In its place I
think we should have a general purpose integration meeting that could
be used as a standing meeting, specifically reserved at this time for
Boot from Volume work, but which could also be used by any integration effort
that needs time to sync up in advance of the existing meeting.


I'm in favor of a generic meeting. I'm not sure it's worth taking the whole 
1-hour slot though. I think that the BFV one might only take half of the slot, 
with the second half taken by another integration subteam.


Speaking of which, I propose that the networking subteam start (continue?) having 
their own meeting as well. There is a lot of stuff going on that is hard to 
catch up with.


WDYT?



-Julia

[0] 
http://git.openstack.org/cgit/openstack-infra/irc-meetings/tree/meetings/ironic-neutron-integration-meeting.yaml
[1] http://lists.openstack.org/pipermail/openstack-dev/2017-January/109536.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][requirements] Eventlet version bump coming?

2017-03-01 Thread Roman Podoliaka
Hi Tony,

I'm ready to help with this!

The version we use now (0.19.0) has (at least) 2 known issues:

- recv_into() >8kb from an SSL wrapped socket hangs [1]
- adjusting of system clock backwards makes periodic tasks hang [2]

so it'd be great to allow for newer releases in upper-constraints.
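
For reference, the test bump itself would be a single line in
upper-constraints.txt, along the lines of the following (0.20.1 here is only
an illustrative target; the group doing the testing would pick the actual
version):

    eventlet===0.20.1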

Thanks,
Roman

[1] https://github.com/eventlet/eventlet/issues/315
[2] https://review.openstack.org/#/c/434327/

On Tue, Feb 14, 2017 at 6:57 AM, Tony Breeds  wrote:
> Hi All,
> So there is a new version of eventlet out and we refused to bump it late 
> in
> the ocata cycle but now that we're early in the pike cycle I think we're okay
> to do it.  The last time[1] we tried to bump eventlet it was pretty rocky and 
> we
> decided that we'd need a short term group of people focused on testing the new
> bump rather than go through the slightly painful:
>
>  1: Bump eventlet version
>  2: Find and file bugs
>  3: Revert
>  4: Wait for next release
>  goto 1
>
> process.  So can we get a few people together to map this out?  I'd like to 
> try it
> shortly after the PTG?
>
> From an implementation POV I'd like to bump the upper-constraint and let that
> sit for a while before we touch global-requirements.txt
>
> Yours Tony.
>
> [1] 
> http://lists.openstack.org/pipermail/openstack-dev/2016-February/thread.html#86745
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] PBR 2.0.0 release *may* cause gate failures

2017-03-01 Thread Hayes, Graham
On 01/03/2017 13:00, Sean Dague wrote:
> On 03/01/2017 12:26 AM, Tony Breeds wrote:
>> Hi All,
>> Earlier today the release team tagged PBR 2.0.0.  The reason for the 
>> major
>> version bump is because warnerrors has been removed in favor of
>> warning-is-error from sphinx >= 1.5.0.
>>
>> It seems that several projects both inside and outside OpenStack have
>> capped pbr <2.0.0 so we can't actually use this release yet.  The 
>> requirements
>> team will work with all projects to remove the cap of pbr in those projects.
>>
>> The good news is that projects using upper-constraints.txt are insulated from
>> this and shouldn't be affected[1].  However upper-constraints.txt isn't 
>> being used
>> by all projects and *those* projects will start seeing
>>
>> ContextualVersionConflicts: (pbr 2.0.0 in gate logs.  It's recommended that
>> those projects add a local ban for pbr and associate it with:
>> https://bugs.launchpad.net/openstack-requirements/+bug/1668848
>>
>> Then once the situation is resolved we can unwind and remove the temporary 
>> caps.
>>
>> Yours Tony.
>>
>> [1] There is at least 1 corner case where the coverage job installed directly
>> from a git URL and therefore that wasn't protected.
>
> So, I feel like we hit a similar issue around Vancouver with a pbr bump.
> Can we stop capping pbr per rule now?
>
> I also wonder if we can grant the release team +2 permissions on
> everything in OpenStack so that fixes like this can be gotten in quickly
> without having to go chase a bunch of teams.

That sounds like a good idea to me.

>   -Sean
>


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Forum Brainstorming - Boston 2017

2017-03-01 Thread Emilien Macchi
Hi everyone,

We need to start brainstorming topics we'd like to discuss with the
rest of the community during our first "Forum" at the OpenStack Summit
in Boston. The event should look a lot like the cross-project
workshops or the Ops day at the old Design Summit: open, timeboxed
discussions in a fishbowl setting facilitated by a moderator (no
speaker or formal presentations). We'll gather feedback from users and
operators on the Ocata release, and start gathering early requirements
and priorities for the Queens release cycle. We aim to ensure the
broadest coverage of topics, allowing all parts of our
community (upstream contributors, ops working groups, application
developers...) to get together to discuss (and get alignment on) key
areas within our community/projects.

Examples of the types of discussions and some sessions that might fit
within each one:

# Strategic, whole-of-community discussions:
To think about the big picture, including beyond just one release
cycle and new technologies. An example could be "Making OpenStack One
Platform for containers/VMs/Bare Metal", where the entire community
congregates to share opinions on how to make OpenStack achieve its
integration engine goal.

# Cross-project sessions:
In a similar vein to what has happened at past design summits, but
with increased emphasis on issues that are relevant to all areas of
the community. An example could be "Rolling Upgrades at Scale", where
the Large Deployments Team collaborates with Nova, Cinder and Keystone
to tackle issues that come up with rolling upgrades when there’s a
large number of machines.

# Project-specific sessions:
Where developers can ask users specific questions about their
experience, users can provide feedback from the last release and
cross-community collaboration on the priorities, and ‘blue sky’ ideas
for the next release. An example could be "Neutron Pain Points",
co-organized by neutron developers and users, where Neutron developers
bring some specific questions they want answered, Neutron users bring
feedback from the latest release and ideas about the future.


There are two stages to the brainstorming:

1. Starting today, set up an etherpad with your group/team, or use one
on the list and start discussing ideas you'd like to talk about at the
Forum. Then, through +1s on etherpads and mailing list discussion,
work out which ones are the most needed.
2. Then, in a couple of weeks, we will open up a more formal web-based
tool for submission of abstracts that came out of the brainstorming on
top. A committee with TC, UC and Foundation staff members will work on
the final selection and scheduling.

We expect working groups may make their own etherpads, however the
Technical Committee offers one for cross-project and strategic topics:
https://etherpad.openstack.org/p/BOS-TC-brainstorming

Feel free to use that, or make one for your group and add it to the list at:
https://wiki.openstack.org/wiki/Forum/Boston2017

Thanks,
Emilien and Thierry, on behalf of the Technical Committee

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][blazar][ceilometer][congress][intel-nfv-ci-tests][ironic][manila][networking-bgpvpn][networking-fortinet][networking-sfc][neutron][neutron-fwaas][neutron-lbaas][nova-lxd][octa

2017-03-01 Thread Andrea Frittoli
On Wed, Mar 1, 2017 at 2:21 AM Takashi Yamamoto 
wrote:

> hi,
>
> On Mon, Feb 27, 2017 at 8:34 PM, Andrea Frittoli
>  wrote:
> > Hello folks,
> >
> > TL;DR: if today you import manager.py from tempest.scenario please
> maintain
> > a copy of [0] in tree until further notice.
> >
> > Full message:
> > --
> >
> > One of the priorities for the QA team in the Pike cycle is to refactor
> > scenario tests to a sane code base [1].
> >
> > As they are now, changes to scenario tests are difficult to develop and
> > review, and failures in those tests are hard to debug, which is in many
> > directions far away from where we need to be.
> >
> > The issue we face is that, even though tempest.scenario.manager is not
> > advertised as a stable interface in Tempest, many projects use it today for
> > convenience in writing their own tests. We don't know about dependencies
> > outside of the OpenStack ecosystem, but we want to try to make this refactor
> > a smooth experience for our users in OpenStack, and avoid painful gate
> > breakages as much as possible.
> >
> > The process we're proposing is as follows:
> > - hold a copy of [0] in tree - in most cases you won't even have to
> change
> > your imports as a lot of projects use tempest/scenario in their code
> base.
> > You may decide to include the bare minimum you need from that module
> instead
> > of all of it. It's a bit more work to make the patch, but less un-used
> code
> > lying around afterwards.
>
> i submitted patches for a few repos.
>
> https://review.openstack.org/#/q/status:open++branch:master+topic:tempest-manager
> i'd suggest to use the same gerrit topic for relevant patches.
>
> Thank you for looking into this!
Having a common gerrit topic is a nice idea: "tempest-manager"

I'm also tracking patches in this etherpad:
https://etherpad.openstack.org/p/tempest-manager-plugins

andrea

> - the QA team will refactor scenario tests, and make more interfaces
> stable
> > (test.py, credential providers). We won't advertise every single change
> in
> > this process, only when we start and once we're done.
> > - you may decide to discard your local copy of manager.py and consume
> > Tempest stable interfaces directly. We will help with any question you
> may
> > have on the process and on Tempest interfaces.
> >
> > Repositories affected by the refactor are (based on [2]):
> >
> >
> blazar,ceilometer,congress,intel-nfv-ci-tests,ironic,manila,networking-bgpvpn,networking-fortinet,networking-sfc,neutron-fwaas,neutron-lbaas,nova-lxd,octavia,sahara-tests,tap-as-a-service,tempest-horizon,vmware-nsx,watcher
> >
> > If we don't hear from a team at all in the next two weeks, we will assume
> > that the corresponding Tempest plugin / bunch of tests is not in use
> > anymore, and ignore it. If you use tempest.scenario.manager.py today and
> > your repo is not on the list, please let us know!
> >
> > I'm happy to propose an initial patch for any team that may require it -
> > just ping me on IRC (andreaf).
> > I won't have the bandwidth myself to babysit each patch through review
> and
> > gate though.
> >
> > Thank you for your cooperation and patience!
> >
> > Andrea
> >
> > [0]
> >
> http://git.openstack.org/cgit/openstack/tempest/tree/tempest/scenario/manager.py
> > [1] https://etherpad.openstack.org/p/pike-qa-priorities
> > [2]
> >
> https://github.com/andreafrittoli/tempest_stable_interfaces/blob/master/data/get_deps.sh
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] PTG Team Photos!

2017-03-01 Thread Telles Nobrega
Hi Kendall,

can you send me the Sahara team photo?

Thanks

On Tue, Feb 21, 2017 at 11:43 AM Kendall Nelson 
wrote:

> To be a little more specific about the location. It's just outside the
> Grand ballroom A. Close to the top of the staircase.
>
> - Kendall Nelson (diablo_rojo)
>
>
> On Wed, Feb 15, 2017, 1:24 PM Kendall Nelson 
> wrote:
>
> Hello All!
>
> We are excited to see you next week at the PTG and wanted to share
> that we will be taking team photos! Provided is a google sheet signup for
> the available time slots [1]. We will be providing time on Tuesday Morning/
> Afternoon and Thursday Morning/Afternoon to come as a team to get your
> photo taken. Slots are only ten minutes so its *important that everyone
> be on time*! If you are unable to view/edit the spreadsheet let me know
> and I will try to get you access or can fill in a slot for you.
>
> The location where we are taking the photos is on the 3rd floor, in the prefunction
> space in front of the Grand Ballroom (across the hall from Fandangles).
>
> See you next week!
>
> Thanks,
>
> -Kendall Nelson (diablo_rojo)
>
> [1]
> https://docs.google.com/spreadsheets/d/1bgwMDsUm37JgpksUJszoDWcoBMHciufXcGV3OYe5A-4/edit?usp=sharing
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA]Refactoring Scenarios manager.py

2017-03-01 Thread Sean Dague
On 03/01/2017 08:03 AM, Andrea Frittoli wrote:
> 
> 
> On Wed, Mar 1, 2017 at 12:54 PM Sean Dague wrote:
> 
> On 03/01/2017 07:35 AM, Jordan Pittier wrote:
> >
> >
> > On Wed, Mar 1, 2017 at 3:57 AM, Ghanshyam Mann wrote:
> >
> > Doing gradual refactoring and fixing plugins time to time
> needs lot
> > of wait and sync.
> >
> > That needs:
> > 1. Plugins to switch from current method usage. Plugins to
> have some
> > other function or same copy paste code what current scenario base
> > class has.
> > 2. Tempest patch to wait for plugin fix.
> > 3. Plugins to switch back to stable interface once Tempest
> going to
> > provide those.
> >
> > This needs lot of sync between tempest and plugins and we have to
> > wait for tempest refactoring patch for long.
> >
> > To make it more efficient, how about this:
> > 1. Keep the scenario manager copy in tempest as it is, for plugins
> > usage only.
> >
> > Given that the refactoring effort "started" a year ago, at the current
> > speed it may take 2 years to complete. In the mean time we will have a
> > massive code duplication and a maintenance burden.
> >
> > 2. Start refactoring the scenario framework by adding more and
> more
> > helper function under /common or lib.
> >
> > Starting a "framework" (each time I see that word, I have a bad
> feeling)
> > from scratch without users and usage is very very difficult. How do we
> > know what we need in that framework and what will be actually used in
> > tests ?
> 
> I'm with Jordan, honestly, common functions and inheritance aren't going
> to help anyway. My feeling is that the only way to sanely take things
> forward is to start pivoting towards fixtures. And basically error if
> inheritance is used in tests. The functions on base classes is what is
> causing all the pain of rewind, because it's not a contract, but it's
> also magically there, so people try to use it.
> 
> This would mean that every test would have more setUp with a bunch of
> explicit fixtures, but that's fine, it's a little more time thinking
> through the writing of the tests, and a lot easier to read them to know
> what is going on, and to delete them later. Basically get rid of all
> class/superclass magic.
> 
> > The effort was called scenario refactoring and I think that's what we
> > should do. We should not do "start from scratch scenarios" or
> "copy all
> > the code and see what happens".
> >
> > There's no problem with plugins. We committed to have a stable
> interface
> > which is documented and agreed upon. It's clearly written
> > here
> 
> https://docs.openstack.org/developer/tempest/plugin.html#stable-tempest-apis-plugins-may-use
> > The rest is private, for Tempest internal use. If Tempest cores
> disagree
> > with that, then we should first of all put a spec and rewrite that
> > document.
> 
> It does seem like there is a tough love moment coming about the fact
> that APIs were consumed that were specifically out of bounds. Either the
> team commits to them inbounds, and there is the contract, or moves
> forward under the existing contract.
> 
> Though, I expect the issue is that because all the Tempest tests work in
> this super class way, this is the only stuff people understood and copy
> pasted. So it does feel like there has to be at least a couple of
> instances in the Tempest tree of how the team *wants* other projects to
> write tests that they can point people to.
> 
> 
> +1 on this. Which is why we need to refactor our scenario test to a sane
> code
> base that people can use as examples to copy paste.
> 
> The basic credential provider services that come with test.py are something
> that is useful to have, I think - and we are working on making them a
> contract people can rely on.
> 
> We had a discussion at the PTG about how to deal with those, and the
> conclusion
> was that we are going to move credentials providers to tempest.lib and keep 
> test.py more or less as it is and declare it a stable interface for plugins.
> 
> Alternatives like dropping class fixtures for good, and move everything
> to be test
> fixtures, involve too much code churn / refactor for an advantage which 
> is too little - and we wouldn't have the bandwidth to do that anyways.

So, as someone who is only an infrequent Tempest contributor now, I
personally have a pretty hard time with the magic credentials, because
it's super non-obvious to me what a new test would give me off a blank
page, or how I would 

Re: [openstack-dev] [QA]Refactoring Scenarios manager.py

2017-03-01 Thread Luigi Toscano
On Wednesday, 1 March 2017 13:35:26 CET Jordan Pittier wrote:
> On Wed, Mar 1, 2017 at 3:57 AM, Ghanshyam Mann wrote:
> > 2. Start refactoring the scenario framework by adding more and more helper
> > function under /common or lib.
> 
> Starting a "framework" (each time I see that word, I have a bad feeling)
> from scratch without users and usage is very very difficult. How do we know
> what we need in that framework and what will be actually used in tests ?
> 
> The effort was called scenario refactoring and I think that's what we
> should do. We should not do "start from scratch scenarios" or "copy all the
> code and see what happens".
> 
> There's no problem with plugins. We committed to have a stable interface
> which is documented and agreed upon. It's clearly written here
> https://docs.openstack.org/developer/tempest/plugin.html#stable-tempest-apis
> -plugins-may-use The rest is private, for Tempest internal use. If Tempest
> cores disagree with that, then we should first of all put a spec and
> rewrite that document.

All correct, except that some needed functions are only available as unstable 
interfaces. So between not implementing a plugin and waiting until all the 
interfaces are available, or implementing something (so you know what people 
need, as you pointed out), the choice was to implement something using some 
private interfaces. The transition can be handled gracefully for everyone, and 
going into a full rage against plugin maintainers is not really community-friendly. 
Provide a transition path and people will follow (and if not, they have been 
warned). The transition path was missing so far.

I approve Andrea's proposal.

-- 
Luigi


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA]Refactoring Scenarios manager.py

2017-03-01 Thread Andrea Frittoli
On Wed, Mar 1, 2017 at 12:54 PM Sean Dague  wrote:

> On 03/01/2017 07:35 AM, Jordan Pittier wrote:
> >
> >
> > On Wed, Mar 1, 2017 at 3:57 AM, Ghanshyam Mann wrote:
> >
> > Doing gradual refactoring and fixing plugins time to time needs lot
> > of wait and sync.
> >
> > That needs:
> > 1. Plugins to switch from current method usage. Plugins to have some
> > other function or same copy paste code what current scenario base
> > class has.
> > 2. Tempest patch to wait for plugin fix.
> > 3. Plugins to switch back to stable interface once Tempest going to
> > provide those.
> >
> > This needs lot of sync between tempest and plugins and we have to
> > wait for tempest refactoring patch for long.
> >
> > To make it more efficient, how about this:
> > 1. Keep the scenario manager copy in tempest as it is, for plugins
> > usage only.
> >
> > Given that the refactoring effort "started" a year ago, at the current
> > speed it may take 2 years to complete. In the mean time we will have a
> > massive code duplication and a maintenance burden.
> >
> > 2. Start refactoring the scenario framework by adding more and more
> > helper function under /common or lib.
> >
> > Starting a "framework" (each time I see that word, I have a bad feeling)
> > from scratch without users and usage is very very difficult. How do we
> > know what we need in that framework and what will be actually used in
> > tests ?
>
> I'm with Jordan, honestly, common functions and inheritance aren't going
> to help anyway. My feeling is that the only way to sanely take things
> forward is to start pivoting towards fixtures. And basically error if
> inheritance is used in tests. The functions on base classes is what is
> causing all the pain of rewind, because it's not a contract, but it's
> also magically there, so people try to use it.
>
> This would mean that every test would have more setUp with a bunch of
> explicit fixtures, but that's fine, it's a little more time thinking
> through the writing of the tests, and a lot easier to read them to know
> what is going on, and to delete them later. Basically get rid of all
> class/superclass magic.
>
> > The effort was called scenario refactoring and I think that's what we
> > should do. We should not do "start from scratch scenarios" or "copy all
> > the code and see what happens".
> >
> > There's no problem with plugins. We committed to have a stable interface
> > which is documented and agreed upon. It's clearly written
> > here
> https://docs.openstack.org/developer/tempest/plugin.html#stable-tempest-apis-plugins-may-use
> > The rest is private, for Tempest internal use. If Tempest cores disagree
> > with that, then we should first of all put a spec and rewrite that
> > document.
>
> It does seem like there is a tough love moment coming about the fact
> that APIs were consumed that were specifically out of bounds. Either the
> team commits to them inbounds, and there is the contract, or moves
> forward under the existing contract.
>
> Though, I expect the issue is that because all the Tempest tests work in
> this super class way, this is the only stuff people understood and copy
> pasted. So it does feel like there has to be at least a couple of
> instances in the Tempest tree of how the team *wants* other projects to
> write tests that they can point people to.
>

+1 on this. Which is why we need to refactor our scenario tests to a sane
code base that people can use as examples to copy-paste.

The basic credential provider services that come with test.py are something
that is useful to have, I think - and we are working on making them a contract
people can rely on.

We had a discussion at the PTG about how to deal with those, and the
conclusion
was that we are going to move credentials providers to tempest.lib and keep
test.py more or less as it is and declare it a stable interface for plugins.

Alternatives like dropping class fixtures for good and moving everything to
be test fixtures involve too much code churn / refactoring for too little
advantage - and we wouldn't have the bandwidth to do that anyway.


>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] PBR 2.0.0 release *may* cause gate failures

2017-03-01 Thread Sean Dague
On 03/01/2017 12:26 AM, Tony Breeds wrote:
> Hi All,
> Earlier today the release team tagged PBR 2.0.0.  The reason for the major
> version bump is because warnerrors has been removed in favor of
> warning-is-error from sphinx >= 1.5.0.
> 
> It seems that several projects both inside and outside OpenStack have
> capped pbr <2.0.0 so we can't actually use this release yet.  The requirements
> team will work with all projects to remove the cap of pbr in those projects.
> 
> The good news is that projects using upper-constraints.txt are insulated from
> this and shouldn't be affected[1].  However upper-constraints.txt isn't being 
> used
> by all projects and *those* projects will start seeing
> 
> ContextualVersionConflicts: (pbr 2.0.0 in gate logs.  It's recommended that
> those projects add a local ban for pbr and associate it with:
> https://bugs.launchpad.net/openstack-requirements/+bug/1668848
> 
> Then once the situation is resolved we can unwind and remove the temporary 
> caps.
> 
> Yours Tony.
> 
> [1] There is at least 1 corner case where the coverage job installed directly
> from a git URL and therefore that wasn't protected.

So, I feel like we hit a similar issue around Vancouver with a pbr bump.
Can we stop capping pbr per rule now?

I also wonder if we can grant the release team +2 permissions on
everything in OpenStack so that fixes like this can be gotten in quickly
without having to go chase a bunch of teams.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA]Refactoring Scenarios manager.py

2017-03-01 Thread Andrea Frittoli
On Wed, Mar 1, 2017 at 12:39 PM Jordan Pittier 
wrote:

> On Wed, Mar 1, 2017 at 3:57 AM, Ghanshyam Mann 
> wrote:
>
> Doing gradual refactoring and fixing plugins time to time needs lot of
> wait and sync.
>
> That needs:
> 1. Plugins to switch from current method usage. Plugins to have some other
> function or same copy paste code what current scenario base class has.
> 2. Tempest patch to wait for plugin fix.
> 3. Plugins to switch back to stable interface once Tempest going to
> provide those.
>
> This needs lot of sync between tempest and plugins and we have to wait for
> tempest refactoring patch for long.
>
> To make it more efficient, how about this:
> 1. Keep the scenario manager copy in tempest as it is, for plugins usage
> only.
>
> Given that the refactoring effort "started" a year ago, at the current
> speed it may take 2 years to complete. In the mean time we will have a
> massive code duplication and a maintenance burden.
>
> 2. Start refactoring the scenario framework by adding more and more helper
> function under /common or lib.
>
> Starting a "framework" (each time I see that word, I have a bad feeling)
> from scratch without users and usage is very very difficult. How do we know
> what we need in that framework and what will be actually used in tests ?
>

Yeah +1 on that - we need to refactor scenario to fix debuggability,
maintainability and ability to get contributions.
Moving helpers to lib is an option, but it's a lower priority in my view.


>
> The effort was called scenario refactoring and I think that's what we
> should do. We should not do "start from scratch scenarios" or "copy all the
> code and see what happens".
>
> There's no problem with plugins. We committed to have a stable interface
> which is documented and agreed upon. It's clearly written here
> https://docs.openstack.org/developer/tempest/plugin.html#stable-tempest-apis-plugins-may-use
> The rest is private, for Tempest internal use. If Tempest cores disagree
> with that, then we should first of all put a spec and rewrite that
> document.
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA]Refactoring Scenarios manager.py

2017-03-01 Thread Sean Dague
On 03/01/2017 07:35 AM, Jordan Pittier wrote:
> 
> 
> On Wed, Mar 1, 2017 at 3:57 AM, Ghanshyam Mann wrote:
> 
> Doing gradual refactoring and fixing plugins time to time needs lot
> of wait and sync.
> 
> That needs:
> 1. Plugins to switch from current method usage. Plugins to have some
> other function or same copy paste code what current scenario base
> class has.
> 2. Tempest patch to wait for plugin fix.
> 3. Plugins to switch back to stable interface once Tempest going to
> provide those. 
> 
> This needs lot of sync between tempest and plugins and we have to
> wait for tempest refactoring patch for long. 
> 
> To make it more efficient, how about this:
> 1. Keep the scenario manager copy in tempest as it is, for plugins
> usage only.
> 
> Given that the refactoring effort "started" a year ago, at the current
> speed it may take 2 years to complete. In the mean time we will have a
> massive code duplication and a maintenance burden. 
> 
> 2. Start refactoring the scenario framework by adding more and more
> helper function under /common or lib.
> 
> Starting a "framework" (each time I see that word, I have a bad feeling)
> from scratch without users and usage is very very difficult. How do we
> know what we need in that framework and what will be actually used in
> tests ?  

I'm with Jordan, honestly, common functions and inheritance aren't going
to help anyway. My feeling is that the only way to sanely take things
forward is to start pivoting towards fixtures. And basically error if
inheritance is used in tests. The functions on base classes is what is
causing all the pain of rewind, because it's not a contract, but it's
also magically there, so people try to use it.

This would mean that every test would have more setUp with a bunch of
explicit fixtures, but that's fine, it's a little more time thinking
through the writing of the tests, and a lot easier to read them to know
what is going on, and to delete them later. Basically get rid of all
class/superclass magic.
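
To make that concrete, a fixture-based test could look roughly like the sketch
below. The fixture, client and helper names are invented for illustration and
are not existing Tempest interfaces:

    import fixtures
    import testtools


    class ServerFixture(fixtures.Fixture):
        """Illustrative only: boots a server and registers its cleanup."""

        def __init__(self, client):
            super(ServerFixture, self).__init__()
            self.client = client

        def _setUp(self):
            # Create the resource and make sure it is cleaned up after the test.
            self.server = self.client.create_server(name='smoke')
            self.addCleanup(self.client.delete_server, self.server['id'])


    class TestServerBasicOps(testtools.TestCase):

        def setUp(self):
            super(TestServerBasicOps, self).setUp()
            client = build_compute_client()  # hypothetical helper, not a real API
            # Dependencies are explicit here instead of hidden in a superclass.
            self.server = self.useFixture(ServerFixture(client)).server

        def test_server_is_active(self):
            self.assertEqual('ACTIVE', self.server['status'])

The point is that everything a test needs is spelled out in its own setUp, so
deleting or changing a test never requires untangling a base class.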

> The effort was called scenario refactoring and I think that's what we
> should do. We should not do "start from scratch scenarios" or "copy all
> the code and see what happens".
> 
> There's no problem with plugins. We committed to have a stable interface
> which is documented and agreed upon. It's clearly written
> here 
> https://docs.openstack.org/developer/tempest/plugin.html#stable-tempest-apis-plugins-may-use
> The rest is private, for Tempest internal use. If Tempest cores disagree
> with that, then we should first of all put a spec and rewrite that
> document. 

It does seem like there is a tough love moment coming about the fact
that APIs were consumed that were specifically out of bounds. Either the
team commits to them inbounds, and there is the contract, or moves
forward under the existing contract.

Though, I expect the issue is that because all the Tempest tests work in
this super class way, this is the only stuff people understood and copy
pasted. So it does feel like there has to be at least a couple of
instances in the Tempest tree of how the team *wants* other projects to
write tests that they can point people to.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA]Refactoring Scenarios manager.py

2017-03-01 Thread Jordan Pittier
On Wed, Mar 1, 2017 at 4:18 AM,  wrote:

> I think it a good solution, I already put +1 :)
>
>
> And, as to the scenario testcases, shall we:
>
> 1) remove test steps/checks already coverd in API test
>
Duplicated test steps/checks are not good and should be removed. This is not
related to the scenario refactoring effort, so if you find duplicated tests
or test steps, we should remove them.

> 2) remove sequence test cases (such as test_server_sequence_suspend_resume),
> otherwise the scenarios will get fatter and fatter
>
There's no definitive answer to that. We should just remember what a
scenario should be: test several openstack components, "real" world use
cases, "real" integration testing. Those should be our guideline for
scenarios. We should not buy into "I put it into the scenarios directory
because the helper methods were here and convenient" or "because I saw an
already existing scenario that look kind of the same".
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA]Refactoring Scenarios manager.py

2017-03-01 Thread Jordan Pittier
On Wed, Mar 1, 2017 at 3:57 AM, Ghanshyam Mann 
wrote:

> Doing gradual refactoring and fixing plugins time to time needs lot of
> wait and sync.
>
> That needs:
> 1. Plugins to switch from current method usage. Plugins to have some other
> function or same copy paste code what current scenario base class has.
> 2. Tempest patch to wait for plugin fix.
> 3. Plugins to switch back to stable interface once Tempest going to
> provide those.
>
> This needs lot of sync between tempest and plugins and we have to wait for
> tempest refactoring patch for long.
>
> To make it more efficient, how about this:
> 1. Keep the scenario manager copy in tempest as it is, for plugins usage
> only.
>
Given that the refactoring effort "started" a year ago, at the current
speed it may take 2 years to complete. In the mean time we will have a
massive code duplication and a maintenance burden.

> 2. Start refactoring the scenario framework by adding more and more helper
> function under /common or lib.
>
Starting a "framework" (each time I see that word, I have a bad feeling)
from scratch without users and usage is very very difficult. How do we know
what we need in that framework and what will be actually used in tests ?

The effort was called scenario refactoring and I think that's what we
should do. We should not do "start from scratch scenarios" or "copy all the
code and see what happens".

There's no problem with plugins. We committed to have a stable interface
which is documented and agreed upon. It's clearly written here
https://docs.openstack.org/developer/tempest/plugin.html#stable-tempest-apis-plugins-may-use
The rest is private, for Tempest internal use. If Tempest cores disagree
with that, then we should first of all put a spec and rewrite that
document.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Single core review for patch approval

2017-03-01 Thread Beth Elwell
Yep I totally agree that adding cores for the sake of adding cores doesn’t make 
sense. Just trying to fish for ideas to prevent having to go to a single +2 to 
merge as that does make me nervous. But I guess for the sake of moving code 
through it needs to be done at the moment.


> On 1 Mar 2017, at 11:53, Rob Cresswell  wrote:
> 
> Adding inexperienced cores doesn't really alleviate that issue though, and I 
> don't currently feel that there is anyone with the right balance of 
> experience and activity to be added to the core team.
> 
> Richard and I monitor review activity very closely though, and we are 
> actively looking to grow the team. We just need more activity from reviewers 
> so that they can learn, and in turn we can teach them. I don't expect people 
> to know everything before being core - I certainly didn't - but I don't think 
> the bar is being met just yet.
> 
> Rob
> 
> On 1 March 2017 at 10:36, Beth Elwell wrote:
> Has there been any consideration of growing the core team to help with review 
> bandwidth? I ask only because that resulting responsibility to the community 
> can drive additional review activity. Just worried that only 1x +2 could 
> cause issues with code being  merged on a project this large that could 
> potentially break things or clash with other opinions or standards of how it 
> should be written/implemented? It concerns me that it makes it easier to 
> overlook larger things in more substantial patches. I guess as you say, there 
> needs to be accountability re not always going for the single +2 when the 
> patch is of that sort of size and you need a second opinion?
> 
> Beth
> 
> > On 28 Feb 2017, at 10:09, Rob Cresswell wrote:
> >
> > Hey everyone,
> >
> > Horizon is moving to requiring only a single core review for code approval. 
> > Note that cores are not obliged to approve on a single +2; if a core would 
> > like a second opinion for patches that are complex or high risk, that is 
> > also fine.
> >
> > We still require at least one of the core reviewers or contributor on a 
> > patch to be from separate companies however. For example, if a patch is 
> > authored by someone from Cisco, then I could not (as a Cisco employee) +2+w 
> > the patch by myself; it would require at least another core +2.
> >
> > This should help us move smaller patches along quicker.
> >
> > Rob
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> > 
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> > 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe 
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Single core review for patch approval

2017-03-01 Thread Rob Cresswell
Adding inexperienced cores doesn't really alleviate that issue though, and I 
don't currently feel that there is anyone with the right balance of experience 
and activity to be added to the core team.

Richard and I monitor review activity very closely though, and we are actively 
looking to grow the team. We just need more activity from reviewers so that 
they can learn, and in turn we can teach them. I don't expect people to know 
everything before being core - I certainly didn't - but I don't think the bar 
is being met just yet.

Rob

On 1 March 2017 at 10:36, Beth Elwell wrote:
Has there been any consideration of growing the core team to help with review 
bandwidth? I ask only because that resulting responsibility to the community 
can drive additional review activity. Just worried that only 1x +2 could cause 
issues with code being merged on a project this large that could potentially 
break things or clash with other opinions or standards of how it should be 
written/implemented? It concerns me that it makes it easier to overlook larger 
things in more substantial patches. I guess as you say, there needs to be 
accountability re not always going for the single +2 when the patch is of that 
sort of size and you need a second opinion?

Beth

> On 28 Feb 2017, at 10:09, Rob Cresswell wrote:
>
> Hey everyone,
>
> Horizon is moving to requiring only a single core review for code approval. 
> Note that cores are not obliged to approve on a single +2; if a core would 
> like a second opinion for patches that are complex or high risk, that is also 
> fine.
>
> We still require at least one of the core reviewers or contributor on a patch 
> to be from separate companies however. For example, if a patch is authored by 
> someone from Cisco, then I could not (as a Cisco employee) +2+w the patch by 
> myself; it would require at least another core +2.
>
> This should help us move smaller patches along quicker.
>
> Rob
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] gate jobs using or enabling *_ssh drivers

2017-03-01 Thread Pavlo Shchelokovskyy
Hi ironicers,

at the PTG we decided to remove the unsupported SSH drivers from ironic
code tree during Pike release. Below is an update on which CI jobs for
projects under Baremetal program governance do still use them and thus are
blocking us from removing these drivers.

tl;dr: 2 tempest-dsvm jobs + 2 grenade-dsvm jobs + 3 in bifrost + most just
enable some *_ssh driver

First of all, most of the gates running ironic service as part of DevStack
are at least enabling one of {agent,pxe}_ssh drivers due to current
defaults in openstack-infra/devstack-gate [0-1]. As our job builders in
project-config mostly do not set enabled drivers themselves (only
experimental standalone job does it), this must be fixed last (after no job
running thru devstack-gate is using *_ssh drivers is left) before removing
*_ssh drivers from ironic code.

Following is the per-project list of jobs that still deploy nodes via *_ssh
drivers and thus also rely on them being in enabled_drivers

ironic:
- gate-tempest-dsvm-ironic-multitenant-network-ubuntu-xenial
- gate-grenade-dsvm-ironic-ubuntu-xenial

ironic-inspector:
- gate-grenade-dsvm-ironic-inspector-ubuntu-xenial

python-ironicclient:
- gate-tempest-dsvm-python-ironicclient-src-ubuntu-xenial

I have assigned myself to the rfe bug [2] and will start putting up test
patches to ironic and devstack-gate to test the deploy driver change, and
then propose changes to devstack-gate/project-config when sure nothing gets
broken.

The whole switch might be a bit complicated due to project-config and
devstack-gate being branch-less, and we still have the mitaka branch around,
which, while seeming to support testing with ipmitool+virtualbmc, has no
solid record of running such tests. However, the Mitaka release reaches EOL in
just one month [3], so even if there are problems, we could merge the
relevant changes after that date.

Additionally, while not depending on devstack/devstack-gate, bifrost
defaults to *_ssh drivers when in testing mode, and its functional jobs are
thus using *_ssh drivers. The series of patches to switch away from them is
on review [4] and the last one already passes relevant CI jobs.

[0] https://github.com/openstack-infra/devstack-gate/blob/4eade8fab85dca475b0dd8d54d98649e6cdfcd57/devstack-vm-gate.sh#L416-L422
[1] https://github.com/openstack-infra/devstack-gate/blob/24a6ed073b547fbbd484157e544b4bc10dda8880/devstack-vm-gate-wrap.sh#L235
[2] https://bugs.launchpad.net/ironic/+bug/1570301
[3] https://releases.openstack.org/#release-series
[4] https://review.openstack.org/#/q/status:open+project:openstack/bifrost+branch:master+topic:bug/1659876

Cheers,

Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Ansible OpenStack dnsmasq errors in syslog

2017-03-01 Thread Andy McCrae
Hi Lawrence,

Thanks for raising this.
The 10.255.255.x range is used by lxcbr0 on the host, to assign the eth0
address to the containers.
In essence this won't cause any issues - since we use the eth1 address
(which is statically configured) and we add the host entries to /etc/hosts.

That said, I see the logs you are mentioning, and whilst this won't cause
any issues I think we should look to resolve those logs if possible - I'll
create a bug for that.

https://bugs.launchpad.net/openstack-ansible/+bug/1668949

We do bug-triage once a week (on a Tuesday at 16:00 UTC) - so we'll
determine priority and if it's something worth fixing.

If you run into any more issues or need help, feel free to jump into
#openstack-ansible on Freenode; the channel is usually quite active, and
there are normally other deployers/operators or developers working with or
on OpenStack-Ansible who can help.

Thanks,
Andy


On 1 March 2017 at 10:51, Lawrence J. Albinson 
wrote:

> In the process of diagnosing an Ansible OpenStack multi-node build
> problem, I came across the following dnsmasq-dhcp errors in syslog.
>
> —— snip ——
> Feb 28 20:25:06 xh3 dnsmasq-dhcp[1956]: not giving name
> aio1-designate-container-f182826f to the DHCP lease of 10.255.255.226
> because the name exists in /etc/hosts with address 172.29.238.255
> Feb 28 20:25:06 xh3 dnsmasq-dhcp[1956]: not giving name
> aio1-swift-proxy-container-c298481d to the DHCP lease of 10.255.255.126
> because the name exists in /etc/hosts with address 172.29.237.136
> Feb 28 20:25:06 xh3 dnsmasq-dhcp[1956]: not giving name
> aio1-gnocchi-container-48fe5bde to the DHCP lease of 10.255.255.136
> because the name exists in /etc/hosts with address 172.29.239.156
> —— snip ——
>
> To simplify the situation, I built the All-in-One environment on a clean
> KVM-based Xenial 16.04.2 VM with a single non-DHCP NIC. The errors occur
> there too. This happens with both Ansible-OpenStack 15.0.0.0rc1 and 14.0.8.
>
> Are these errors a sign of misconfiguration on my part? Or are they a sign
> of a real problem or just noise?
>
> OpenStack itself would appear to be working.
>
> Kindest regards, Lawrence
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][keystone] Pike PTG recap - quotas

2017-03-01 Thread John Garbutt
On 27 February 2017 at 21:18, Matt Riedemann  wrote:
> We talked about a few things related to quotas at the PTG, some in
> cross-project sessions earlier in the week and then some on Wednesday
> morning in the Nova room. The full etherpad is here [1].
>
> Counting quotas
> ---
>
> Melanie hit a problem with the counting quotas work in Ocata with respect to
> how to handle quotas when the cell that an instance is running in is down.
> The proposed solution is to track project/user ID information in the
> "allocations" table in the Placement service so that we can get allocation
> information for quota usage from Placement rather than the cell. That should
> be a relatively simple change to move this forward and hopefully get the
> counting quotas patches merged by p-1 so we have plenty of burn-in time for
> the new quotas code.
>
> Centralizing limits in Keystone
> ---
>
> This actually came up mostly during the hierarchical quotas discussion on
> Tuesday which was a cross-project session. The etherpad for that is here
> [2]. The idea here is that Keystone already knows about the project
> hierarchy and can be a central location for resource limits so that the
> various projects, like nova and cinder, don't have to have a similar data
> model and API for limits, we can just make that common in Keystone. The
> other projects would still track resource usage and calculate when a request
> is over the limit, but the hope is that the calculation and enforcement can
> be generalized so we don't have to implement the same thing in all of the
> projects for calculating when something is over quota.
>
> There is quite a bit of detail in the nova etherpad [1] about overbooking
> and enforcement modes, which will need to be brought up as options in a spec
> and then projects can sort out what makes the most sense (there might be
> multiple enforcement models available).
>
> We still have to figure out the data migration plan to get limits data from
> each project into Keystone, and what the API in Keystone is going to look
> like, including what this looks like when you have multiple compute
> endpoints in the service catalog, or regions, for example.
>
> Sean Dague was going to start working on the spec for this.
>
> Hierarchical quota support
> --
>
> The notes on hierarchical quota support are already in [1] and [2]. We
> agreed to not try and support hierarchical quotas in Nova until we were
> using limits from Keystone so that we can avoid the complexity of both
> systems (limits from Nova and limits from Keystone) in the same API code. We
> also agreed to not block the counting quotas work that melwitt is doing
> since that's already valuable on its own. It's also fair to say that
> hierarchical quota support in Nova is a Queens item at the earliest given we
> have to get limits stored in Keystone in Pike first.
>
> Dealing with the os-qouta-class-sets API
> 
>
> I had a spec [3] proposing to cleanup some issues with the
> os-quota-class-sets API in Nova. We agreed that rather than spend time
> fixing the latent issues in that API, we'd just invest that time in storing
> and getting limits from Keystone, after which we'll revisit deprecating the
> quota classes API in Nova.
>
> [1] https://etherpad.openstack.org/p/nova-ptg-pike-quotas
> [2] https://etherpad.openstack.org/p/ptg-hierarchical-quotas
> [3] https://review.openstack.org/#/c/411035/

I started a quota backlog spec before the PTG to collect my thoughts here:
https://review.openstack.org/#/c/429678

I have updated that post summit to include updated details on
hierarchy (ln134) when using keystone to store the limits. This mostly
came from some side discussions in the API-WG room with morgan and
melwitt.

It includes a small discussion on how the idea behind quota-class-sets
could be turned into something usable, although that is now a problem
for keystone's limits API.

There were some side discussions around the move to placement meaning
ironic quotas move from vCPU and RAM to custom resource classes. It's
worth noting this largely supersedes the ideas we discussed here in
flavor classes:
http://specs.openstack.org/openstack/nova-specs/specs/backlog/approved/flavor-class.html

I don't currently plan on taking that backlog spec further, as sdague
is going to take moving this all forward.

Thanks,
John

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Pike PTG recap - notifications

2017-03-01 Thread Balazs Gibizer

Hi,

We discussed couple of notification related items last week on the PTG 
[1].



Searchlight related notification enhancements
-
We decided that the Extending versioned notifications for searchlight 
integration blueprint [2] has high priority for Pike to help the 
Nova-Searchlight integration which is needed for the cellv2 work. See 
more details in the Matt's recap on the cells discussion [3].
Code is already in good shape for everything except the BDM part of the 
bp.


Short circuit notification generation
-
We agreed that we want to avoid generating the notification payloads if 
nova or oslo messaging is configured so that the actual notification 
will not be sent. There will be no new configuration parameter to turn 
off the notification payload generation and the existing 
notification_format and oslo_messaging_notifications.driver 
configuration parameters will be used in the implementation. A WIP 
patch is already proposed [4].
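
As a rough illustration of the idea (the option and helper names below are 
placeholders, not the actual nova/oslo code), the emit path would simply bail 
out before building the payload:

    def versioned_notifications_enabled(conf):
        # Nothing can be emitted without a configured notification driver.
        if not conf.oslo_messaging_notifications.driver:
            return False
        # Respect the existing notification_format switch.
        return conf.notification_format in ('both', 'versioned')

    def notify_about_instance_update(conf, notifier, context, instance):
        if not versioned_notifications_enabled(conf):
            return  # short circuit: skip building the payload entirely
        payload = build_instance_payload(instance)  # placeholder helper
        notifier.info(context, 'instance.update', payload)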


Versioned notification transformation work
--
We agreed that it would help cores to review the patches if the subteam 
could keep a short list (about 5 items)  of patches that the cores 
should look at. So I will make sure that such a list will be up to date 
on the priority etherpad [5].
Also we agreed that I will send out a short weekly mail about the items 
in focus, similarly to the placement status mails. Also we hope that 
the Nova-Searchlight integration will provide some focus and motivation 
by showing the impact of this work.



Cheers,
gibi

[1] https://etherpad.openstack.org/p/nova-ptg-pike
[2] 
https://blueprints.launchpad.net/nova/+spec/additional-notification-fields-for-searchlight
[3] 
http://lists.openstack.org/pipermail/openstack-dev/2017-February/112996.html

[4] https://review.openstack.org/#/c/428260
[5] https://etherpad.openstack.org/p/pike-nova-priorities-tracking




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Ansible OpenStack dnsmasq errors in syslog

2017-03-01 Thread Lawrence J. Albinson
In the process of diagnosing an Ansible OpenStack multi-node build problem, I 
came across the following dnsmasq-dhcp errors in syslog.

-- snip --
Feb 28 20:25:06 xh3 dnsmasq-dhcp[1956]: not giving name 
aio1-designate-container-f182826f to the DHCP lease of 10.255.255.226 because 
the name exists in /etc/hosts with address 172.29.238.255
Feb 28 20:25:06 xh3 dnsmasq-dhcp[1956]: not giving name 
aio1-swift-proxy-container-c298481d to the DHCP lease of 10.255.255.126 because 
the name exists in /etc/hosts with address 172.29.237.136
Feb 28 20:25:06 xh3 dnsmasq-dhcp[1956]: not giving name 
aio1-gnocchi-container-48fe5bde to the DHCP lease of 10.255.255.136 because the 
name exists in /etc/hosts with address 172.29.239.156
-- snip --

To simplify the situation, I built the All-in-One environment on a clean 
KVM-based Xenial 16.04.2 VM with a single non-DHCP NIC. The errors occur there 
too. This happens with both Ansible-OpenStack 15.0.0.0rc1 and 14.0.8.

Are these errors a sign of misconfiguration on my part? Or are they a sign of a 
real problem or just noise?

OpenStack itself would appear to be working.

Kindest regards, Lawrence
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] Single core review for patch approval

2017-03-01 Thread Beth Elwell
Has there been any consideration of growing the core team to help with review 
bandwidth? I ask only because that resulting responsibility to the community 
can drive additional review activity. Just worried that only 1x +2 could cause 
issues with code being merged on a project this large that could potentially 
break things or clash with other opinions or standards of how it should be 
written/implemented? It concerns me that it makes it easier to overlook larger 
things in more substantial patches. I guess as you say, there needs to be 
accountability re not always going for the single +2 when the patch is of that 
sort of size and you need a second opinion?

Beth

> On 28 Feb 2017, at 10:09, Rob Cresswell  wrote:
> 
> Hey everyone,
> 
> Horizon is moving to requiring only a single core review for code approval. 
> Note that cores are not obliged to approve on a single +2; if a core would 
> like a second opinion for patches that are complex or high risk, that is also 
> fine.
> 
> We still require at least one of the core reviewers or contributor on a patch 
> to be from separate companies however. For example, if a patch is authored by 
> someone from Cisco, then I could not (as a Cisco employee) +2+w the patch by 
> myself; it would require at least another core +2.
> 
> This should help us move smaller patches along quicker.
> 
> Rob
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [monasca] cassandra support in Monasca

2017-03-01 Thread witold.be...@est.fujitsu.com
Hi Pradeep,

HPE has investigated the use of Cassandra for Monasca and came to the 
conclusion that it is not a good choice. One of the problems is that Cassandra 
is not performant when querying across big data sets with many dimension 
key-value combinations. As KairosDB is based on Cassandra it suffers from the 
same problems.

An excerpt from the KairosDB documentation [1]:

“So if you have a million tag/value combinations, phase 1 will always take a 
few seconds to complete. Phase 2 could be fast if you filter by tags so that 
only a few partitions are read or it could be slow if you have to read the data 
from all one million partitions.

How do you know if what you are planning is going to be fast enough? From our 
experience if you have 10's of thousands of tag combinations you will be fine. 
Once you start getting towards a million you will be in for trouble.”
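To make the phase 1 / phase 2 split concrete: a Monasca dimension filter ends
up as a tag filter in a KairosDB query roughly like the one below (a sketch
against the KairosDB REST API; the endpoint, metric name and tags are
placeholders). Per the quote, it is phase 1, resolving the metric's tag/value
combinations, that carries the fixed multi-second cost once you approach a
million combinations; the tag filter only helps phase 2 decide which
partitions to read:

    import json
    import requests

    # Sketch of a tag-filtered KairosDB query; assumes an instance on
    # localhost:8080 and uses placeholder metric/tag names.
    query = {
        "start_relative": {"value": 1, "unit": "hours"},
        "metrics": [{
            "name": "cpu.utilization_perc",          # placeholder metric
            "tags": {"hostname": ["compute-01"]},    # phase 2 filter
            "aggregators": [{"name": "avg",
                             "sampling": {"value": 5, "unit": "minutes"}}],
        }],
    }

    resp = requests.post("http://localhost:8080/api/v1/datapoints/query",
                         data=json.dumps(query),
                         headers={"Content-Type": "application/json"})
    print(json.dumps(resp.json(), indent=2))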


The existing code for Cassandra is planned to be removed.


Greetings
Witek

[1] https://github.com/kairosdb/kairosdb/wiki/Query-Performance


From: Pradeep Singh [mailto:ps4openst...@gmail.com]
Sent: Wednesday, 1 March 2017 06:51
To: Shinya Kawabata
Cc: OpenStack Development Mailing List (not for usage questions); roland.hochm...@hpe.com
Subject: Re: [openstack-dev] [monasca] cassandra support in Monasca

Hello,

I have registered a BP[1] for that.
Please let me know if you have any concerns.

[1]https://blueprints.launchpad.net/monasca/+spec/kairosdb-support


Thanks,
Pradeep Singh

On Fri, Feb 24, 2017 at 2:20 PM, Shinya Kawabata wrote:
Hi Pradeep

Basic Cassandra support is already implemented, but there are some
performance problems that are difficult to fix.
So we are planning to switch to another DB and deprecate Cassandra.
KairosDB is one of the candidates but is not implemented yet.
KairosDB is built on Cassandra, so you might hit the same problems we had.

Regards
Shinya Kawabata

> -Original Message-
> From: Pradeep Singh [mailto:ps4openst...@gmail.com]
> Sent: Thursday, February 23, 2017 2:39 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Kawabata Shinya; santhosh.fernan...@gmail.com
> Subject: [openstack-dev][monasca] cassandra support in Monasca
>
> Hello,
>
> Could you please suggest status of Cassandra support in Monasca.
> Is there any plan to support  kairosdb?
> If it is not implemented then we can take kairosdb support work.
>
> Thanks,
> Pradeep Singh

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kuryr] 1st day recording for the Virtual Team Gathering

2017-03-01 Thread Antoni Segura Puimedon
Hi Kuryrs!

Thank you all for joining yesterday. For those unable to make it, here
are the recordings for the sessions:

https://youtu.be/Hdn9LOnCrSc

https://youtu.be/6D5iGEkKtGc

See you at today's sessions!

Toni

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]weekly meeting of Mar. 1

2017-03-01 Thread joehuang
Hello, team,

Before the new weekly meeting time is settled, we'll still hold the weekly 
meeting in the regular time slot: UTC 13:00 ~ UTC 14:00

Agenda of Mar. 1 weekly meeting:

  1.  Pike release schedule: https://releases.openstack.org/pike/schedule.html
  2.  Launch pad blueprint registration and spec
  3.  Pike features development
  4.  Open Discussion

How to join:
#  IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting on 
every Wednesday starting from UTC 13:00.


If you have other topics to be discussed in the weekly meeting, please reply 
to this mail.


Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron] Security Worries about Network RBAC

2017-03-01 Thread Adrian Turjak
Hello Kevin,

Thanks for the prompt response! This is fantastic. I'll throw a blueprint
together tomorrow.

Backwards compatibility is the biggest issue, as anyone currently using the
feature and assuming no approval step is going to be hit by it. The only
sensible solution I can see being easy to accomplish is to make the change a
config setting that a deployer has to turn on. Then, should someone want the
approval step, they can issue a deprecation warning and eventually make the
switch. Private clouds likely wouldn't turn the acceptance workflow on, but
for public clouds like us it would work. Deployers yet to expose the feature
can make the change as they open up the policy for the service.

Trying to fit both versions (no acceptance, required acceptance) together
would be a mess. Best to just offer the deployer the option of which
behaviour they want, and avoid the pain of trying to do both safely and
securely when they conflict.

The ability to limit sharing to the same project tree, or to a project you
have roles in, would be nice, but I fully agree that introducing a
dependency on Keystone here could be a pain. Made as another configuration
option it could possibly work. Neutron already has a keystone admin user,
and doing the required calls to keystone here wouldn't be too hard. Checking
for the same tree is an easy upwards traversal: once the root is reached,
compare the roots of both projects; sadly more than one API call, but not
too bad. User role checking is easy, just a single call to the role
assignments API. As for rechecking, I wouldn't bother: projects can't be
reparented, and while user roles can change, it is mostly safe to assume the
sharing was acceptable because they had a role originally, so no polling is
needed. My idea here was to do the checks and, if neither holds, require
acceptance.
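Something along these lines is what I have in mind; a rough, untested sketch
using python-keystoneclient, where the client setup, URLs and credentials are
all placeholders:

    # Rough sketch only: decide whether an RBAC share should require an
    # explicit acceptance step. Assumes an admin-scoped keystone session.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from keystoneclient.v3 import client as ks_client

    auth = v3.Password(auth_url='http://keystone:5000/v3',     # placeholder
                       username='neutron', password='secret',  # placeholder
                       project_name='service',
                       user_domain_id='default', project_domain_id='default')
    keystone = ks_client.Client(session=session.Session(auth=auth))


    def _tree_root(project_id):
        # Walk up the parent links; a top-level project's parent is its domain.
        project = keystone.projects.get(project_id)
        while project.parent_id and project.parent_id != project.domain_id:
            project = keystone.projects.get(project.parent_id)
        return project.id


    def needs_acceptance(sharing_user_id, source_project_id, target_project_id):
        # Same project tree: allow the share without an acceptance step.
        if _tree_root(source_project_id) == _tree_root(target_project_id):
            return False
        # The sharing user already has a role in the target project: also allow.
        if keystone.role_assignments.list(user=sharing_user_id,
                                          project=target_project_id):
            return False
        # Otherwise the target project has to accept the shared network.
        return True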
At any rate, even just an acceptance workflow would solve my core problem,
but I'll write up the proposal for the full plan, and we can redesign from
there!

Regards,
Adrian Turjak

On 1/03/2017 9:27 PM, Kevin Benton wrote:

Hi Adrian,

Thanks for the write-up.

I think adding an approval workflow to Neutron is a reasonable feature
request. It probably wouldn't be back-portable because it's going to require
an API change and a new column in the database for approval state, so you
would have to patch it in manually in your cloud (assuming you don't want to
wait for Pike).

The tricky part is going to be figuring out how to handle API backward
compatibility. Requiring an extra step before a project is allowed to use a
network shared to it would break existing automation that depends on the
current workflow. Please file a request for enhancement against Neutron [1]
and we can continue the discussion of how to implement this on the bug
report.

As for your option 2, the reason Neutron can't do something like that
automatically right now is due to a lack of strong Keystone integration.
Outside of the middleware that authenticates requests, Neutron doesn't even
know Keystone exists. We have no way to prevent changes on the Keystone side
that would violate the current RBAC policies (e.g. a user is using a network
that they wouldn't be able to use after a Keystone modification). We also
have no framework in place to even see Keystone alterations when they
happen, so it would require constant background polling.

[1] https://docs.openstack.org/developer/neutron/policies/blueprints.html#neutron-request-for-feature-enhancements

Cheers,
Kevin Benton

On Tue, Feb 28, 2017 at 7:43 PM, Adrian Turjak wrote:

Hello Openstack-Devs,

I'm just trying to find out if there is any proposed work to make the
network RBAC a bit safer.

For context, I'm part of a company running a public cloud and we would
like to expose Network RBAC to customers who have multiple projects so
that they can share networks between them. The problem is that the
network RBAC policy is too limited.

If you know a project id, you can share a network to that project id.
This allows you to name a network 'public' or 'default' and share it to
others in the hope that they connect to it, at which point you can
potentially compromise their instances. Effectively this allows honey-pot
networks. The only layer of safety is that an attacker first needs to glean
a project id before they can do this, which is security through obscurity,
a terrible approach.
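To make it concrete, this is roughly all it takes today to push a network at
another project, sketched here with python-neutronclient (the auth details
and IDs are placeholders, purely for illustration):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from neutronclient.v2_0 import client as neutron_client

    # Placeholder credentials for the project doing the sharing.
    auth = v3.Password(auth_url='http://keystone:5000/v3',
                       username='someuser', password='secret',
                       project_name='someproject',
                       user_domain_id='default', project_domain_id='default')
    neutron = neutron_client.Client(session=session.Session(auth=auth))

    # Share a network to an arbitrary project; no acceptance step follows.
    neutron.create_rbac_policy({'rbac_policy': {
        'object_type': 'network',
        'object_id': 'NETWORK_UUID',           # e.g. a network named 'public'
        'action': 'access_as_shared',
        'target_tenant': 'TARGET_PROJECT_ID',  # any project id you know
    }})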

Ideally network RBAC should work the same as image sharing in Glance.
You share a network, and the other project must accept it before it
becomes usable. This doesn't stop a too trusting party from accepting an
unsafe network, but it means they have some warning before doing
anything silly. Otherwise the onus is on them to always be vigilant for
shared networks that shouldn't be there, which is not exactly something
we want our customers to have to worry about.

Are there plans to implement some sort of acceptance process for network
RBAC in Neutron, or a willingness to? Other 
