[openstack-dev] [QA][gate][all] dsvm gate stability and scenario tests

2017-03-02 Thread Andrea Frittoli
Hello folks,

we have discussed gate stability issues a lot since the PTG; we need
a stable and reliable gate to ensure smooth progress in Pike.

One of the issues that stands out is that, most of the time, our test
VMs are under heavy load during test runs.
This may be the common cause behind several of the failures we've seen in the
gate, so we agreed during the QA meeting yesterday [0] that we're going to
try reducing the load and see whether that improves stability.

Next steps are:
- select a subset of scenario tests to be executed in the gate, based on
[1], and run them serially only
- the patch for this is [2] and we will approve it by the end of the day
- we will monitor stability for a week; if needed, we may reduce
concurrency a bit on API tests as well, and identify "heavy" tests that
are candidates for removal / refactoring
- the QA team won't approve any new tests (scenario or heavy
resource-consuming API) until gate stability is ensured
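The first step, splitting scenario tests out so they run serially while API tests keep their normal concurrency, could be sketched roughly like this (the regex and test names are illustrative, not the actual list from [1]):

```python
import re

# Scenario tests are identified by their module path; everything else
# (API tests) stays in the concurrent bucket.
SCENARIO_RE = re.compile(r'\btempest\.scenario\.')

def partition_tests(test_ids):
    """Return (serial_tests, concurrent_tests)."""
    serial = [t for t in test_ids if SCENARIO_RE.search(t)]
    concurrent = [t for t in test_ids if not SCENARIO_RE.search(t)]
    return serial, concurrent

tests = [
    'tempest.scenario.test_network_basic_ops.TestNetworkBasicOps.test_ping',
    'tempest.api.compute.servers.test_servers.ServersTest.test_create',
]
serial, concurrent = partition_tests(tests)
print(serial)      # scenario tests -> run with concurrency 1
print(concurrent)  # API tests -> run at normal concurrency
```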

Thanks for your patience and collaboration!

Andrea

---
irc: andreaf

[0] http://eavesdrop.openstack.org/meetings/qa/2017/qa.2017-03-02-17.00.txt
[1] https://ethercalc.openstack.org/nu56u2wrfb2b
[2] https://review.openstack.org/#/c/439698/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [gluon] Regarding multi-network and multi-tenant support

2017-03-02 Thread mohammad shahid
Hi Team,

I have a question regarding multi-network and multi-tenant support.
Currently Gluon supports only one network and one subnet, i.e. GluonNetwork
and GluonSubnet respectively; the values are hard-coded at
https://github.com/openstack/gluon/blob/aa7edbf878c64829ef2e028c8cd0e5bb36ea1d51/gluon/plugin/core.py
(lines 164 and 174).
Please let me know if you have plans to add multi-network functionality.



Thanks and Regards,
Mohammad Shahid


[openstack-dev] [glance] image import virtual meetup

2017-03-02 Thread Brian Rosmaita
As discussed in today's Glance meeting, we'll have a one-hour virtual
meetup next week to get things rolling.  Please indicate your
availability on this doodle poll:

http://doodle.com/poll/eh2mk4fdwgd4sv7b

If none of those time options work, reply to this email with a
counter-proposal.  Please respond before the end of business on Friday 3
March in your time zone.

thanks,
brian



[openstack-dev] [cinder] Create backup with snapshots

2017-03-02 Thread 王玺源
Hi cinder team:
We recently ran into a problem with backup creation.

A backup can be created from a volume or from a snapshot. In both cases, the
volume's status is set to 'backing-up'.

But as far as I know, when users create a backup from a snapshot, the volume
is not used (correct me if I'm wrong). So why is the volume's status changed?
Shouldn't it stay 'available'? It's a little strange that the volume is
"backing-up" when actually only the snapshot is used for backup creation.
A volume in "backing-up" can't be used for some other actions, such as:
attach, delete, export to image, extend, create volume from volume,
create backup from volume, and so on.

So is there any reason we change the volume's status here? Or does any
third-party driver require the volume's status to be "backing-up" when
creating a backup from a snapshot?
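A rough sketch of the behaviour the question argues for, keeping the volume usable when only a snapshot is read; the function and status names below are hypothetical, not cinder's actual code:

```python
# Hypothetical status decision: only flip the volume to 'backing-up'
# when the volume itself is the backup source.
def status_during_backup(source_is_snapshot):
    """Status the source volume could keep while a backup is created."""
    if source_is_snapshot:
        # Only the snapshot is read; the volume could stay usable.
        return 'available'
    return 'backing-up'

print(status_during_backup(source_is_snapshot=True))   # available
print(status_during_backup(source_is_snapshot=False))  # backing-up
```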

Thanks!


Re: [openstack-dev] [deployment][TripleO][kolla][ansible][fuel] Next steps for cross project collaboration

2017-03-02 Thread Emilien Macchi
On Mon, Feb 27, 2017 at 11:20 AM, Emilien Macchi  wrote:
> On Mon, Feb 27, 2017 at 11:02 AM, Steven Hardy  wrote:
>> Hi all,
>>
>> Over the recent PTG, and previously at the design summit in Barcelona,
>> we've had some productive cross-project discussions amongst the various
>> deployment teams.
>>
>> It's clear that we share many common problems, such as patterns for major
>> version upgrades (even if the workflow isn't identical we've all duplicated
>> effort e.g around basic nova upgrade workflow recently), container images
>> and other common building blocks for configuration management.
>>
>> Here's a non-exhaustive list of sessions where we had some good
>> cross-project discussion, and agreed a number of common problems where
>> collaboration may be possible:
>>
>> https://etherpad.openstack.org/p/ansible-config-mgt
>
> first action: EmilienM + bnemec to write a spec in oslo.config that
> aims to generate a file (YAML or JSON) with all parameters.

done (thanks Ben): https://review.openstack.org/#/c/440835

Please review it and give any feedback.
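The idea behind the spec can be sketched without oslo.config itself: walk a set of registered options and emit them in a machine-readable form. The option metadata below is hypothetical; the real spec builds on oslo.config's option registry.

```python
import json

# Hypothetical registry of options; oslo.config would supply these
# from the options each service actually registers.
options = [
    {'name': 'debug', 'type': 'boolean', 'default': False,
     'help': 'Enable debug logging.'},
    {'name': 'bind_port', 'type': 'integer', 'default': 9292,
     'help': 'Port on which the service listens.'},
]

def dump_options(opts):
    """Emit all parameters as machine-readable JSON, keyed by name."""
    return json.dumps({'options': {o['name']: o for o in opts}}, indent=2)

print(dump_options(options))
```

A YAML emitter would be the same walk with a different serializer.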

>> https://etherpad.openstack.org/p/tripleo-kolla-kubernetes
>>
>> https://etherpad.openstack.org/p/kolla-pike-ptg-images
>>
>> https://etherpad.openstack.org/p/fuel-ocata-fuel-tripleo-integration
>>
>> If there is interest in continuing the discussions on a more regular basis,
>> I'd like to propose we start a cross-project working group:
>>
>> https://wiki.openstack.org/wiki/Category:Working_Groups
>>
>> If I go ahead and do this, is "deployment" a sufficiently project-neutral
>> term to proceed with?
>
> Yes, +1 for Deployment WG. It's pretty clear that we saw more interest
> than before at the last PTG in Atlanta. It's time to do concrete
> things :-)
>
>> I'd suggest we start with an informal WG, which it seems just requires an
>> update to the wiki, i.e. no need for any formal project team at this point?
>>
>> Likewise I know some folks have expressed an interest in an IRC channel
>> (openstack-deployment?), I'm happy to start with the ML but open to IRC
>> also if someone is willing to set up the channel.
>
> +1 for IRC channel.
>
>> Perhaps we can start by using the tag "deployment" in all cross-project ML
>> traffic, then potentially discuss IRC (or even regular meetings) if it
>> becomes apparent these would add value beyond ML discussion?
>
> +1 too.
>
>> Please follow up here if anyone has other/better ideas on how to facilitate
>> ongoing cross-team discussion and I'll do my best to help move things
>> forward.
>
> Thanks for kicking it off!
>
>> Thanks!
>>
>> Steve
>>
>
>
>
> --
> Emilien Macchi



-- 
Emilien Macchi



Re: [openstack-dev] [tripleo][diskimage-builder] Status of diskimage-builder

2017-03-02 Thread Matthew Thode
On 03/02/2017 03:31 PM, Emilien Macchi wrote:
>>> 1) Move diskimage-builder into own (big tent?) project. Setup a new PTL,
>>> etc.
> Let's move forward with this one if everybody agrees on that.
> 
> DIB folks: please confirm on this thread that you're ok to move out
> DIB from TripleO and be an independent project.
> Also please decide if we want it part of the Big Tent or not (it will
> require a PTL).
> 
>>> 2) Move diskimage-builder into openstack-infra (fungi PTL).
> I don't think Infra wants to carry this one.
> 
>>> 3) Keep diskimage-builder under tripleo (EmilienM PTL).
> We don't want to carry this one anymore for the reasons mentioned in
> that thread.
> 

As a sometimes contributor to DIB for Gentoo stuff, I'm fine with moving
it out into its own project under the big tent, with a PTL and all.

-- 
Matthew Thode (prometheanfire)





Re: [openstack-dev] [Neutron] Security Worries about Network RBAC

2017-03-02 Thread Adrian Turjak
Bug/RFE is up!

https://bugs.launchpad.net/neutron/+bug/1669630

Hopefully that sums up what I'm ideally after well enough, and is useful
to the greater community and project as a whole.

Cheers,
Adrian Turjak

On 01/03/17 22:00, Adrian Turjak wrote:
> Hello Kevin,
>
> Thanks for the prompt response! This is fantastic. I'll throw up a
> blueprint together tomorrow.
>
> Backwards compatibility is the biggest issue, as anyone currently
> using the feature and assuming no approval step is going to be hit by
> it. The only sensible solution I can see being easy to accomplish is
> to make the change as a config setting that a deployer has to turn on.
> Then should someone want the approval step, they can issue a
> deprecation warning and eventually make the switch. Private clouds
> likely wouldn't turn the acceptance workflow on, but for public clouds
> like ours it would work.
>
> Also for deployers yet to expose the feature, they can make the change
> as they open the policy for the service.
>
> Trying to fit both versions (no acceptance, required acceptance)
> together would be a mess. Best to just offer the option as which is
> wanted to the deployer and avoid the pain of trying to safely and
> securely do both when they conflict.
>
> The ability to limit it to the same project tree or a project you have
> roles in would be nice, but I fully agree that trying to introduce a
> connection here to Keystone could be a pain. If made as a another
> configuration option it could work possibly. Neutron already has a
> keystone admin user, and doing the required calls to keystone here
> wouldn't be too hard. Checking for the same tree is an easy upwards
> traversal: once the root is reached, compare the roots of both projects.
> Sadly that's more than one API call, but not too bad. User role checking is easy,
> and just a single call to the role assignments API. As for rechecking,
> I wouldn't bother. Projects can't be reparented, and while user roles
> can change it is mostly safe to assume that this network sharing was
> safe due to them having a role originally. No polling needed. My idea
> here was to do the checks, and if neither was true, then require
> acceptance.
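The same-tree check described above (upward traversal, then comparing roots) could be sketched as follows; the parent map stands in for Keystone's project API and is purely hypothetical:

```python
# Hypothetical project model: a mapping of child project id -> parent
# project id, with None marking a tree root. Keystone would supply this
# via its projects API instead.
def find_root(project_id, parents):
    """Follow parent pointers until a project with no parent is reached."""
    current = project_id
    while parents.get(current) is not None:
        current = parents[current]
    return current

def same_tree(a, b, parents):
    """Two projects share a tree iff they have the same root."""
    return find_root(a, parents) == find_root(b, parents)

parents = {'root': None, 'dev': 'root', 'qa': 'root', 'other': None}
print(same_tree('dev', 'qa', parents))     # True
print(same_tree('dev', 'other', parents))  # False
```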
>
> At any rate, even just an acceptance workflow would solve my core
> problem, but I'll write up the proposal for the full plan, and we can
> redesign from there!
>
> Regards,
> Adrian Turjak
>
>
> On 1/03/2017 9:27 PM, Kevin Benton  wrote:
>
> Hi Adrian,
>
> Thanks for the write-up.
>
> I think adding an approval workflow to Neutron is a reasonable
> feature request. It probably wouldn't be back-portable because
> it's going to require an API change and a new column in the
> database for approval state so you would have to patch it in
> manually in your cloud (assuming you don't want to wait for Pike).
>
> The tricky part is going to be figuring out how to handle API
> backward compatibility. Requiring an extra step before a project
> is allowed to use a network shared to it would break existing
> automation that depends on the current workflow. 
>
>
> Please file an request for enhancement against Neutron[1] and we
> can continue the discussion of how to implement this on the bug
> report.
>
>
> As for your option 2, the reason Neutron can't do something like
> that automatically right now is due to a lack of strong Keystone
> integration. Outside of the middleware that authenticates
> requests, Neutron doesn't even know keystone exists. We have no
> way to prevent changes on the keystone side that would violate the
> current RBAC policies (e.g. a user is using a network that they
> wouldn't be able to use after a keystone modification). We also
> have no framework in place to even see keystone alterations when
> they happen so it would require constant background polling.
>
>
> 1. 
> https://docs.openstack.org/developer/neutron/policies/blueprints.html#neutron-request-for-feature-enhancements
>
>
> Cheers,
> Kevin Benton
>
> On Tue, Feb 28, 2017 at 7:43 PM, Adrian Turjak wrote:
>
> Hello Openstack-Devs,
>
> I'm just trying to find out if there is any proposed work to make the
> network RBAC a bit safer.
>
> For context, I'm part of a company running a public cloud and we would
> like to expose Network RBAC to customers who have multiple projects so
> that they can share networks between them. The problem is that the
> network RBAC policy is too limited.
>
> If you know the project id, you can share a network to that project id.
> This allows you to name a network 'public' or 'default' and share it to
> others in hopes of them connecting to it where you then potentially
> compromise their instances. Effectively this allows 

Re: [openstack-dev] [all] Cloud-init VM instance not coming up in a multi-node DevStack environment

2017-03-02 Thread SUZUKI, Kazuhiro
Hi Anil,

I hit the same issue in my local DevStack environment.
Do you use a proxy when you execute the stack.sh script?
If so, please try setting the no_proxy parameter.

See also:
https://answers.launchpad.net/devstack/+question/245480
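A quick, hedged way to sanity-check the no_proxy setting from Python before running stack.sh; this uses only the standard library, and 169.254.169.254 is the metadata IP the failing instance is polling:

```python
import os
import urllib.request

# Exclude the metadata IP (and local addresses) from proxying, as the
# linked answer recommends before running stack.sh.
os.environ['no_proxy'] = '169.254.169.254,localhost,127.0.0.1'

# proxy_bypass_environment consults the no_proxy environment variable
# and reports whether a host would skip the configured proxy.
bypassed = urllib.request.proxy_bypass_environment('169.254.169.254')
print(bool(bypassed))  # True: metadata requests will skip the proxy
```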

Regards,
Kaz


From: Anil Rao 
Subject: [openstack-dev] [all] Cloud-init VM instance not coming up in a
multi-node DevStack environment
Date: Thu, 2 Mar 2017 03:27:33 +

> Hi,
> 
> I recently created a multi-node DevStack environment (based on stable/ocata) 
> made up of the following nodes:
> 
> 
> -1 Controller Node
> 
> -1 Network Node
> 
> -2 Compute Nodes
> 
> All VM instances are only deployed on the 2 compute nodes. Neutron network 
> services are provided by the network node.
> 
> I am able to create VMs and have them communicate with each other and also 
> with external (outside the DevStack environment) endpoints.
> 
> However, I find that I am unable to successfully deploy a VM instance that is 
> based on cloud-init. As the following console log snippet shows, cloud-init 
> running inside a VM instance is unable to get the necessary meta-data and 
> hangs during VM instance startup.
> 
> 
> [   91.811361] cloud-init[977]: ci-info: +++Net device info+++
> [   91.825274] cloud-init[977]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope |     Hw-Address    |
> [   91.839288] cloud-init[977]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |   .   |         .         |
> [   91.857827] cloud-init[977]: ci-info: |   lo   | True |           ::1/128            |       .       |  host |         .         |
> [   91.864806] cloud-init[977]: ci-info: |  eth0  | True |         192.168.1.10         | 255.255.255.0 |   .   | fa:16:3e:cf:a8:d8 |
> [   91.871433] cloud-init[977]: ci-info: |  eth0  | True | fe80::f816:3eff:fecf:a8d8/64 |       .       |  link | fa:16:3e:cf:a8:d8 |
> [   91.896344] cloud-init[977]: ci-info: +++Route IPv4 info+++
> [   91.912523] cloud-init[977]: ci-info: | Route |   Destination   |   Gateway   |     Genmask     | Interface | Flags |
> [   91.936482] cloud-init[977]: ci-info: |   0   |     0.0.0.0     | 192.168.1.1 |     0.0.0.0     |    eth0   |   UG  |
> [   91.942651] cloud-init[977]: ci-info: |   1   | 169.254.169.254 | 192.168.1.1 | 255.255.255.255 |    eth0   |  UGH  |
> [   91.948624] cloud-init[977]: ci-info: |   2   |   192.168.1.0   |   0.0.0.0   |  255.255.255.0  |    eth0   |   U   |
> [   91.961102] cloud-init[977]: 2017-03-01 17:46:38,723 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]: bad status code [404]
> [   92.997374] cloud-init[977]: 2017-03-01 17:46:39,917 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [1/120s]: bad status code [404]
> [   94.320985] cloud-init[977]: 2017-03-01 17:46:41,240 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [2/120s]: bad status code [404]
> [   95.480615] cloud-init[977]: 2017-03-01 17:46:42,400 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [3/120s]: bad status code [404]
> ...
> [  118.589843] cloud-init[977]: 2017-03-01 17:47:05,509 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [27/120s]: bad status code [404]
> [  121.796946] cloud-init[977]: 2017-03-01 17:47:08,716 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [30/120s]: bad status code [404]
> [  124.918111] cloud-init[977]: 2017-03-01 17:47:11,837 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [33/120s]: bad status code [404]
> [  129.195778] cloud-init[977]: 2017-03-01 17:47:16,115 - url_helper.py[WARNING]: Calling

[openstack-dev] [release] Ocata Release countdown for R+2 Week, 6-10 March

2017-03-02 Thread Doug Hellmann
That's right, one more countdown email for Ocata.

Focus
-----

The release team will be ready to tag the final releases for all
cycle-trailing projects on 8 March.

Release Tasks
-------------

Liaisons for cycle-trailing projects should prepare their final
release candidate tags by Monday 6 March. The release team will
prepare a patch showing the final release versions on Wednesday 7
March, and PTLs and liaisons for affected projects should +1. We
will then approve the final releases on Thursday 8 March.

If your project repositories are not already branched, please include
those instructions with the final release candidate instructions by
Monday.

General Notes
-------------

The members of the release team will be traveling to the joint
Board/TC/UC meeting next week, so we may be less responsive than
usual.

Important Dates
---------------

Ocata Trailing Release: 8 March

Ocata release schedule: http://releases.openstack.org/ocata/schedule.html



Re: [openstack-dev] Saga of service discovery (is it needed?)

2017-03-02 Thread Joshua Harlow

Julien Danjou wrote:
> On Thu, Mar 02 2017, Joshua Harlow wrote:
>
>> So before I start to go much farther on this (and start to dive into what
>> people are doing and why the various implementations exist and proposing a
>> cross-project solution, tooz, or etcd, or zookeeper or other...) I wanted to
>> get general feedback (especially from the projects that have these kinds of
>> implementations) if this is a worthwhile path to even try to go down.
>
> IIUC we use such a mechanism in a few Telemetry projects to share
> information between agents.
>
> For example, Gnocchi metricd workers talk together and provide each
> other their numbers of CPUs so they can fairly distribute the number of
> jobs they should take care of. They use that same mechanism to know
> if/how every agent is alive.
>
> We've been using that technology for 3 years now, and we named it
> "tooz". You may have heard of it. ;-)
>
> Cheers,


What's that, ha.

But yeah, one of the outcomes is tooz; but I'm not even sure information
is being shared about the duplication that is happening (projects seem
to silo themselves with regards to stuff like this).

So I guess one of the first goals would be to undo that siloing, though
if nobody from the projects cares about doing things in the same way
using the same toolkit, then I'd say we are more screwed than I thought,
lol.


-Josh



Re: [openstack-dev] [docs][release][ptl] Adding docs to the release schedule

2017-03-02 Thread Brian Rosmaita
On 3/2/17 9:45 AM, Ian Cordasco wrote:
> -Original Message-
> From: Telles Nobrega 
> Reply: OpenStack Development Mailing List (not for usage questions)
> 
> Date: March 2, 2017 at 08:01:29
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Cc: openstack-d...@lists.openstack.org 
> Subject:  Re: [openstack-dev] [docs][release][ptl] Adding docs to the
> release schedule
> 
>> I really believe that this idea will make us work harder on keeping our
>> docs in place and will result in a better-documented product by release
>> date.
>> As shared before, I do believe that this isn't easy and will demand a
>> lot of effort from some teams, especially smaller teams with too much to do,
>> but we from Sahara are on board with this approach and will try our best to
>> do so.
> 
> Most things worth doing are difficult. =) This seems to be one of
> them. If deliverable teams really work together, they may end up in a
> situation like Glance did this cycle where we kind of just sat on our
> hands after RC-1 was tagged. That time *could* have been better spent
> reviewing all of our documentation.

At the risk of sounding defensive here, what exactly are you referring
to?  The api-ref [0], the dev docs in the glance repo [1], and the api
docs in the glance-specs repo [2] all had patches up for review after RC-1.

[0] https://review.openstack.org/#/c/426603/
[1] https://review.openstack.org/#/c/429341/
[2] https://review.openstack.org/#/c/426605/

> I know projects go sideways all the time. I think this was Glance's
> first cycle like this in a few cycles. But if we can make a habit of
> creating excellent release candidates, we can spend the intermediate
> time on documentation. I think that's a good compromise.
> 
> --
> Ian Cordasco
> 
> 




[openstack-dev] [nova] Freeze dates for Pike

2017-03-02 Thread Matt Riedemann
I mentioned this in the nova meeting today [1] but wanted to post to the 
ML for feedback.


We didn't talk about spec or feature freeze dates at the PTG. The Pike 
release schedule is [2].


Spec freeze
-----------

In Newton and Ocata we had spec freeze on the first milestone.

I'm proposing that we do the same thing for Pike. The first milestone 
for Pike is April 13th which gives us about 6 weeks to go through the 
specs we're going to approve. A rough look at the open specs in Gerrit 
shows we have about 125 proposed and some of those are going to be 
re-approvals from previous releases. We already have 16 blueprints 
approved for Pike. Keep in mind that in Newton we had ~100 approved 
blueprints by the freeze and completed or partially completed 64.


Feature freeze
--------------

In Newton we had a non-priority feature freeze between n-1 and n-2. In 
Ocata we just had the feature freeze at o-3 for everything because of 
the short schedule.


We have fewer core reviewers so I personally don't want to cut off the 
majority of blueprints too early in the cycle so I'm proposing that we 
do like in Ocata and just follow the feature freeze on the p-3 milestone 
which is July 27th.


We will still have priority review items for the release and when push 
comes to shove those will get priority over other review items, but I 
don't think it's helpful to cut off non-priority blueprints before p-3. 
I thought there was a fair amount of non-priority blueprint code that 
landed in Ocata when we didn't cut it off early. Referring back to the 
Ocata blueprint burndown [3] most everything was completed between the 
2nd milestone and feature freeze.


--

Does anyone have an issue with this plan? If not, I'll update [4] with 
the nova-specific dates.


[1] 
http://eavesdrop.openstack.org/meetings/nova/2017/nova.2017-03-02-21.00.log.html#l-119

[2] https://wiki.openstack.org/wiki/Nova/Pike_Release_Schedule
[3] 
http://lists.openstack.org/pipermail/openstack-dev/2017-February/111639.html

[4] https://releases.openstack.org/pike/schedule.html

--

Thanks,

Matt Riedemann



Re: [openstack-dev] Saga of service discovery (is it needed?)

2017-03-02 Thread Julien Danjou
On Thu, Mar 02 2017, Joshua Harlow wrote:

> So before I start to go much farther on this (and start to dive into what
> people are doing and why the various implementations exist and proposing a
> cross-project solution, tooz, or etcd, or zookeeper or other...) I wanted to
> get general feedback (especially from the projects that have these kinds of
> implementations) if this is a worthwhile path to even try to go down.

IIUC we use such a mechanism in a few Telemetry projects to share
information between agents.

For example, Gnocchi metricd workers talk together and provide each
other their numbers of CPUs so they can fairly distribute the number of
jobs they should take care of. They use that same mechanism to know
if/how every agent is alive.

We've been using that technology for 3 years now, and we named it
"tooz". You may have heard of it. ;-)
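The pattern described above can be sketched without tooz itself: every worker sees the same member list from the coordination service and independently computes which jobs it owns, here via plain rendezvous hashing. Gnocchi additionally weights the split by CPU counts; the member names and jobs below are made up.

```python
import hashlib

members = ['worker-a', 'worker-b', 'worker-c']  # from group membership

def owner(job_id, members):
    """Deterministically map a job to one member of the group."""
    def score(worker):
        # Hash the (worker, job) pair; the highest score wins, so every
        # worker computes the same assignment with no extra messages.
        digest = hashlib.sha256(f'{worker}:{job_id}'.encode()).hexdigest()
        return int(digest, 16)
    return max(members, key=score)

assignment = {job: owner(job, members) for job in ('m1', 'm2', 'm3', 'm4')}
print(assignment)  # each worker computes this same mapping locally
```

When a member joins or leaves, only the jobs mapped to it move, which is why this style of hashing suits liveness-driven rebalancing.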

Cheers,
-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info




Re: [openstack-dev] [kolla][infra] does anyone care about Jenkins? I stopped.

2017-03-02 Thread Marcin Juszkiewicz
On 02.03.2017 at 20:19, Joshua Harlow wrote:
>> 1. Kolla output is nightmare to debug.
>>
>> There is --logs-dir option to provide separate logs for each image build
>> but it is not used. IMHO it should be as digging through such logs is
>> easier.
>>
> 
> I too find the kolla output a bit painful, and I'm willing to help
> improve it. What do you think would be a better approach (that we can
> try to get to)?

Once I discovered the --logs-dir option I stopped caring about the normal
kolla output. If the Jenkins jobs could be changed to make use of it and
to provide those logs, it would make more than just me happy.
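What --logs-dir buys you can be sketched in a few lines of Python: one log file per image build instead of a single interleaved stream. The image names below are illustrative.

```python
import logging
import os
import tempfile

def build_logger(logs_dir, image):
    """Create a logger that writes one file per image build."""
    logger = logging.getLogger(image)
    logger.setLevel(logging.INFO)
    handler = logging.FileHandler(os.path.join(logs_dir, f'{image}.log'))
    handler.setFormatter(logging.Formatter('%(asctime)s %(message)s'))
    logger.addHandler(handler)
    return logger

logs_dir = tempfile.mkdtemp()
for image in ('base', 'nova-compute'):
    log = build_logger(logs_dir, image)
    log.info('build started')
    log.info('build finished')

print(sorted(os.listdir(logs_dir)))  # ['base.log', 'nova-compute.log']
```

Debugging a failed image then means opening exactly one file rather than grepping an interleaved console stream.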



Re: [openstack-dev] [nova][telemetry] gate breakage

2017-03-02 Thread gordon chung


On 02/03/17 03:19 PM, Matt Riedemann wrote:
> This is the fix:
>
> https://review.openstack.org/#/c/440739/

yay! thanks for the help!

-- 
gord



[openstack-dev] [neutron] - no drivers meeting today and RFEs

2017-03-02 Thread Kevin Benton
Hi,

I'm canceling the drivers meeting today due to a last minute conflict for
me.

If you have a feature you want for Pike, try to get the RFE filed soon
otherwise you will risk the chance of us being over-committed for Pike if
you submit it later in the cycle.

Cheers,
Kevin Benton


Re: [openstack-dev] [tripleo][diskimage-builder] Status of diskimage-builder

2017-03-02 Thread Emilien Macchi
On Thu, Jan 12, 2017 at 3:06 PM, Yolanda Robla Mota  wrote:
> From my point of view, I've been using it both on infra (with
> puppet-infracloud, glean..) and now with TripleO. So in my opinion, it should
> be an independent project, with core contributors from both sides.
>
> On Thu, Jan 12, 2017 at 8:51 PM, Paul Belanger 
> wrote:
>>
>> On Thu, Jan 12, 2017 at 02:11:42PM -0500, James Slagle wrote:
>> > On Thu, Jan 12, 2017 at 1:55 PM, Emilien Macchi 
>> > wrote:
>> > > On Thu, Jan 12, 2017 at 12:06 PM, Paul Belanger
>> > >  wrote:
>> > >> Greetings,
>> > >>
>> > >> With the containerization[1] of tripleo, I'd like to know more about
>> > >> the future of
>> > >> diskimage-builder as it relates to the tripleo project.
>> > >>
>> > >> Reading the recently approved spec for containers, container (image)
>> > >> builds are
>> > >> no longer under the control of tripleo; by kolla. Where does this
>> > >> leave
>> > >> diskimage-builder as a project under tripleo?  I specifically ask,
>> > >> because I'm
>> > >> wanting to start down the path of using diskimage-builder as an
>> > >> interface to
>> > >> containers.
>> > >>
>> > >> Basically, is it time to move diskimage-builder out from the tripleo
>> > >> project
>> > >> into another, or its own? Or is tripleo wanting to more forward on
>> > >> development
>> > >> efforts on diskimage-builder.
>> > >
>> > > Looking at stats on who is actively contributing into DIB:
>> > > http://stackalytics.com/?module=diskimage-builder
>> > >
>> > > It seems that we have some folks from infra and some folks on dib
>> > > only, and a few contributors from TripleO.
>> > >
>> > > I guess the best option is to ask DIB contributors: do you want to own
>> > > the project you're committing to?
>> > > If not, is it something that should stay in TripleO (imo no) or move
>> > > into openstack-infra (imo yes, if infra agrees).
>> > >
>> > > With my PTL hat, I'm really open to this thing and I wouldn't mind to
>> > > transfer ownership to another group.
>> >
>> > I was under the impression it was already its own project team based
>> > on:
>> > http://lists.openstack.org/pipermail/openstack-dev/2016-July/099805.html
>> >
>> > It looks like the change was never made in governance however.
>> >
>> Yes, it just looks like Greg created new core reviewers, not officially
>> breaking away from tripleo.
>>
>> If everybody is on board with moving diskimage-builder outside of tripleo,
>> we need to decide where it lives. Two options come to mind:
>>
>> 1) Move diskimage-builder into own (big tent?) project. Setup a new PTL,
>> etc.

Let's move forward with this one if everybody agrees on that.

DIB folks: please confirm on this thread that you're ok to move out
DIB from TripleO and be an independent project.
Also please decide if we want it part of the Big Tent or not (it will
require a PTL).

>> 2) Move diskimage-builder into openstack-infra (fungi PTL).

I don't think Infra wants to carry this one.

>> 3) Keep diskimage-builder under tripleo (EmilienM PTL).

We don't want to carry this one anymore for the reasons mentioned in
that thread.

>> Thoughts?
>>
>> > The reason I -1'd Paul's TripleO spec and suggested it be proposed to
>> > diskimage-builder was due to:
>> > http://lists.openstack.org/pipermail/openstack-dev/2016-June/098560.html
>> > and
>> > https://review.openstack.org/#/c/336109/
>> >
>> > I just want to make sure the right set of reviewers who are driving
>> > dib development see the spec proposal.
>> >
>> > --
>> > -- James Slagle
>> > --
>> >
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
>
>
> --
> Yolanda Robla Mota
> NFV Partner Engineer
> yrobl...@redhat.com
> +34 605641639
>
>



-- 
Emilien Macchi



[openstack-dev] [all][ptl][goals] Community Goals for Pike

2017-03-02 Thread Emilien Macchi
Greetings,

During the PTG we discussed about the two goals that we picked for Pike:
https://governance.openstack.org/tc/goals/pike/index.html

Like we did during Ocata cycle, PTLs have to update the governance
repository with the artifacts planned or done in their projects.

Some guidance for PTLs who aren't familiar with this process:

1. If your project doesn't need to do anything to meet a Goal, you
still need to update the Governance repo. See:
https://review.openstack.org/#/c/398460/1/goals/ocata/remove-incubated-oslo-code.rst

Acknowledgement is *required* for all projects part of the Big Tent.

2. If your project needs to produce some artifacts to meet a Goal,
please update the Governance repo. See:
https://review.openstack.org/#/c/394056/1/goals/ocata/remove-incubated-oslo-code.rst

Please use the Gerrit topics to make Goals review easier:
goal-deploy-api-in-wsgi
goal-python35

It would be great to document all artifacts by the end of Pike-1 (both
Planned & Completed), so it gives an overview of where we are between
milestones.
If you have any questions about Community Goals, feel free to
reach me on #openstack-dev (freenode), my nickname is EmilienM; or my
email.

Thanks,
-- 
Emilien Macchi



Re: [openstack-dev] [nova][telemetry] gate breakage

2017-03-02 Thread Matt Riedemann

On 3/2/2017 10:26 AM, Matt Riedemann wrote:


Tracking this under bug:

https://bugs.launchpad.net/nova/+bug/1669473

The details are in the bug. We've figured out the root cause and I've
got a workaround patch up in nova and Mehdi has a workaround patch up in
devstack, and we're testing the nova workaround here:

https://review.openstack.org/#/c/440657/



This is the fix:

https://review.openstack.org/#/c/440739/

--

Thanks,

Matt Riedemann



[openstack-dev] [tc][swg][all] Leadership training sign-up is open!

2017-03-02 Thread Colette Alexander
Hello Stackers!

We're opening up the sign-up sheet for leadership training beyond the TC,
Board and Foundation staff to the whole community.

What: Leadership Training, customized for members of the OpenStack Community
When: April 11/12/13
Where: ZingTrain, in Ann Arbor, Michigan.

If you can confirm your ability to attend, please sign up here:
https://etherpad.openstack.org/p/Leadershiptraining

There are a lot of details on that etherpad about timing, place,
recommended locations to stay, etc. You can also scroll down to read a
sample itinerary of subjects covered in the training. If you have any
questions at all, feel free to ask questions in this thread, or in
#openstack-swg as many folks who lurk in that channel have attended.

Some quick notes:

   - This is the exact same training that was done last year
   - We're planning on having 1-2 people attend who also attended training
   last year to give some context and continuity to the discussions.
   - The training costs are fully funded by the Foundation, and attendees
   only need to cover the cost of travel, lodging, and some meals (breakfast
   and lunch during training is provided).
   - We're capping attendance at 20 people.
   - I'd like to finalize our list by March 24th, so please start the work
   of getting travel approvals, etc., now.

If there are any past attendees who'd like to chime in in response to this
to discuss your experiences with training, please do! Please feel free to
ping me on IRC (I'm gothicmindfood) if you have any other questions.

Thanks so much,

-colette/gothicmindfood


Re: [openstack-dev] [Horizon][stable][requirements] Modifying global-requirements to cap xstatic package versions

2017-03-02 Thread Matthew Thode
On 03/02/2017 12:57 PM, Doug Hellmann wrote:
> Right, friends don't let friends cap dependencies.
> 
> Let's work on getting constraints rolled out where needed instead.

This is the basic response I have to this.  More specifically it can
cause more churn in consuming projects, even if it's done perfectly.  If
it's not done perfectly we have to deal with untangling the requirements
for a security update (or the like).

-- 
Matthew Thode (prometheanfire)





[openstack-dev] [horizon] [keystone] No Keystone-Horizon cross project meeting today

2017-03-02 Thread Rob Cresswell
Hey everyone,

Sorry for the late notice, but there will be no Horizon-Keystone cross project 
meeting this week, as we've little to discuss with the PTG so recent. The 
meeting will resume as normal next week.

For those interested in joining, see 
http://eavesdrop.openstack.org/#Keystone/Horizon_Collaboration_Meeting

Rob


Re: [openstack-dev] [kolla][infra] does someone cares about Jenkins? I stopped.

2017-03-02 Thread Joshua Harlow

Marcin Juszkiewicz wrote:

I am working on some improvements for Kolla. Part of that work is
sending patches for review.

Once patch is set for review.openstack.org there is a set of Jenkins
jobs started to make sure that patch does not break already working
code. And this is good thing.

How it is done is not good ;(

1. Kolla output is nightmare to debug.

There is --logs-dir option to provide separate logs for each image build
but it is not used. IMHO it should be as digging through such logs is
easier.



I too find the kolla output a bit painful, and I'm willing to help 
improve it. What do you think would be a better approach (that we can 
try to get to)?


-Josh



[openstack-dev] Saga of service discovery (is it needed?)

2017-03-02 Thread Joshua Harlow
So this is the general start of a large discussion, similar to the 
other one I started [1], and this time around I wanted to begin on 
the mailing list instead of taking a spec-first approach.


The general question is around something I keep on noticing popping up 
in various projects and worry about what we as a community are doing 
with regard to diversity of implementation (and what can we do to make 
this better).


What seems to be being recreated (in various forms) is that multiple 
projects seem to have a need to store ephemeral data about some sort of 
agent (ie nova and its compute node agents, neutron and its agents, 
octavia and its amphora agents...) in a place that can also track the 
liveness (whether that component is up/down/dead/alive) of that agents 
(in general we can replace agent with service and call this service 
discovery).


It appears (correct me if I am wrong) the amount of ephemeral metadata 
associated with this agent typically seems to be somewhat minimal, and 
any non-ephemeral agent data should be stored somewhere else (ie a 
database).


I think it'd be great from a developer point of view to start to address 
this, and by doing so we can make operations of these various projects 
and their agents that much easier (it is a real PITA when each project 
does this differently, it makes cloud maintenance procedures that much 
more painful, because typically while doing maintenance you need to 
ensure these agents are turned off, having X different ways to do this 
with Y different side-effects makes this ummm, not enjoyable).


So before I start to go much farther on this (and start to dive into 
what people are doing and why the various implementations exist and 
proposing a cross-project solution, tooz, or etcd, or zookeeper or 
other...) I wanted to get general feedback (especially from the projects 
that have these kinds of implementations) if this is a worthwhile path 
to even try to go down.
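For what it's worth, the common core being reimplemented is small: roughly a heartbeat table with a liveness timeout. Here's a toy sketch of that pattern (all names invented, no particular backend implied):

```python
import time

class ServiceRegistry:
    """Toy liveness tracker: the pattern each project keeps rebuilding.

    Agents heart-beat periodically with a small ephemeral metadata blob;
    anything that has not reported within `timeout` seconds is down.
    """

    def __init__(self, timeout=60.0):
        self.timeout = timeout
        self._last_seen = {}  # agent id -> last heartbeat timestamp
        self._metadata = {}   # agent id -> ephemeral metadata

    def heartbeat(self, agent_id, metadata=None, now=None):
        now = time.time() if now is None else now
        self._last_seen[agent_id] = now
        if metadata is not None:
            self._metadata[agent_id] = metadata

    def is_alive(self, agent_id, now=None):
        now = time.time() if now is None else now
        last = self._last_seen.get(agent_id)
        return last is not None and (now - last) <= self.timeout

    def alive_agents(self, now=None):
        now = time.time() if now is None else now
        return sorted(a for a in self._last_seen if self.is_alive(a, now))
```

tooz, etcd and ZooKeeper each provide a hardened version of exactly this (ephemeral group membership plus heartbeats), which is what makes converging on one of them attractive.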


IMHO it is, though it may once again require the community as a group to 
notice things are being done differently that are really the same and 
people caring enough to actually want to resolve this situation (in 
general I really would like the architecture working group to be able to 
proactively resolve these issues before they get created, 
retroactively trying to resolve these differences is not a 
healthy/sustainable thing we should be doing).


Thoughts?

[1] https://lwn.net/Articles/662140/



Re: [openstack-dev] [requirements][neutron] bot bumping patches off gate queue

2017-03-02 Thread Doug Hellmann
Excerpts from Ihar Hrachyshka's message of 2017-03-02 10:18:16 -0800:
> Any updates on the problem? It still bumps patches off gate queue,
> resetting it and backing off progress for the whole integrated gate.
> Is the work followed up somewhere? Any patches to chew?

I suspect this is a case where we could use a hand writing the needed
patch.

Doug

> 
> Ihar
> 
> On Wed, Feb 1, 2017 at 5:14 PM, Tony Breeds  wrote:
> > On Wed, Feb 01, 2017 at 07:49:21AM -0800, Ihar Hrachyshka wrote:
> >> On Wed, Feb 1, 2017 at 7:42 AM, Armando M.  wrote:
> >> >
> >> >
> >> > On 1 February 2017 at 07:29, Ihar Hrachyshka  wrote:
> >> >>
> >> >> Hi all,
> >> >>
> >> >> lately I see the requirements bot proposing new rebases for its
> >> >> patches (and bumping existing patch sets from the gate queue) every
> >> >> second hour, at least for Neutron [1], which makes it impossible to
> >> >> land the patches, and which only makes gate pre-release situation
> >> >> worse. On top of that, the proposed rebases don't really add anything
> >> >> material to the sync patch, no new version changes and such.
> >> >>
> >> >> I think we had some guards against such behavior before, so I wonder
> >> >> if they were broken or removed lately? Any plans to fix that?
> >> >>
> >> >> It would be nice to be able to land the update before RC1 is cut off,
> >> >> but at this point, it does not seem realistic.
> >> >>
> >> >> [1] https://review.openstack.org/#/c/423645/
> >> >>
> >> >
> >> > Let's stop merging until the bot proposal change lands. That ought to 
> >> > stop
> >> > the spurious rebase!
> >> >
> >> > Hard times require hard measures!!
> >>
> >> That's assuming it's triggered by neutron merges. It may as well be
> >> requirements merges that trigger rebases. Something that I believe
> >> requirements team will help us to understand.
> >
> > So it's a combination of both.  The bot runs are triggered on merges to
> > openstack/requirements but the rebases are (clearly) due to the fact that
> > neutron has advanced in the mean time.
> >
> > We knew this was a suboptimal part of the process but until your thread we
> > assumed it was wasting CPU/gate resources.  You've exposed the developer
> > cost and that elevates this in priority.
> >
> > We processed a few requirements feature freeze exceptions which is what 
> > caused
> > the issue you highlight in that review.  I'll see what I can do about 
> > finding
> > someone to fix this before we release Ocata.
> >
> > The "fix" will be to check if the bot run is a rebase *and* if the current
> > revision of the change is lacking a vote from jenkins.  Hmmm I guess we 
> > should
> > *also* check if the version in gerrit has +W and avoid the upload in that 
> > case
> > also.
> >
> >
> > Yours Tony.
> >
> >
> >
> 



Re: [openstack-dev] [Horizon][stable][requirements] Modifying global-requirements to cap xstatic package versions

2017-03-02 Thread Doug Hellmann
Excerpts from Clark Boylan's message of 2017-03-02 10:40:39 -0800:
> On Wed, Mar 1, 2017, at 07:10 PM, Richard Jones wrote:
> > Hi folks,
> > 
> > We've run into some issues with various folks installing Horizon and
> > its dependencies using just requirements.txt which doesn't limit the
> > versions of xstatic packages beyond some minimum version. This is a
> > particular problem for releases prior to Ocata since those are not
> > compatible with the latest versions of some of the xstatic packages.
> > So, we believe what's necessary is to:
> > 
> > 1. Update current global-requirements.txt to pin the current released
> > version of each xstatic package. We don't update xstatic packages very
> > often, so keeping g-r in lock-step with upper-constraints.txt is
> > reasonable, I think.
> > 2. Update stable versions of global-requirements.txt to restrict them
> > to the versions we know are compatible based on the versions in
> > upper-constraints for the particular stable release.
> > 
> > 
> > Thoughts?
> 
> In the time before constraints we tried to manage our dependencies this
> way and it just resulted in different headaches. Things like not being
> able to pull in bug fixes because caps were too aggressive, needing to
> update multiple requirements files all at once due to differing deps
> lists in projects as caps got updated, and pip's dependency resolver not
> actually resolving the full tree in ways one might expect all caused
> problems.
> 
> We are currently seeing similar problems with the new PBR 2.0 release
> because PBR is/was capped in many places and is not able to be managed
> by constraints.
> 
> All this to say there are known downsides to doing it this way and
> constraints is the current solution for dealing with package deps more
> sanely. Would it be more effective to better educate folks about
> constraints and how it is useful?
> 
> Clark
> 

Right, friends don't let friends cap dependencies.

Let's work on getting constraints rolled out where needed instead.

Doug



Re: [openstack-dev] [neutron][address-scope] Questions about l3 address scope

2017-03-02 Thread Kevin Benton
Address scopes allow traffic to go across a router without performing any
NAT. The rules you see there ensure that traffic isn't routed directly if
it crosses from one address scope to another.
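The intent of those rules can be sketched as a tiny simulation (interface names and scope marks copied from the quoted iptables-save output; this is an illustration, not neutron code):

```python
# Per-interface scope mark, as set by the PREROUTING chain
# (values taken from the quoted iptables-save output).
SCOPE_MARK = {
    'qr-6d393225-2e': 0x401,
    'qr-d257abb8-e1': 0x400,
    'qg-f64c7892-1d': 0x401,
}

# Interfaces the FORWARD chain protects with a "mark must match" DROP rule.
# Note there is no such rule for the qg (external gateway) device.
FORWARD_GUARD = {
    'qr-6d393225-2e': 0x401,
    'qr-d257abb8-e1': 0x400,
}

def routed_without_nat(in_dev, out_dev):
    """True if the router forwards traffic from in_dev to out_dev.

    Traffic is marked with the address scope of its input interface;
    the FORWARD chain drops it if that mark does not match the scope
    of the output interface, i.e. if it would cross scopes un-NATted.
    """
    mark = SCOPE_MARK[in_dev]
    guard = FORWARD_GUARD.get(out_dev)
    if guard is None:
        return True  # no guard rule on this output device
    return mark == guard
```

So traffic from the external network (scope 0x401) routes straight to the qr device in the same scope, but is dropped toward the 0x400 subnet; crossing scopes has to go through NAT (e.g. a floating IP) instead.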

On Wed, Mar 1, 2017 at 7:21 AM, zhi  wrote:

> Hi, all.
>
> I have some questions about l3 address scope in neutron.I hope that
> someone could give me some answers.
>
> I set up a devstack environment and it uses the feature of l3 address
> scope by following the document [1]. After doing those steps,  I can find
> some iptables rules in namespace, showing like this:
>
> root@devstack:~# iptables-save |grep neutron-l3-agent-scope
> :neutron-l3-agent-scope - [0:0]
> -A neutron-l3-agent-PREROUTING -j neutron-l3-agent-scope
> -A neutron-l3-agent-scope -i qr-6d393225-2e -j MARK --set-xmark
> 0x401/0x
> -A neutron-l3-agent-scope -i qr-d257abb8-e1 -j MARK --set-xmark
> 0x400/0x
> -A neutron-l3-agent-scope -i qg-f64c7892-1d -j MARK --set-xmark
> 0x401/0x
> :neutron-l3-agent-scope - [0:0]
> -A neutron-l3-agent-FORWARD -j neutron-l3-agent-scope
> -A neutron-l3-agent-scope -o qr-6d393225-2e -m mark ! --mark
> 0x401/0x -j DROP
> -A neutron-l3-agent-scope -o qr-d257abb8-e1 -m mark ! --mark
> 0x400/0x -j DROP
>
> What are these iptables rules used for? In my opinion, by reading these
> rules I can gather some information: any input traffic (on the qr and qg
> devices) will be marked, and only marked traffic is accepted, isn't it?
>
> What the purpose of the l3 address scope?
>
> What can we benefit from l3 address scope?
>
>
> Thanks
> Zhi Chang
>
> [1]: https://docs.openstack.org/draft/networking-guide/
> config-address-scopes.html
>
>
>


Re: [openstack-dev] [picasso] First meeting on 7th of March

2017-03-02 Thread Paul Belanger
On Thu, Mar 02, 2017 at 05:41:24PM +, Derek Schultz wrote:
> Hi Emilien,
> 
> Thanks for the feedback! I'm aware that IRC is the standard for OpenStack
> folks, but at this current stage it's just easier to hold the discussion in
> Slack as Picasso ties into the IronFunctions open source project and
> important context would be lost if we were to maintain different chat
> platforms. That said, I think we can move Picasso from Slack to IRC in the
> future (once we prepare for the big tent).
> 
I agree with Emilien here: by using a proprietary platform you are potentially
alienating existing OpenStack developers from contributing to your project.
Yes, IRC would be needed for the big tent, but why not start using IRC out of
the gate?

You are already hosting code on git.openstack.org; it seems like the next step
would be to move to IRC.

> Regards,
> Derek Schultz
> 
> On Wed, Mar 1, 2017 at 7:49 AM Emilien Macchi  wrote:
> 
> > On Tue, Feb 28, 2017 at 12:30 PM, Derek Schultz 
> > wrote:
> > > Hello all,
> > >
> > > The Picasso team will be running our first meeting next Tuesday. All
> > those
> > > interested in the project are welcome!
> > >
> > > For those of you not familiar with Picasso, it provides a platform for
> > > Functions as a Service (FaaS) on OpenStack [1].
> > >
> > > Tuesday, March 7th, 2017 Meeting Agenda:
> > > Starting at UTC 18:00
> > >
> > > 1. From Python to Go. (What Picasso needs from IronFunctions to implement
> > > multi-tenancy)
> > > 2. Blueprints [2]
> > > 3. Figure out best time slot for future meetings.
> > > 4. Roadmap discussion.
> > >
> > > How to join:
> > > http://slack.iron.io in the #openstack channel
> >
> > I would recommend using IRC for consistency with other projects.
> > Nothing forces you to do so, unless you plan to apply to the Big Tent.
> > My personal recommendation would be to use IRC so you can get more
> > visibility, since most of OpenStack folks are on IRC (and not always
> > on Slack).
> >
> > Good luck for your first meeting!
> >
> > > Etherpad:  https://etherpad.openstack.org/p/picasso-first-meeting
> > >
> > > [1] https://wiki.openstack.org/wiki/Picasso
> > > [2] https://blueprints.launchpad.net/picasso
> > >
> > >
> > >
> > >
> > >
> >
> >
> >
> > --
> > Emilien Macchi
> >
> >





Re: [openstack-dev] [all] Significant update to ARA (Ansible Run Analysis)

2017-03-02 Thread David Moreau Simard
The new 0.12.0 version was tagged and released to pypi, it'll land in
a mirror near you soon!

I'll write a blog post about the release soon, until then, the release
notes are here [1].

[1]: https://github.com/openstack/ara/releases/tag/0.12.0

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]


On Wed, Mar 1, 2017 at 11:08 PM, David Moreau Simard  wrote:
> Hi,
>
> Just a heads up for projects that are currently using ARA [1]: I've
> tagged and extensively tested a release candidate which contains a
> serious UI overhaul.
> At first glance, projects like devstack-gate [2], openstack-ansible
> [3], openstack-ansible-tests [4] and kolla-ansible [5] seem to be
> working well with this new version.
>
> I figured I'd send a notification prior to tagging the release due to
> the breadth of the UI changes, it's completely different and I wanted
> to hear if there were any concerns or issues before I went ahead.
>
> Please let me know if you have any questions or notice anything odd in
> the linked reports !
>
> [1]: https://github.com/openstack/ara
> [2]: 
> http://logs.openstack.org/66/439966/1/check/gate-grenade-dsvm-neutron-multinode-ubuntu-xenial/f7b078e/logs/ara/reports/index.html
> [3]: 
> http://logs.openstack.org/24/396324/9/check/gate-openstack-ansible-openstack-ansible-ceph-centos-7-nv/f811e1a/logs/ara/reports/index.html
> [4]: 
> http://logs.openstack.org/62/439962/1/check/gate-openstack-ansible-tests-ansible-func-centos-7/f9fc8c2/logs/ara/reports/index.html
> [5]: 
> http://logs.openstack.org/63/439963/1/check/gate-kolla-ansible-dsvm-deploy-centos-binary-centos-7-nv/a7243b8/logs/playbook_reports/reports/index.html
>
> David Moreau Simard
> Senior Software Engineer | Openstack RDO
>
> dmsimard = [irc, github, twitter]



[openstack-dev] [keystone][nova][neutron][cinder] Limiting RPC traffic with keystoneauth

2017-03-02 Thread Lance Bragstad
Post PTG there has been some discussion regarding quotas as well as limits.
While most of the discussion has been off and on in #openstack-dev, we also
have a mailing list thread on the topic [0].

I don't want to derail the thread on quotas and limits with this thread,
but today's discussion [1] highlighted an interesting optimization we could
make with keystoneauth and the service catalog. It seemed appropriate to
have it in its own thread.

We were trying to figure out where to advertise limits from keystone for
quota calculations. The one spot we knew we didn't want it was the service
catalog or token body. Sean elaborated on the stuff that nova does with
context that filters the catalog to only contain certain things it assumes
other parts of nova might need later [2] before putting the token on the
message bus. From an RPC load perspective, this is obviously better than
putting the *entire* token on the message bus, but could we take it one
step further? Couldn't we leverage keystone's GET /v3/auth/catalog/ API [3]
in keystoneauth to re-inflate the catalog in the services that need to make
calls to other services (i.e. nova-compute needing to talk to cinder or
neutron)?

I don't think we'd be reducing the number of things put on the queue, just
the overall size of the message. I wanted to start this thread to get the
idea in front of a wider audience, specifically projects that lean heavily
on RPC for inter-service communication. Most of the changes would be in
keystoneauth to fetch the catalog when the token doesn't include one. After
that, each service would have to identify if/where it does any filtering of
the service catalog before placing the token on the message bus.
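For illustration only (names and payloads made up, not nova's actual context code), the trade-off between filtering the catalog into the RPC message versus dropping it entirely and re-fetching it from keystone looks roughly like this:

```python
import json

def filter_catalog(catalog, needed_types):
    """Keep only the endpoints a service expects to need later on
    (the kind of trimming nova's context code does today)."""
    return [entry for entry in catalog if entry.get('type') in needed_types]

# A made-up four-service catalog.
full_catalog = [
    {'type': 'compute',  'name': 'nova',    'endpoints': ['http://nova']},
    {'type': 'volumev3', 'name': 'cinder',  'endpoints': ['http://cinder']},
    {'type': 'network',  'name': 'neutron', 'endpoints': ['http://neutron']},
    {'type': 'image',    'name': 'glance',  'endpoints': ['http://glance']},
]

# Strategy 1: filter, then put the slice on the message bus.
trimmed = filter_catalog(full_catalog, {'volumev3', 'network'})

# Strategy 2: put no catalog on the bus at all; consumers re-fetch it
# from keystone (GET /v3/auth/catalog) via keystoneauth when needed.
size_full = len(json.dumps(full_catalog))
size_trimmed = len(json.dumps(trimmed))
size_refetch = 0
```

The saving per message grows with the number of deployed services; the cost is an extra keystone round-trip (cacheable) on the consuming side.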

Thoughts?


[0]
http://lists.openstack.org/pipermail/openstack-dev/2017-March/113099.html
[1]
http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2017-03-02.log.html#t2017-03-02T13:49:19
[2]
https://github.com/openstack/nova/blob/37cd9a961b065a07352b49ee72394cb210d8838b/nova/context.py#L102-L106
[3]
https://developer.openstack.org/api-ref/identity/v3/index.html?expanded=get-service-catalog-detail#authentication-and-token-management


Re: [openstack-dev] [Horizon][stable][requirements] Modifying global-requirements to cap xstatic package versions

2017-03-02 Thread Clark Boylan
On Wed, Mar 1, 2017, at 07:10 PM, Richard Jones wrote:
> Hi folks,
> 
> We've run into some issues with various folks installing Horizon and
> its dependencies using just requirements.txt which doesn't limit the
> versions of xstatic packages beyond some minimum version. This is a
> particular problem for releases prior to Ocata since those are not
> compatible with the latest versions of some of the xstatic packages.
> So, we believe what's necessary is to:
> 
> 1. Update current global-requirements.txt to pin the current released
> version of each xstatic package. We don't update xstatic packages very
> often, so keeping g-r in lock-step with upper-constraints.txt is
> reasonable, I think.
> 2. Update stable versions of global-requirements.txt to restrict them
> to the versions we know are compatible based on the versions in
> upper-constraints for the particular stable release.
> 
> 
> Thoughts?

In the time before constraints we tried to manage our dependencies this
way and it just resulted in different headaches. Things like not being
able to pull in bug fixes because caps were too aggressive, needing to
update multiple requirements files all at once due to differing deps
lists in projects as caps got updated, and pip's dependency resolver not
actually resolving the full tree in ways one might expect all caused
problems.

We are currently seeing similar problems with the new PBR 2.0 release
because PBR is/was capped in many places and is not able to be managed
by constraints.

All this to say there are known downsides to doing it this way and
constraints is the current solution for dealing with package deps more
sanely. Would it be more effective to better educate folks about
constraints and how it is useful?
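As a sketch of the difference (illustrative package names and versions): requirements keep only loose floors, while a constraints file carries the exact pins, so a version bump touches one file instead of every project's requirements:

```python
def _as_tuple(version):
    # Naive dotted-number parse; enough for this illustration.
    return tuple(int(part) for part in version.split('.'))

def effective_pins(requirements, constraints):
    """requirements maps package -> minimum version (the floor);
    constraints maps package -> the exact pin used at install time
    (what `pip install -c upper-constraints.txt` gives you)."""
    pins = {}
    for pkg, floor in requirements.items():
        pin = constraints[pkg]
        if _as_tuple(pin) < _as_tuple(floor):
            raise ValueError('%s pinned to %s, below floor %s'
                             % (pkg, pin, floor))
        pins[pkg] = pin
    return pins

# Illustrative xstatic-style packages: floors stay loose, pins are exact.
reqs = {'XStatic-jQuery': '1.7.2', 'XStatic-Angular': '1.3.7'}
cons = {'XStatic-jQuery': '1.12.4', 'XStatic-Angular': '1.5.8'}
pins = effective_pins(reqs, cons)
```

With this split, fixing a broken xstatic release means changing the pin in upper-constraints, with no cap churn in requirements files.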

Clark



Re: [openstack-dev] [requirements][neutron] bot bumping patches off gate queue

2017-03-02 Thread Ihar Hrachyshka
Any updates on the problem? It still bumps patches off gate queue,
resetting it and backing off progress for the whole integrated gate.
Is the work followed up somewhere? Any patches to chew?

Ihar

On Wed, Feb 1, 2017 at 5:14 PM, Tony Breeds  wrote:
> On Wed, Feb 01, 2017 at 07:49:21AM -0800, Ihar Hrachyshka wrote:
>> On Wed, Feb 1, 2017 at 7:42 AM, Armando M.  wrote:
>> >
>> >
>> > On 1 February 2017 at 07:29, Ihar Hrachyshka  wrote:
>> >>
>> >> Hi all,
>> >>
>> >> lately I see the requirements bot proposing new rebases for its
>> >> patches (and bumping existing patch sets from the gate queue) every
>> >> second hour, at least for Neutron [1], which makes it impossible to
>> >> land the patches, and which only makes gate pre-release situation
>> >> worse. On top of that, the proposed rebases don't really add anything
>> >> material to the sync patch, no new version changes and such.
>> >>
>> >> I think we had some guards against such behavior before, so I wonder
>> >> if they were broken or removed lately? Any plans to fix that?
>> >>
>> >> It would be nice to be able to land the update before RC1 is cut off,
>> >> but at this point, it does not seem realistic.
>> >>
>> >> [1] https://review.openstack.org/#/c/423645/
>> >>
>> >
>> > Let's stop merging until the bot proposal change lands. That ought to stop
>> > the spurious rebase!
>> >
>> > Hard times require hard measures!!
>>
>> That's assuming it's triggered by neutron merges. It may as well be
>> requirements merges that trigger rebases. Something that I believe
>> requirements team will help us to understand.
>
> So it's a combination of both.  The bot runs are triggered on merges to
> openstack/requirements but the rebases are (clearly) due to the fact that
> neutron has advanced in the mean time.
>
> We knew this was a suboptimal part of the process but until your thread we
> assumed it was wasting CPU/gate resources.  You've exposed the developer
> cost and that elevates this in priority.
>
> We processed a few requirements feature freeze exceptions which is what caused
> the issue you highlight in that review.  I'll see what I can do about finding
> someone to fix this before we release Ocata.
>
> The "fix" will be to check if the bot run is a rebase *and* if the current
> revision of the change is lacking a vote from jenkins.  Hmmm I guess we should
> *also* check if the version in gerrit has +W and avoid the upload in that case
> also.
>
>
> Yours Tony.
>
>
>
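The guard Tony describes above could look roughly like this (a sketch, not the actual openstack/requirements bot code):

```python
def should_upload(is_rebase, has_jenkins_vote, has_workflow_plus_one):
    """Decide whether the proposal bot should push a new patch set.

    Sketch of the fix described above: skip uploading a pure rebase
    when the current patch set is still waiting on CI, or when it is
    already approved (+W) and sitting in the gate queue.
    """
    if not is_rebase:
        return True   # real content change: always propose it
    if not has_jenkins_vote:
        return False  # don't bump a patch set jenkins hasn't voted on yet
    if has_workflow_plus_one:
        return False  # don't clobber an approved patch set in the gate
    return True
```

This keeps content-bearing updates flowing while stopping the spurious rebases that reset the gate queue.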



Re: [openstack-dev] [ironic] state of the stable/mitaka branches

2017-03-02 Thread Ihar Hrachyshka
On Thu, Mar 2, 2017 at 8:13 AM, Pavlo Shchelokovskyy
 wrote:
> I'm also kind of wondering what the grenade job in stable/newton will test
> after mitaka EOL? upgrade from mitaka-eol tag to stable/newton branch? Then
> even that might be affected if devstack-gate + project config will not be
> able to set *_ssh in enabled drivers while grenade will try to use them.

When a branch is EOLed, grenade jobs using it for old side of the
cloud are deprovisioned.

Ihar



Re: [openstack-dev] [picasso] First meeting on 7th of March

2017-03-02 Thread Derek Schultz
Hi Emilien,

Thanks for the feedback! I'm aware that IRC is the standard for OpenStack
folks, but at this current stage it's just easier to hold the discussion in
Slack as Picasso ties into the IronFunctions open source project and
important context would be lost if we were to maintain different chat
platforms. That said, I think we can move Picasso from Slack to IRC in the
future (once we prepare for the big tent).

Regards,
Derek Schultz

On Wed, Mar 1, 2017 at 7:49 AM Emilien Macchi  wrote:

> On Tue, Feb 28, 2017 at 12:30 PM, Derek Schultz 
> wrote:
> > Hello all,
> >
> > The Picasso team will be running our first meeting next Tuesday. All
> those
> > interested in the project are welcome!
> >
> > For those of you not familiar with Picasso, it provides a platform for
> > Functions as a Service (FaaS) on OpenStack [1].
> >
> > Tuesday, March 7th, 2017 Meeting Agenda:
> > Starting at UTC 18:00
> >
> > 1. From Python to Go. (What Picasso needs from IronFunctions to implement
> > multi-tenancy)
> > 2. Blueprints [2]
> > 3. Figure out best time slot for future meetings.
> > 4. Roadmap discussion.
> >
> > How to join:
> > http://slack.iron.io in the #openstack channel
>
> I would recommend using IRC for consistency with other projects.
> Nothing forces you to do so, unless you plan to apply to the Big Tent.
> My personal recommendation would be to use IRC so you can get more
> visibility, since most of OpenStack folks are on IRC (and not always
> on Slack).
>
> Good luck for your first meeting!
>
> > Etherpad:  https://etherpad.openstack.org/p/picasso-first-meeting
> >
> > [1] https://wiki.openstack.org/wiki/Picasso
> > [2] https://blueprints.launchpad.net/picasso
> >
> >
> >
> >
> >
>
>
>
> --
> Emilien Macchi
>


[openstack-dev] [neutron] - Added Gary and Russell Boden to release team

2017-03-02 Thread Kevin Benton
Hi everyone,

I added Gary Kotton and Russell Boden to the release team to help with
reviews for patches to unblock stadium projects as we go through and
pull out deprecated things in Neutron and adjust gate jobs, etc.

Tag the release team (neutron-release) on any critical reviews to
quickly unblock stadium projects impacted by devstack changes, neutron
deprecated item removals, etc.

Cheers,
Kevin Benton



Re: [openstack-dev] [OpenStack-docs] [docs][release][ptl] Adding docs to the release schedule

2017-03-02 Thread Alexandra Settle


On 3/2/17, 4:42 PM, "Doug Hellmann"  wrote:

Excerpts from Alexandra Settle's message of 2017-03-02 16:25:46 +:
> 
> 
> On 3/2/17, 4:08 PM, "Doug Hellmann"  wrote:
> 
> Excerpts from Alexandra Settle's message of 2017-03-02 14:29:07 +:
> > 
> > 
> > From: Anne Gentle 
> > Date: Thursday, March 2, 2017 at 2:16 PM
> > To: Alexandra Settle 
> > Cc: "OpenStack Development Mailing List (not for usage questions)" 
, "openstack-d...@lists.openstack.org" 

> > Subject: Re: [OpenStack-docs] [docs][release][ptl] Adding docs to 
the release schedule
> > 
> > 
> > 
> > On Wed, Mar 1, 2017 at 11:52 AM, Alexandra Settle 
> wrote:
> > Hi everyone,
> > 
> > I would like to propose that we introduce a “Review documentation” 
period on the release schedule.
> > 
> > We would formulate it as a deadline, so that it fits in the 
schedule and making it coincide with the RC1 deadline.
> > 
> > For projects that are not following the milestones, we would 
translate this new inclusion literally, so if you would like your project to be 
documented at docs.o.o, then doc must be introduced and reviewed one month 
before the branch is cut.
> > 
> > I like this idea, and it can align certain docs with string freeze 
logically.
> > 
> > I think the docs that are governed with this set of rules should be 
scoped only to those that are synched with a release, namely the Configuration 
Reference, Networking Guide, and Install Guides. [1]
> > 
> > For reference, those are the guides that would best align with 
"common cycle with development milestones." [2]
> > 
> > Scope this proposal to the released guides, clarify which repo 
those will be in, who can review and merge, and precisely when the cutoff is, 
and you're onto something here. Plus, I can hear the translation teams 
cheering. :)
> > 
> > 
> > I completely agree with everything here :) my only question is, 
what do you mean by “clarify which repo those will be in”? I had no intention 
of moving documentation with this suggestion. Install guides stay either in 
openstack-manuals or their own $project repos :)
> > 
> > Next question – since there doesn’t appear to be a huge ‘no don’t 
do the thing’ coming from the dev list at this point, how and where do we 
include this new release information? Here? 
https://docs.openstack.org/project-team-guide/release-management.html#release-1
> > 
> > Anne
> > 
> > 
> > 1. 
https://docs.openstack.org/contributor-guide/blueprints-and-specs.html#release-specific-documentation
> > 
> > 2. 
https://docs.openstack.org/project-team-guide/release-management.html#common-cycle-with-development-milestones
> > 
> > 
> > In the last week since we released Ocata, it has become 
increasingly apparent that the documentation was not updated from the 
development side. We were not aware of a lot of new enhancements, features, or 
major bug fixes for certain projects. This means we have released with 
incorrect/out-of-date documentation. This is not only an unfortunately bad 
reflection on our team, but on the project teams themselves.
> > 
> > The new inclusion to the schedule may seem unnecessary, but a lot 
of people rely on this and the PTL drives milestones from this schedule.
> > 
> > From our side, I endeavor to ensure our release managers are 
working harder to ping and remind doc liaisons and PTLs to ensure the 
documentation is appropriately updated and working to ensure this does not 
happen in the future.
> > 
> > Thanks,
> > 
> > Alex
> > 
> 
> As Thierry pointed out, we do need to consider the fact that more
> projects are using the cycle-with-intermediary process, so although
> we might tie dates to milestones we need to be careful that projects
> not tagging milestones are still covered in any processes.
> 
> Based on a similar discussion we had with the i18n team at the PTG,
> I think a good first step here is to document the agreement by
> writing a governance tag with a name like doc:managed. The tag
> description is the place to write down the answers to the questions
> from this thread.
> 
> For example, it would list the manuals that are in scope, what
> portion of the work the docs team will take on (initial writing?
> reviews?), and what portion of the work the project team needs 

Re: [openstack-dev] [OpenStack-docs] [docs][release][ptl] Adding docs to the release schedule

2017-03-02 Thread Doug Hellmann
Excerpts from Alexandra Settle's message of 2017-03-02 16:25:46 +:
> 
> 
> On 3/2/17, 4:08 PM, "Doug Hellmann"  wrote:
> 
> Excerpts from Alexandra Settle's message of 2017-03-02 14:29:07 +:
> > 
> > 
> > From: Anne Gentle 
> > Date: Thursday, March 2, 2017 at 2:16 PM
> > To: Alexandra Settle 
> > Cc: "OpenStack Development Mailing List (not for usage questions)" 
> , "openstack-d...@lists.openstack.org" 
> 
> > Subject: Re: [OpenStack-docs] [docs][release][ptl] Adding docs to the 
> release schedule
> > 
> > 
> > 
> > On Wed, Mar 1, 2017 at 11:52 AM, Alexandra Settle 
> > wrote:
> > Hi everyone,
> > 
> > I would like to propose that we introduce a “Review documentation” 
> period on the release schedule.
> > 
> > We would formulate it as a deadline, so that it fits in the schedule 
> and making it coincide with the RC1 deadline.
> > 
> > For projects that are not following the milestones, we would translate 
> this new inclusion literally, so if you would like your project to be 
> documented at docs.o.o, then doc must be introduced and reviewed one month 
> before the branch is cut.
> > 
> > I like this idea, and it can align certain docs with string freeze 
> logically.
> > 
> > I think the docs that are governed with this set of rules should be 
> scoped only to those that are synched with a release, namely the 
> Configuration Reference, Networking Guide, and Install Guides. [1]
> > 
> > For reference, those are the guides that would best align with "common 
> cycle with development milestones." [2]
> > 
> > Scope this proposal to the released guides, clarify which repo those 
> will be in, who can review and merge, and precisely when the cutoff is, and 
> you're onto something here. Plus, I can hear the translation teams cheering. 
> :)
> > 
> > 
> > I completely agree with everything here :) my only question is, what do 
> you mean by “clarify which repo those will be in”? I had no intention of 
> moving documentation with this suggestion. Install guides stay either in 
> openstack-manuals or their own $project repos :)
> > 
> > Next question – since there doesn’t appear to be a huge ‘no don’t do 
> the thing’ coming from the dev list at this point, how and where do we 
> include this new release information? Here? 
> https://docs.openstack.org/project-team-guide/release-management.html#release-1
> > 
> > Anne
> > 
> > 
> > 1. 
> https://docs.openstack.org/contributor-guide/blueprints-and-specs.html#release-specific-documentation
> > 
> > 2. 
> https://docs.openstack.org/project-team-guide/release-management.html#common-cycle-with-development-milestones
> > 
> > 
> > In the last week since we released Ocata, it has become increasingly 
> apparent that the documentation was not updated from the development side. We 
> were not aware of a lot of new enhancements, features, or major bug fixes for 
> certain projects. This means we have released with incorrect/out-of-date 
> documentation. This is not only an unfortunately bad reflection on our team, 
> but on the project teams themselves.
> > 
> > The new inclusion to the schedule may seem unnecessary, but a lot of 
> people rely on this and the PTL drives milestones from this schedule.
> > 
> > From our side, I endeavor to ensure our release managers are working 
> harder to ping and remind doc liaisons and PTLs to ensure the documentation 
> is appropriately updated and working to ensure this does not happen in the 
> future.
> > 
> > Thanks,
> > 
> > Alex
> > 
> 
> As Thierry pointed out, we do need to consider the fact that more
> projects are using the cycle-with-intermediary process, so although
> we might tie dates to milestones we need to be careful that projects
> not tagging milestones are still covered in any processes.
> 
> Based on a similar discussion we had with the i18n team at the PTG,
> I think a good first step here is to document the agreement by
> writing a governance tag with a name like doc:managed. The tag
> description is the place to write down the answers to the questions
> from this thread.
> 
> For example, it would list the manuals that are in scope, what
> portion of the work the docs team will take on (initial writing?
> reviews?), and what portion of the work the project team needs to
> provide (contributing updates when major related changes happen in the code,
> having a liaison, and a "checkup" at a date specified near the end
> of the cycle). If there are any constraints about which projects
> can apply, those should be documented, too. Maybe 

Re: [openstack-dev] [keystone][defcore][refstack] Removal of the v2.0 API

2017-03-02 Thread Mark Voelker


> On Mar 1, 2017, at 6:01 PM, Rodrigo Duarte  wrote:
> 
> On Wed, Mar 1, 2017 at 7:10 PM, Lance Bragstad  wrote:
> During the PTG, Morgan mentioned that there was the possibility of keystone 
> removing the v2.0 API [0]. This thread is a follow up from that discussion to 
> make sure we loop in the right people and do everything by the books.
> 
> The result of the session [1] listed the following work items: 
> - Figure out how we can test the removal and make the job voting (does the 
> v3-only job count for this)?
> 
> We have two v3-only jobs: one only runs keystone's tempest plugin tests - 
> which are specific to federation (it configures a federated environment using 
> mod_shib) - and another one (non-voting) that runs tempest; I believe the 
> latter can be a good way to initially validate the v2.0 removal.
>  
> - Reach out to defcore and refstack communities about removing v2.0 (which is 
> partially what this thread is doing)

Yup, we actually talked a bit about this in the past couple of weeks.  I’ve 
CC'd Luz who is playing point on capabilities scoring for the 2017.08 Guideline 
for Identity to make extra sure she’s aware. =)

At Your Service,

Mark T. Voelker
InteropWG Co-chair

> 
> Outside of this thread, what else do we have to do from a defcore perspective 
> to make this happen?
> 
> Thanks for the time!
> 
> [0] https://review.openstack.org/#/c/437667/
> [1] https://etherpad.openstack.org/p/pike-ptg-keystone-deprecations
> 
> 
> 
> 
> 
> -- 
> Rodrigo Duarte Sousa
> Senior Quality Engineer @ Red Hat
> MSc in Computer Science
> http://rodrigods.com


[openstack-dev] [all][api] POST /api-wg/news

2017-03-02 Thread Chris Dent


Greetings OpenStack community,

Only two of us were in attendance at today's API-WG meeting. We mostly reflected on 
the work that needs to be done to summarize activity at the PTG last week. 
Despite not having a room of our own until the last minute, on Monday and 
Tuesday we had a large gathering with lots of great input. We continued using 
the architecture working group's etherpad [0] for planning and used three other 
topical etherpads:

* stability and compatibility guidelines:
  https://etherpad.openstack.org/p/api-stability-guidelines
* capabilities discovery:
  https://etherpad.openstack.org/p/capabilities-pike
* service catalog and service types:
  https://etherpad.openstack.org/p/service-catalog-pike

In the next 24 hours or so there will be an API-WG PTG recap posted to the 
os-dev list.

# Newly Published Guidelines

Nothing new this week.

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community. Nothing at 
the moment, but feel free to get in early on the reviews below.

# Guidelines Currently Under Review [3]

* Define pagination guidelines
  https://review.openstack.org/#/c/390973/

* Add API capabilities discovery guideline
  https://review.openstack.org/#/c/386555/

* Refactor and re-validate api change guidelines
  https://review.openstack.org/#/c/421846/

# Highlighting your API impacting issues

If you seek further review and insight from the API WG, please address your concerns in 
an email to the OpenStack developer mailing list[1] with the tag "[api]" in the 
subject. In your email, you should include any relevant reviews, links, and comments to 
help guide the discussion of the specific challenge you are facing.

To learn more about the API WG mission and the work we do, see OpenStack API 
Working Group [2].

Thanks for reading and see you next week!

# References

[0] https://etherpad.openstack.org/p/ptg-architecture-workgroup
[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_wg/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [all] PBR 2.0.0 release *may* cause gate failures

2017-03-02 Thread Doug Hellmann
Excerpts from Tony Breeds's message of 2017-03-02 19:41:17 +1100:
> On Thu, Mar 02, 2017 at 06:24:32PM +1100, Tony Breeds wrote:
> 
> > I know I'm talking to myself .
> 
> Still.
> 
> > A project on $branch without constraints is going to get pbr 2.0.0 and then 
> > hit
> > version conflicts with projects that have pbr <2.0.0 caps *anyway* 
> > regardless
> > of what hacking says right?
> > 
> > So removing the pbr cap in hacking doesn't make things worse for stable
> > branches but it does make things better for master?
> 
> I think the 0.10.3 hacking release is the best way forward but in the mean 
> time
> a quick project by project fix is to just update hacking.
> 
> See Ian's patch at: https://review.openstack.org/#/c/440010/
> 
> I'm a little unwilling to create all the updates but I may look at that 
> after dinner.
> 
> Yours Tony.

Rico Lin was very busy:
https://review.openstack.org/#/q/status:open+topic:bug/1668848

Let's see if we can get those approved and avoid a patch release of
hacking.

Doug



Re: [openstack-dev] [ironic] state of the stable/mitaka branches

2017-03-02 Thread Pavlo Shchelokovskyy
Hi Dmitry,

> I'm not sure why removing the *_ssh drivers from master should
> necessarily break stable/mitaka, where these drivers are present. Could
> you elaborate?

My main concern is the following: both project-config and devstack-gate are
branch-less. When removing the *_ssh drivers from the tree, we have to remove
them from the 'enabled_drivers' list in ironic.conf too (so that the conductor
would not fail to start). Most of our jobs are not setting enabled_drivers
themselves, but use whatever devstack-gate is setting. Currently it enables
either pxe_ssh+pxe_ipmitool, or agent_ssh+agent_ipmitool (depending on
deploy driver starting with 'agent' or not). If we drop *_ssh from
enabled_drivers in devstack-gate, then the jobs that actually use *_ssh
will fail instead.
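The devstack-gate behaviour described above boils down to one string check. A rough sketch (the function name is invented; the real logic lives in devstack-gate's shell scripts):

```python
def pick_enabled_drivers(deploy_driver):
    """Mirror devstack-gate's current choice of ironic enabled_drivers:
    agent_* deploy drivers get the agent_ssh pair, everything else gets
    the pxe_ssh pair. Illustrative only.
    """
    if deploy_driver.startswith("agent"):
        return ["agent_ssh", "agent_ipmitool"]
    return ["pxe_ssh", "pxe_ipmitool"]
```

Dropping the *_ssh entries from these lists without also updating the jobs that deploy with them would cause exactly the failures described above.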

AFAIU this affects only the mitaka branch, so a possible way of moving forward
is actually to let it EOL and then continue with the *_ssh drivers removal.

I'm also kind of wondering what the grenade job in stable/newton will test
after mitaka EOL? upgrade from mitaka-eol tag to stable/newton branch? Then
even that might be affected if devstack-gate + project config will not be
able to set *_ssh in enabled drivers while grenade will try to use them.

On Thu, Mar 2, 2017 at 12:39 PM, Dmitry Tantsur  wrote:

> On 03/01/2017 08:19 PM, Jay Faulkner wrote:
>
>>
>> On Mar 1, 2017, at 11:15 AM, Pavlo Shchelokovskyy <
>>> pshchelokovs...@mirantis.com> wrote:
>>>
>>> Greetings ironicers,
>>>
>>> I'd like to discuss the state of the gates in ironic and other related
>>> projects for stable/mitaka branch.
>>>
>>> Today while making some test patches to old branches I discovered the
>>> following problems:
>>>
>>> python-ironicclient/stable/mitaka
>>> All unit-test-like jobs are broken due to not handling upper
>>> constraints. Because of this, a newer-than-supported version of
>>> python-openstackclient is installed, which already lacks some modules
>>> python-ironicclient tries to import (these were moved to osc-lib).
>>> I've proposed a patch that copies current way of dealing with upper
>>> constraints in tox envs [0], gates are passing.
>>>
>>> ironic/stable/mitaka
>>> While not actually being gated on, using virtualbmc+ipmitool drivers is
>>> broken. The reason is again related to upper constraints: an
>>> old enough version of pyghmi (from mitaka upper constraints) is installed
>>> with the most recent virtualbmc (not in upper constraints), and those versions
>>> are incompatible.
>>> This highlights a question whether we should propose virtualbmc to upper
>>> constraints too to avoid such problems in the future.
>>> Meanwhile a quick fix would be to hard-code the supported virtualbmc
>>> version in the ironic's devstack plugin for mitaka release.
>>> Although not strictly supported for Mitaka release, I'd like that
>>> functionality to be working on stable/mitaka gates to test for upcoming
>>> removal of *_ssh drivers.
>>>
>>> I did not test other projects yet.
>>>
>>>
>> I can attest jobs are broken for stable/mitaka on ironic-lib as well —
>> our jobs build docs unconditionally, and ironic-lib had no docs in Mitaka.
>>
>
> Oh, fun.
>
> Well, the docs job is easy to exclude. But we seem to have
> virtualbmc-based jobs there. As I already wrote in another message, I'm not
> even sure they're supposed to work..


Yes, the ironic-lib:mitaka is kind of completely broken currently :( There
are several pieces that need to be fixed to get it back to healthy state
(even if only to leave it healthy for EOL). Luckily (or not?), most of the
fixes go thru other projects, so it is doable without one giant squashed
commit:

- docs job - needs to be disabled for ironic-lib/mitaka in project-config

- gate-tempest-dsvm-ironic-lib-* - two of them are gating, so they must be
fixed. They are affected by that very virtualbmc+pyghmi incompatibility.
The fix would be to
1) introduce virtualbmc to requirements:master and cherry-pick that (with
version changes) all the way down to mitaka. patch to master is already
here [0]
2) make a patch to ironic's devstack plugin master to install virtualbmc
minding u-c (3 chars fix :) ) and cherry-pick that all the way down to
mitaka as well (test patch to mitaka is here [1], will have to be replaced
with proper cherry-pick from master)
This is already kind of tested with this layered depends-on patch [2] -
those ipmitool jobs are passing

- ironic-lib-coverage-* - in ironic-lib:mitaka tox jobs also do not use
u-c, but it seems that is not the reason for the failures here, as the current
"tox -recover" passes neither with u-c from mitaka nor without
them. I have a fix, which is again a single character :) (wondering how it
was working before :) ), and could probably be introduced together with a
cherry-pick for u-c handling. But for it to merge, all the other jobs must
be fixed beforehand.
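For reference, "minding u-c" means installing the version pinned in upper-constraints.txt (lines of the form name===version) rather than the latest release. A minimal sketch of that lookup, with invented file contents; real entries may also carry environment markers after a semicolon, which this simply drops:

```python
def constrained_version(package, constraints_text):
    """Return the version pinned for *package* in an
    upper-constraints.txt body, or None if it is unconstrained.
    Environment markers (after ';') are ignored for brevity.
    """
    for line in constraints_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, sep, version = line.partition("===")
        if sep and name.strip().lower() == package.lower():
            return version.split(";")[0].strip()
    return None
```

pip can apply the whole file directly with `pip install -c upper-constraints.txt virtualbmc`, which is effectively what the devstack plugin fix needs to do.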

[0] https://review.openstack.org/#/c/440348/
[1] https://review.openstack.org/#/c/440559/
[2] https://review.openstack.org/#/c/440562/


Re: [openstack-dev] [OpenStack-docs] [docs][release][ptl] Adding docs to the release schedule

2017-03-02 Thread Alexandra Settle


On 3/2/17, 4:08 PM, "Doug Hellmann"  wrote:

Excerpts from Alexandra Settle's message of 2017-03-02 14:29:07 +:
> 
> 
> From: Anne Gentle 
> Date: Thursday, March 2, 2017 at 2:16 PM
> To: Alexandra Settle 
> Cc: "OpenStack Development Mailing List (not for usage questions)" 
, "openstack-d...@lists.openstack.org" 

> Subject: Re: [OpenStack-docs] [docs][release][ptl] Adding docs to the 
release schedule
> 
> 
> 
> On Wed, Mar 1, 2017 at 11:52 AM, Alexandra Settle 
> wrote:
> Hi everyone,
> 
> I would like to propose that we introduce a “Review documentation” period 
on the release schedule.
> 
> We would formulate it as a deadline, so that it fits in the schedule and 
making it coincide with the RC1 deadline.
> 
> For projects that are not following the milestones, we would translate 
this new inclusion literally, so if you would like your project to be 
documented at docs.o.o, then doc must be introduced and reviewed one month 
before the branch is cut.
> 
> I like this idea, and it can align certain docs with string freeze 
logically.
> 
> I think the docs that are governed with this set of rules should be 
scoped only to those that are synched with a release, namely the Configuration 
Reference, Networking Guide, and Install Guides. [1]
> 
> For reference, those are the guides that would best align with "common 
cycle with development milestones." [2]
> 
> Scope this proposal to the released guides, clarify which repo those will 
be in, who can review and merge, and precisely when the cutoff is, and you're 
onto something here. Plus, I can hear the translation teams cheering. :)
> 
> 
> I completely agree with everything here :) my only question is, what do 
you mean by “clarify which repo those will be in”? I had no intention of moving 
documentation with this suggestion. Install guides stay either in openstack-manuals 
or their own $project repos :)
> 
> Next question – since there doesn’t appear to be a huge ‘no don’t do the 
thing’ coming from the dev list at this point, how and where do we include this 
new release information? Here? 
https://docs.openstack.org/project-team-guide/release-management.html#release-1
> 
> Anne
> 
> 
> 1. 
https://docs.openstack.org/contributor-guide/blueprints-and-specs.html#release-specific-documentation
> 
> 2. 
https://docs.openstack.org/project-team-guide/release-management.html#common-cycle-with-development-milestones
> 
> 
> In the last week since we released Ocata, it has become increasingly 
apparent that the documentation was not updated from the development side. We 
were not aware of a lot of new enhancements, features, or major bug fixes for 
certain projects. This means we have released with incorrect/out-of-date 
documentation. This is not only an unfortunately bad reflection on our team, 
but on the project teams themselves.
> 
> The new inclusion to the schedule may seem unnecessary, but a lot of 
people rely on this and the PTL drives milestones from this schedule.
> 
> From our side, I endeavor to ensure our release managers are working 
harder to ping and remind doc liaisons and PTLs to ensure the documentation is 
appropriately updated and working to ensure this does not happen in the future.
> 
> Thanks,
> 
> Alex
> 

As Thierry pointed out, we do need to consider the fact that more
projects are using the cycle-with-intermediary process, so although
we might tie dates to milestones we need to be careful that projects
not tagging milestones are still covered in any processes.

Based on a similar discussion we had with the i18n team at the PTG,
I think a good first step here is to document the agreement by
writing a governance tag with a name like doc:managed. The tag
description is the place to write down the answers to the questions
from this thread.

For example, it would list the manuals that are in scope, what
portion of the work the docs team will take on (initial writing?
reviews?), and what portion of the work the project team needs to
provide (contributing updates when major related changes happen in the code,
having a liaison, and a "checkup" at a date specified near the end
of the cycle). If there are any constraints about which projects
can apply, those should be documented, too. Maybe "independent"
projects (not following the release cycle) are not candidates, for
example.

The tag application process section should cover who can propose a
tag, and who needs to approve it. In this case, I would think the
project team PTL and docs PTL should both agree, 

Re: [openstack-dev] [kolla][infra] does someone cares about Jenkins? I stopped.

2017-03-02 Thread Paul Belanger
On Thu, Mar 02, 2017 at 11:00:26AM -0500, Clay Gerrard wrote:
> On Thu, Mar 2, 2017 at 10:41 AM, Paul Belanger 
> wrote:
> 
> >  In fact, the openstack-infra team does mirror a
> > lot of things today
> 
> 
> I bumped into this the other day:
> 
> https://specs.openstack.org/openstack-infra/infra-specs/specs/unified_mirrors.html
> 
> ... but so far haven't found any specific details on ci.openstack.org?
> Obviously the . stuff got implemented [1] - I'm
> not sure what extent you have to "opt-in" to that in gate jobs?
> 
Our entry point today for configuring worker nodes is the configure_mirror.sh[2]
for nodepool. For devstack based jobs, almost everything should be in our AFS
mirrors. For other jobs, I would agree things are likely missing.

[2] 
http://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/scripts/configure_mirror.sh

> Did someone say AFS?
> 
> https://docs.openstack.org/infra/system-config/afs.html?highlight=mirror
> 
> -Clay
> 
> 1. http://mirror.iad.rax.openstack.org/



Re: [openstack-dev] [nova][telemetry] gate breakage

2017-03-02 Thread Matt Riedemann

On 3/2/2017 8:29 AM, Matt Riedemann wrote:

On 3/2/2017 8:14 AM, Mehdi Abaakouk wrote:

Example of failure:

Our test assertion (which does a GET /v2.1/servers/detail HTTP/1.1) returns
[] instead of the list of instances:

http://logs.openstack.org/56/439156/2/check/gate-telemetry-dsvm-integration-gnocchi-ubuntu-xenial/d4a6c69/console.html#_2017-03-02_09_28_59_334619



The 'openstack server list' we issue for debugging has the same
issue:

http://logs.openstack.org/56/439156/2/check/gate-telemetry-dsvm-integration-gnocchi-ubuntu-xenial/d4a6c69/console.html#_2017-03-02_09_29_13_803796



On Thu, Mar 02, 2017 at 02:52:20PM +0100, Mehdi Abaakouk wrote:

Hello,

We have been experiencing a blocking issue with our integrated job for some
days.

Basically the job creates a heat stack and calls the nova API to list
instances to see if the stack has upscaled.

The autoscaling itself works well, but our test assertion fails because
listing nova instances doesn't work anymore. It always returns an empty
list.

I first thought https://review.openstack.org/#/c/427392/ would fix
it, because nova-api was logging some errors about cell initialisations.

But now these errors are gone, and 'openstack server list' still
returns an empty list, while 'openstack server show X' works well.

Any ideas are welcome.

Cheers,
--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht




According to logstash it looks like the regression started around 2/28
so we should look for nova changes that might be related.



Tracking this under bug:

https://bugs.launchpad.net/nova/+bug/1669473

The details are in the bug. We've figured out the root cause and I've 
got a workaround patch up in nova and Mehdi has a workaround patch up in 
devstack, and we're testing the nova workaround here:


https://review.openstack.org/#/c/440657/

--

Thanks,

Matt Riedemann



Re: [openstack-dev] [deployment][snaps][ansible][puppet][charms] OpenStack snaps delivery strategy

2017-03-02 Thread Corey Bryant
On Thu, Mar 2, 2017 at 10:57 AM, Jesse Pretorius <
jesse.pretor...@rackspace.co.uk> wrote:

> Adding the [deployment] tag as the catch-all for deployment projects as I
> think they’d be interested.
>
>
>
>
>
Ah thanks Jesse, appreciate that.  I didn't realize that tag existed.

Corey
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-docs] [docs][release][ptl] Adding docs to the release schedule

2017-03-02 Thread Doug Hellmann
Excerpts from Alexandra Settle's message of 2017-03-02 14:29:07 +:
> 
> 
> From: Anne Gentle 
> Date: Thursday, March 2, 2017 at 2:16 PM
> To: Alexandra Settle 
> Cc: "OpenStack Development Mailing List (not for usage questions)" 
> , "openstack-d...@lists.openstack.org" 
> 
> Subject: Re: [OpenStack-docs] [docs][release][ptl] Adding docs to the release 
> schedule
> 
> 
> 
> On Wed, Mar 1, 2017 at 11:52 AM, Alexandra Settle 
> > wrote:
> Hi everyone,
> 
> I would like to propose that we introduce a “Review documentation” period on 
> the release schedule.
> 
> We would formulate it as a deadline, so that it fits in the schedule and 
> making it coincide with the RC1 deadline.
> 
> For projects that are not following the milestones, we would translate this 
> new inclusion literally, so if you would like your project to be documented 
> at docs.o.o, then doc must be introduced and reviewed one month before the 
> branch is cut.
> 
> I like this idea, and it can align certain docs with string freeze logically.
> 
> I think the docs that are governed with this set of rules should be scoped 
> only to those that are synched with a release, namely the Configuration 
> Reference, Networking Guide, and Install Guides. [1]
> 
> For reference, those are the guides that would best align with "common cycle 
> with development milestones." [2]
> 
> Scope this proposal to the released guides, clarify which repo those will be 
> in, who can review and merge, and precisely when the cutoff is, and you're 
> onto something here. Plus, I can hear the translation teams cheering. :)
> 
> 
> I completely agree with everything here :) my only question is, what do you 
> mean by “clarify which repo those will be in”? I had no intention of moving 
> documentation with this suggestion; install guides stay either in 
> openstack-manuals or their own $project repos :)
> 
> Next question – since there doesn’t appear to be a huge ‘no don’t do the 
> thing’ coming from the dev list at this point, how and where do we include 
> this new release information? Here? 
> https://docs.openstack.org/project-team-guide/release-management.html#release-1
> 
> Anne
> 
> 
> 1. 
> https://docs.openstack.org/contributor-guide/blueprints-and-specs.html#release-specific-documentation
> 
> 2. 
> https://docs.openstack.org/project-team-guide/release-management.html#common-cycle-with-development-milestones
> 
> 
> In the last week since we released Ocata, it has become increasingly apparent 
> that the documentation was not updated from the development side. We were not 
> aware of a lot of new enhancements, features, or major bug fixes for certain 
> projects. This means we have released with incorrect/out-of-date 
> documentation. This reflects badly not only on our team, but also on 
> the project teams themselves.
> 
> The new inclusion to the schedule may seem unnecessary, but a lot of people 
> rely on this and the PTL drives milestones from this schedule.
> 
> From our side, I endeavor to ensure our release managers are working harder 
> to ping and remind doc liaisons and PTLs to ensure the documentation is 
> appropriately updated and working to ensure this does not happen in the 
> future.
> 
> Thanks,
> 
> Alex
> 

As Thierry pointed out, we do need to consider the fact that more
projects are using the cycle-with-intermediary process, so although
we might tie dates to milestones we need to be careful that projects
not tagging milestones are still covered in any processes.

Based on a similar discussion we had with the i18n team at the PTG,
I think a good first step here is to document the agreement by
writing a governance tag with a name like doc:managed. The tag
description is the place to write down the answers to the questions
from this thread.

For example, it would list the manuals that are in scope, what
portion of the work the docs team will take on (initial writing?
reviews?), and what portion of the work the project team needs to
provide (contributing updates when major related changes happen in the code,
having a liaison, and a "checkup" at a date specified near the end
of the cycle). If there are any constraints about which projects
can apply, those should be documented, too. Maybe "independent"
projects (not following the release cycle) are not candidates, for
example.

The tag application process section should cover who can propose a
tag, and who needs to approve it. In this case, I would think the
project team PTL and docs PTL should both agree, after having the
conversation to ensure there is full understanding about the
expectations. It sounds a bit formal, but it shouldn't be a long
conversation in most cases and the structured process will help
reduce miscommunication.

After the tag is documented, the release team can add any dates to
the schedule 

Re: [openstack-dev] [cross-project][nova][cinder][designate][neutron] Common support-matrix.py

2017-03-02 Thread Morales, Victor
I got this link[11] from Ankur; apparently Nova and Neutron have already started 
a common effort.

[11] https://review.openstack.org/#/c/330027/

Regards,
Victor Morales
irc: electrocucaracha





On 3/1/17, 5:53 PM, "Mike Perez"  wrote:

>Hey all,
>
>I kicked off a thread [1] to start talking about approaches to improving
>vendor discoverability by improving our Market Place [2]. In order to improve
>our market place, having the projects be more part of the process would make
>the information about which vendors have good support in the respective
>service more accurate.
>
>It was discovered that we have a common solution of using INI files and parsing
>that with a common support-matrix.py script that originated out of nova [3].
>I would like to propose we push this into some common sphinx extension project.
>Are there any suggestions of where that could live?
>
>I've looked at how Nova [3][4], Neutron [5][6] and Designate [7][8] are doing
>this today. Nova and Neutron are pretty close, and Designate is a much more
>simplified version. Cinder [9][10] is not using INI files, but instead going
>off the driver classes themselves. Are there any other projects I'm missing?
>
>Cinder and Designate have drivers per row, as opposed to Nova and Neutron
>which have features per row. This makes sense given the difference in drivers
>versus features?
>
>I'm assuming the Designate matrix is saying every driver supports every feature
>in its API? If so, that's so awesome and makes me happy.
>
>I would like to start brainstorming on how we can converge on a common matrix
>table design so things are a bit more consistent and easier for a common
>parsing tool.
>
>
>[1] - 
>http://lists.openstack.org/pipermail/openstack-dev/2017-January/110151.html
>[2] - https://www.openstack.org/marketplace/drivers/
>[3] - 
>https://docs.openstack.org/developer/nova/support-matrix.html#operation_maintenance_mode
>[4] - 
>http://git.openstack.org/cgit/openstack/nova/tree/doc/ext/support_matrix.py
>[5] - https://review.openstack.org/#/c/318192/76
>[6] - 
>http://docs-draft.openstack.org/92/318192/76/check/gate-neutron-docs-ubuntu-xenial/48cdeb7//doc/build/html/feature_classification/general_feature_support_matrix.html
>[7] - 
>https://git.openstack.org/cgit/openstack/designate/tree/doc/ext/support_matrix.py
>[8] - https://docs.openstack.org/developer/designate/support-matrix.html
>[9] - https://review.openstack.org/#/c/371169/15
>[10] - 
>http://docs-draft.openstack.org/69/371169/15/check/gate-cinder-docs-ubuntu-xenial/aa1bdb1//doc/build/html/driver_support_matrix.html
>
>-- 
>Mike Perez
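
For what it's worth, a common tool could stay close to the stdlib's
configparser for the INI-based matrices. A minimal sketch with hypothetical
section and key names (not nova's exact schema):

```python
import configparser

# Hypothetical INI layout loosely modeled on nova's support-matrix files;
# the real section/key names may differ.
MATRIX_INI = """
[driver.libvirt]
title = Libvirt

[operation.attach-volume]
title = Attach volume
driver.libvirt = complete
"""

def load_matrix(text):
    parser = configparser.ConfigParser()
    parser.read_string(text)
    # Driver sections name the columns of the rendered table.
    drivers = {name: parser[name]["title"]
               for name in parser.sections() if name.startswith("driver.")}
    features = {}
    for name in parser.sections():
        if not name.startswith("operation."):
            continue
        # Anything a driver doesn't declare is reported as "missing".
        features[parser[name]["title"]] = {
            d: parser[name].get(d, "missing") for d in drivers}
    return drivers, features

drivers, features = load_matrix(MATRIX_INI)
print(features)  # {'Attach volume': {'driver.libvirt': 'complete'}}
```

A shared sphinx extension would only need to add rendering on top of a
loader like this, which keeps the per-project INI files as the single
source of truth.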
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][infra] does someone cares about Jenkins? I stopped.

2017-03-02 Thread Clay Gerrard
On Thu, Mar 2, 2017 at 10:41 AM, Paul Belanger 
wrote:

>  In fact, the openstack-infra team does mirror a
> lot of things today


I bumped into this the other day:

https://specs.openstack.org/openstack-infra/infra-specs/specs/unified_mirrors.html

... but so far haven't found any specific details on ci.openstack.org?
Obviously the . stuff got implemented [1] - I'm
not sure to what extent you have to "opt-in" to that in gate jobs?

Did someone say AFS?

https://docs.openstack.org/infra/system-config/afs.html?highlight=mirror

-Clay

1. http://mirror.iad.rax.openstack.org/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment][snaps][ansible][puppet][charms] OpenStack snaps delivery strategy

2017-03-02 Thread Jesse Pretorius
Adding the [deployment] tag as the catch-all for deployment projects as I think 
they’d be interested.

From: Corey Bryant 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, March 2, 2017 at 3:31 PM
To: OpenStack Development Mailing List 
Subject: [openstack-dev] [snaps][ansible][puppet][charms] OpenStack snaps 
delivery strategy

Hi All,

I'm working to get a strategy in place for delivery of OpenStack snaps. Below 
I've outlined an initial strategy, and I'd like to get your input.

I'm particularly interested in input from snap folks of course, but also from 
projects that install OpenStack and may want to be involved in snap CI/gating 
(ie. Ansible, Puppet, Charms, etc).


First a quick background of snaps, tracks, channels, and versions (skip to 
"Strategy" if you're already familiar with these concepts):

Snaps
-
If you're not familiar with OpenStack snaps, see James Page's intro at: 
http://lists.openstack.org/pipermail/openstack-dev/2017-January/109743.html
And if you'd like to just dip your toes in the water quickly, you can give the 
openstackclients snap a try: 
https://javacruft.wordpress.com/2017/02/03/snap-install-openstackclients/

Tracks
-
Snaps have the concept of tracks, which allow for publishing different series 
of software (ie. newton, ocata, pike would be separate tracks). In each track, 
you can publish a snap to any of the 4 channels based on how stable it is.

Channels
-
Snaps can be published to 4 different channels:
* edge:  for your most recent changes, probably untested
* beta:  used to provide preview releases of tested changes
* candidate:  used to vet uploads that should require no further code changes 
before moving to stable
* stable:  what most users will consume, stable and tested

Version
--
Snaps include metadata where the software version can be specified.

For more details on tracks, channels, and versions, see: 
https://snapcraft.io/docs/reference/channels


Strategy
---
Ok on to the strategy. Below I've outlined a proposed strategy for delivering 
snaps to the 4 different channels based on the level of testing they've 
undergone. Long-term we'd like to see the majority of this process automated, 
therefore the strategy I describe here is the end goal (and hence my overuse of 
"auto-").

Track Strategy

pike:  auto-publish to pike channels according to channel strategy
ocata -> latest [0]:  auto-publish to latest (ocata) channels according to 
channel strategy
newton: auto-publish to newton channels according to channel strategy
mitaka: auto-publish to mitaka channels according to channel strategy
[0] The 'latest' track is the default for users when they install a snap. 
Therefore the 'latest' track will always include the latest stable release.

New repo in support of channel strategy
-
snap-releases:
In order to enable channel testing, and publishing, of a known good set of 
snaps across OpenStack projects, I'd like to create a new 'snap-releases' repo. 
This would be a simple repo of yaml mappings, similar to [1], that would 
contain the current tracks/channels/versions for candidate and stable channels. 
For example, the snap-releases/ocata/cinder file may have 'candidate: 9.1.1' 
and 'stable: 9.0.0'.
[1] https://github.com/openstack/releases/tree/master/deliverables

Channel Strategy
-
edge: each snap is auto-published to edge on every upstream commit
1) edge stage is triggered by each new upstream commit
2) version field is auto-populated with short git hash or pbr version
3) auto-publish snap to edge channel
notes:
* unit tests will have already passed on upstream gate by this time and prior 
to all future stages (and snaps don't apply any new patches)
* voting projects may want to vote more often than beta releases but voting on 
every edge update seems overboard; they could also vote on individual snap repo 
changes but they may be seldom.

beta: each snap is auto-published to beta on every upstream stable point 
release or development milestone
1) beta stage is triggered by new upstream release tar, e.g. cinder newton 
watches for 9.x.x
2) version field auto-populated with release version, e.g. for cinder, 
version=9.1.1
3) auto-open launchpad bug for SRU (stable release update)
4) auto-publish snap to beta channel
5) auto-propose snap-releases gerrit review to change candidate version; for 
example:
- candidate: 9.0.0, stable: 9.0.0
+ candidate: 9.1.1, stable: 9.0.0
6) voting projects smoke test, and vote, with this snap from beta, and all 
other snaps from candidate
7) SRU bug auto-updated with results of review
8) human interaction required if tests fail and may require fixed snap to be 
re-published

candidate: each snap is auto-published to candidate after successful testing 

Re: [openstack-dev] [infra] merging code from a github repo to an openstack existing repo: how ?

2017-03-02 Thread Paul Belanger
On Wed, Feb 22, 2017 at 10:31:34AM -0500, Thomas Morin wrote:
> Hi,
> 
> A bit of context to make my question clearer: openstack/networking-bagpipe
> relies on bagpipe-bgp which is not an openstack project although done by the
> same people, and we would see a significant benefit in moving the code from
> github to openstack. This post does not relate to licence/CLA questions (all
> clear AFAIK, licence is Apache, all contributions by people who are also
> Openstack contributors). It does not relate to code style or lib
> dependencies either (there are a few things to adapt, which we have
> identified and mostly covered already).
> 
> The target would be: have the content of github's bagpipe-bgp repo become a
> sub directory of the networking-bagpipe repo, and then tweak
> setup.cfg/tox.ini so that this subdirectory becomes packaged and tested.
> 
> The question is:  how to achieve that without squashing/losing all git
> history (and without pushing one gerrit change per existing commit in the
> current history) ?
> 
> Would the following work...?
> - in the github repo: prepare a 'move_to_openstack' branch  where all repo
> content is moved in a 'bagpipe_bgp' subdir
> - in networking-bagpipe repo:
>* create a 'welcome_bagpipe_bgp' branch
>* have a manual step where someone (infra team ?) adds the github repo as
> a remote and merges the remote 'move_to_openstack' branch into the
> 'welcome_bagpipe_bgp' local branch (without squashing).
>* in this 'welcome_bagpipe_bgp' branch do whatever is needed in terms of
> setup.cfg/requirements.txt/tox.ini ...
>* when everything is ready, merge the 'welcome_bagpipe_bgp' branch into
> master
> - (in the gitub repo: replace the content with an explanation message)
> 
> (If the above does not work, what other possibility ?)
> 
> -Thomas
> 
> [1]  https://github.com/Orange-OpenSource/bagpipe-bgp
> 
Just to close the loop, Yolanda was able to help with this request yesterday.
Feel free to ping #openstack-infra if you have further issues.

-PB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tricircle]Nominating Victor Morales as Tricircle Core

2017-03-02 Thread hejiawei
+1 for my vote


Best wishes
Hejiawei





At 2017-03-01 10:38:00, "joehuang"  wrote:

Hi Team,

Victor Morales has made many review contributions[1] to Tricircle since the 
Ocata cycle, and he also created the python-tricircleclient sub-project[2]. I 
would like to nominate him to be a Tricircle core reviewer. I really think his 
experience will help us substantially improve Tricircle.


 It's now time to vote :)

[1] http://stackalytics.com/report/contribution/tricircle/120
[2] https://git.openstack.org/cgit/openstack/python-tricircleclient/




Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][infra] does someone cares about Jenkins? I stopped.

2017-03-02 Thread Paul Belanger
On Thu, Mar 02, 2017 at 10:13:07AM +0100, Marcin Juszkiewicz wrote:
> I am working on some improvements for Kolla. Part of that work is
> sending patches for review.
> 
> Once a patch is sent to review.openstack.org, a set of Jenkins jobs is
> started to make sure that the patch does not break already-working
> code. And this is a good thing.
> 
> How it is done is not good ;(
> 
> 1. Kolla output is a nightmare to debug.
> 
> There is a --logs-dir option to provide separate logs for each image build,
> but it is not used. IMHO it should be, as digging through separate logs is
> easier.
> 
> Several images feel like they try to install again and again to get past
> what the distribution considers a bug - like "user XY already exists"
> errors when Debian/Ubuntu is used as the base distro. This adds several
> error messages to check/ignore.
> 
> 
> 2. Some jobs fail because "sometimes (not always) the gate can't access
> some source on the internet".
> 
> I spent most of my career building software. Having builds fail for
> such reasons was hardly acceptable when building could take hours. So we
> mirrored all sources and used our own mirror(s) as fallback. On other
> systems I used a 10-20GB w3cache to handle that for me (because there was
> no way to provide a local mirror of the sources used).
> 
> OpenStack infrastructure lacks any of it. Using a "recheck" comment in
> review to restart Jenkins jobs is not a solution - how many times does it
> have to fail before we are sure it is the patch's fault, not the
> infrastructure's?
> 
Let's not use absolutes, please. In fact, the openstack-infra team does mirror a
lot of things today [1], but it sounds like not everything your jobs need.

For the most part, the openstack-infra team is pretty accommodating about
mirroring requests. I admit, sometimes we are not the fastest to implement
them, but we try.  At the PTG, the openstack-ansible team reminded me about
setting up a mirror for mariadb, which I plan on working on today / tomorrow.
> 

> As a contributor I started to ignore Jenkins tests. Instead I do builds
> on several machines to check whether everything works with my patches. If
> something does not, then I update my patchset.
> 
Now, this is my personal opinion, not openstack-infra's: if you have to rely on
local testing over testing in the gate, I think there are some serious issues
with your project.

Testing is hard, I get that. Nobody wants to recheck their patch multiple times
to get a green check mark.  At the same time, there are very few people in
general that gravitate towards testing infrastructure (this is a general
comment, not specific to kolla).

As a suggestion, I would have a serious talk about pausing the addition of
new features to your code base and maybe work towards improving your test
coverage. Having a strong test base makes a world of difference for your
developers and morale.

I can tell you from personal experience, if a project (inside openstack or
outside) doesn't have good test coverage, and I am always fighting with flapping
tests, I find it difficult to keep contributing like you mentioned above.

[1] http://mirror.dfw.rax.openstack.org/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-03-02 Thread Julien Danjou
On Thu, Mar 02 2017, gordon chung wrote:

> On 02/03/17 10:07 AM, Julien Danjou wrote:
>> That also means we may be able to get rid of the scheduler process?
>
> i think we should probably keep it. the scheduler process of agent will 
> loop through each bucket and start dumping metrics to process on queue 
> and the processing processes will greedily just process them in parallel.
>
> if we let the processing workers do the partitioning, we'll have a lot 
> of extra calls and contentions like before. it's not much compared to 
> standard io time but it was still noticeable in previous benchmarks.

Makes sense.

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-03-02 Thread gordon chung


On 02/03/17 10:07 AM, Julien Danjou wrote:
> That also means we may be able to get rid of the scheduler process?

i think we should probably keep it. the scheduler process of agent will 
loop through each bucket and start dumping metrics to process on queue 
and the processing processes will greedily just process them in parallel.

if we let the processing workers do the partitioning, we'll have a lot 
of extra calls and contentions like before. it's not much compared to 
standard io time but it was still noticeable in previous benchmarks.
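
A rough sketch of that split (one scheduler enqueuing per-bucket work once,
workers greedily draining the queue in parallel), with stand-in names rather
than gnocchi's actual API:

```python
import queue
import threading

# Stand-in data: bucket id -> metrics with new measures to process.
BUCKETS = {0: ["cpu_util"], 1: ["disk.io"], 2: ["mem.usage"]}
work = queue.Queue()
processed = []
lock = threading.Lock()

def scheduler():
    # Partitioning happens in exactly one place: the scheduler walks the
    # buckets and dumps metrics onto the shared queue.
    for metrics in BUCKETS.values():
        for metric in metrics:
            work.put(metric)

def worker():
    # Workers never partition; they just greedily drain the queue.
    while True:
        try:
            metric = work.get(timeout=0.2)
        except queue.Empty:
            return  # queue drained; nothing left to process
        with lock:
            processed.append(metric)  # stands in for aggregating measures
        work.task_done()

scheduler()
workers = [threading.Thread(target=worker) for _ in range(3)]
for t in workers:
    t.start()
for t in workers:
    t.join()
print(sorted(processed))  # ['cpu_util', 'disk.io', 'mem.usage']
```

Keeping the partitioning in the scheduler is what avoids the extra calls
and contention mentioned above: workers only ever contend on the queue.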

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [snaps][ansible][puppet][charms] OpenStack snaps delivery strategy

2017-03-02 Thread Corey Bryant
Hi All,

I'm working to get a strategy in place for delivery of OpenStack snaps.
Below I've outlined an initial strategy, and I'd like to get your input.

I'm particularly interested in input from snap folks of course, but also
from projects that install OpenStack and may want to be involved in snap
CI/gating (ie. Ansible, Puppet, Charms, etc).


First a quick background of snaps, tracks, channels, and versions (skip to
"Strategy" if you're already familiar with these concepts):

Snaps
-
If you're not familiar with OpenStack snaps, see James Page's intro at:
http://lists.openstack.org/pipermail/openstack-dev/2017-January/109743.html
And if you'd like to just dip your toes in the water quickly, you can give
the openstackclients snap a try:
https://javacruft.wordpress.com/2017/02/03/snap-install-openstackclients/

Tracks
-
Snaps have the concept of tracks, which allow for publishing different
series of software (ie. newton, ocata, pike would be separate tracks). In
each track, you can publish a snap to any of the 4 channels based on how
stable it is.

Channels
-
Snaps can be published to 4 different channels:
* edge:  for your most recent changes, probably untested
* beta:  used to provide preview releases of tested changes
* candidate:  used to vet uploads that should require no further code
changes before moving to stable
* stable:  what most users will consume, stable and tested

Version
--
Snaps include metadata where the software version can be specified.

For more details on tracks, channels, and versions, see:
https://snapcraft.io/docs/reference/channels


Strategy
---
Ok on to the strategy. Below I've outlined a proposed strategy for
delivering snaps to the 4 different channels based on the level of testing
they've undergone. Long-term we'd like to see the majority of this process
automated, therefore the strategy I describe here is the end goal (and
hence my overuse of "auto-").

Track Strategy

pike:  auto-publish to pike channels according to channel strategy
ocata -> latest [0]:  auto-publish to latest (ocata) channels according to
channel strategy
newton: auto-publish to newton channels according to channel strategy
mitaka: auto-publish to mitaka channels according to channel strategy
[0] The 'latest' track is the default for users when they install a snap.
Therefore the 'latest' track will always include the latest stable release.

New repo in support of channel strategy
-
snap-releases:
In order to enable channel testing, and publishing, of a known good set of
snaps across OpenStack projects, I'd like to create a new 'snap-releases'
repo. This would be a simple repo of yaml mappings, similar to [1], that
would contain the current tracks/channels/versions for candidate and stable
channels. For example, the snap-releases/ocata/cinder file may have
'candidate: 9.1.1' and 'stable: 9.0.0'.
[1] https://github.com/openstack/releases/tree/master/deliverables
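
To make the shape concrete, here is a rough sketch of how a tool might work
with such an entry (the YAML parsing is elided; the two-key mapping is the
point, and all names here are hypothetical):

```python
# Hypothetical contents of snap-releases/ocata/cinder, already parsed
# into a dict (a real tool would read YAML from the repo).
entry = {"candidate": "9.0.0", "stable": "9.0.0"}

def propose_candidate_bump(entry, new_version):
    # What the auto-proposed gerrit review would change: bump candidate,
    # leave stable alone until candidate testing passes.
    updated = dict(entry)
    updated["candidate"] = new_version
    return updated

def promote_candidate_to_stable(entry):
    # The follow-up review once candidate testing succeeds.
    updated = dict(entry)
    updated["stable"] = updated["candidate"]
    return updated

print(propose_candidate_bump(entry, "9.1.1"))
# {'candidate': '9.1.1', 'stable': '9.0.0'}
```

Because the mapping is plain data, both the "bump candidate" and "promote
to stable" reviews reduce to trivial, auditable diffs.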

Channel Strategy
-
edge: each snap is auto-published to edge on every upstream commit
1) edge stage is triggered by each new upstream commit
2) version field is auto-populated with short git hash or pbr version
3) auto-publish snap to edge channel
notes:
* unit tests will have already passed on upstream gate by this time and
prior to all future stages (and snaps don't apply any new patches)
* voting projects may want to vote more often than beta releases but voting
on every edge update seems overboard; they could also vote on individual
snap repo changes but they may be seldom.

beta: each snap is auto-published to beta on every upstream stable point
release or development milestone
1) beta stage is triggered by new upstream release tar, e.g. cinder newton
watches for 9.x.x
2) version field auto-populated with release version, e.g. for cinder,
version=9.1.1
3) auto-open launchpad bug for SRU (stable release update)
4) auto-publish snap to beta channel
5) auto-propose snap-releases gerrit review to change candidate version;
for example:
- candidate: 9.0.0, stable: 9.0.0
+ candidate: 9.1.1, stable: 9.0.0
6) voting projects smoke test, and vote, with this snap from beta, and all
other snaps from candidate
7) SRU bug auto-updated with results of review
8) human interaction required if tests fail and may require fixed snap to
be re-published
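
The version watching in step 1 could be as simple as a full match per
series; a sketch with an illustrative series-to-major mapping (the real
mapping would live in configuration):

```python
import re

# Illustrative mapping: each series watches for its own major version in
# new release tarballs, e.g. cinder newton watching for 9.x.x.
SERIES_MAJOR = {"newton": 9, "ocata": 10}

def matches_series(series, version):
    """True if `version` looks like a point release for `series`."""
    major = SERIES_MAJOR[series]
    return re.fullmatch(rf"{major}\.\d+\.\d+", version) is not None

print(matches_series("newton", "9.1.1"))   # True
print(matches_series("newton", "10.0.0"))  # False
```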

candidate: each snap is auto-published to candidate after successful
testing of beta
1) candidate stage is triggered when snap-releases gerrit review for
candidate is merged
2) auto-publish snap to candidate channel
3) SRU bug tagged with verification-needed
4) auto-propose snap-releases gerrit review to change stable version; for
example:
- candidate: 9.1.1, stable: 9.0.0
+ candidate: 9.1.1, stable: 9.1.1
5) voting projects run, and vote, with this snap from candidate, and all
other snaps from stable
6) SRU bug auto-tagged verification-done or verfication-failed based on

Re: [openstack-dev] Our New Weekly(ish) Test Status Report

2017-03-02 Thread Matthew Treinish
On Tue, Feb 28, 2017 at 11:49:53AM -0500, Matthew Treinish wrote:
> Hello,
> 
> We have a few particularly annoying bugs that have been impacting the
> reliability of gate testing recently. It would be great if we could get
> volunteers to look at these bugs to improve the reliability of our testing as 
> we
> start working on Pike.
> 
> These two issues have been identified by elastic-recheck as being our biggest
> problems:
> 
> 1. SSH Banner bug http://status.openstack.org/elastic-recheck/#1349617
> 
> This bug is a longstanding issue that comes and goes and also has lots of very
> similar (but subtly different) failure modes. Tempest attempts to ssh into the
> cirros guest and is unable to after 18 attempts over the 300 sec timeout 
> window
> and fails to login. Paramiko reports that there was an issue reading the 
> banner
> returned on port 22 from the guest. This indicates that something is likely
> responding on port 22. We're working on trying to get more details on what is
> the cause here with:
> 
> https://review.openstack.org/437128

We've been doing some more debugging on this issue and made some progress
getting to the bottom of the bug. Jens Rosenboom figured out that the banner
errors are actually being caused by tempest leaking ssh connections (via
paramiko) on auth failures. Dropbear is set to only allow 5 unauthenticated
connections per ip address, which tempest would trip after 5 failed login
attempts. [1] Dropbear would just close the socket after this for login attempt
6, which would cause the banner error. We addressed this in tempest with:

https://review.openstack.org/439638 

since that has merged we haven't seen the banner failure signature anymore, but
it still hasn't solved our ssh connectivity issues. Tempest still isn't able to
log in to the guest and fails with an auth error. Kevin Benton has been looking
into this with:

https://bugs.launchpad.net/nova/+bug/1668958

and we're tracking the actual failure signature now (it only appears after
the tempest fix merged):

http://status.openstack.org/elastic-recheck/gate.html#1668958


The work here is ongoing, but we made enough progress to change the elastic
recheck signature so I figured an update was warranted.
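
To make the failure mode concrete, here is a toy model of the leak
(hypothetical names, not paramiko or dropbear code): the server caps
unauthenticated connections per source IP at 5, and a client that retries
failed logins without closing old connections hits the cap, so the sixth
connect dies while reading the banner. Closing on failure, as the tempest
fix does, keeps the count at zero:

```python
MAX_UNAUTHED = 5  # dropbear-style per-IP cap on unauthenticated sessions

class BannerError(Exception):
    pass

class Server:
    def __init__(self):
        self.unauthed = 0

    def connect(self):
        if self.unauthed >= MAX_UNAUTHED:
            # Socket closed before the SSH banner is sent.
            raise BannerError("Error reading SSH protocol banner")
        self.unauthed += 1
        return Connection(self)

class Connection:
    def __init__(self, server):
        self.server = server
        self.open = True

    def auth(self):
        return False  # credentials never become valid in this toy model

    def close(self):
        if self.open:
            self.open = False
            self.server.unauthed -= 1  # frees an unauthenticated slot

def retry_login(server, attempts, close_on_failure):
    for _ in range(attempts):
        conn = server.connect()
        try:
            if conn.auth():
                return True
        finally:
            if close_on_failure:
                conn.close()  # the tempest fix: free the slot on failure
    return False

leaky = Server()
try:
    retry_login(leaky, 6, close_on_failure=False)
except BannerError as exc:
    print(exc)  # the "banner" signature after 5 leaked connections

fixed = Server()
print(retry_login(fixed, 6, close_on_failure=True))  # False: clean auth failure
```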

Thanks,

Matt Treinish

[1] https://bugs.launchpad.net/nova/+bug/1668958/comments/4




> 
> 2. Libvirt crashes: http://status.openstack.org/elastic-recheck/#1643911 and
> http://status.openstack.org/elastic-recheck/#1646779
> 
> Libvirt is randomly crashing during the job which causes things to fail (for
> obvious reasons). To address this will likely require someone with experience
> debugging libvirt since it's most likely a bug isolated to libvirt. Tonyb has
> offered to start working on this so talk to him to coordinate efforts around
> fixing this.
> 
> The other thing to note is the oom-killer bug:
> http://status.openstack.org/elastic-recheck/gate.html#1656386 while there 
> aren't
> a lot of hits in logstash for this particular bug, it does raise an important 
> issue
> about the increased memory pressure on the test nodes. It's likely that a lot 
> of
> the instability may be related to the increased load on the nodes. As a 
> starting
> point all projects should look at their memory footprint and see where they 
> can
> trim things to try and make the situation better.
> 
> As a friendly reminder we do track bug rate incidence within our testing using
> the elastic-recheck tool. You can find that data at
> http://status.openstack.org/elastic-recheck. It can be quite useful to start
> there when determining which bugs to fix based on impact. Elastic recheck also
> maintains a list of failures that occurred without a known signature:
> http://status.openstack.org/elastic-recheck/data/integrated_gate.html
> 
> We also need some people to help maintain the list of existing queries, we 
> have
> a lot of queries for closed bugs that have no hits and others which are overly
> broad and match failures which are unrelated to the bug. This would also be a
> good task for a new person to start getting involved with. Feel free to submit
> patches to:
> https://git.openstack.org/cgit/openstack-infra/elastic-recheck/tree/queries to
> track new issues.
> 
> Thank you,
> 
> mtreinish and clarkb


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Ironic] One-time boot on OneView Drivers

2017-03-02 Thread fellypefca
Hi Ironicers. 

Recently we added a one-time boot feature to the OneView drivers. However, to 
activate one-time boot, which lets the OneView drivers save some time during 
deployment, we need to find a common place to turn it on and make the boot 
device non-persistent. 

The PXEBoot interface calls the method try_set_boot_device [1] from deploy_utils 
without passing the "persistent" parameter [2][3]; try_set_boot_device defaults 
that parameter to True. 
The problem is that the method node_set_boot_device [4] in the conductor 
defaults persistent to False. 

We use PXEBoot as our boot interface, and we implemented the one-time boot 
option to let us skip Server Profile applications when PXE is needed [5]. 
However, because PXEBoot calls try_set_boot_device with persistent defaulting 
to True, one-time boot is always disabled when PXEBoot sets the boot_device to 
PXE. Since we use PXEBoot, we rely on how it changes the boot_device to PXE. 
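To make the default mismatch concrete, here is a toy sketch (plain Python, not 
real Ironic code; the function bodies are invented for illustration, only the 
names and default values come from the message above):

```python
# Toy model of the described default mismatch (not real Ironic code).
def node_set_boot_device(node, device, persistent=False):
    # conductor-level helper: defaults to one-time (non-persistent) boot
    node['boot_device'] = (device, persistent)

def try_set_boot_device(node, device, persistent=True):
    # deploy_utils-level wrapper: defaults to persistent boot
    node_set_boot_device(node, device, persistent=persistent)

node = {}
try_set_boot_device(node, 'pxe')            # caller omits persistent
print(node['boot_device'])                  # ('pxe', True): one-time boot lost
try_set_boot_device(node, 'pxe', persistent=False)
print(node['boot_device'])                  # ('pxe', False): one-time boot
```

A caller that wants one-time boot has to pass persistent=False explicitly, 
since the wrapper's default silently overrides the conductor's.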

We have proposed a change [6] as an initial solution to be evolved. 

What would you suggest? 

Thank you. 

[1] 
https://github.com/openstack/ironic/blob/0cacd14c6e574a6acb7c6716b09b0779a35e71c9/ironic/drivers/modules/deploy_utils.py#L672
 
[2] 
https://github.com/openstack/ironic/blob/0cacd14c6e574a6acb7c6716b09b0779a35e71c9/ironic/drivers/modules/pxe.py#L424
 
[3] 
https://github.com/openstack/ironic/blob/0cacd14c6e574a6acb7c6716b09b0779a35e71c9/ironic/drivers/modules/pxe.py#L515
 
[4] 
https://github.com/openstack/ironic/blob/2427c7b59463e53b8f0257b604371587ecc47966/ironic/conductor/utils.py#L43
 
[5] 
https://github.com/openstack/ironic/blob/fec55f4a113591a6ff725f66d8ec36b89a033e61/ironic/drivers/modules/oneview/management.py#L101
 
[6] https://review.openstack.org/#/c/436469/ 



Re: [openstack-dev] [tricircle]Nominating Victor Morales as Tricircle Core

2017-03-02 Thread Morales, Victor
Thanks everyone for voting. I started recently and I’m sure that I have many 
things to learn, but something I discovered from the beginning was the 
enthusiasm and welcoming attitude of this community. I’m going to do my best 
to keep the code to a high standard.

Thanks
Victor Morales
irc: electrocucaracha

From: joehuang >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Thursday, March 2, 2017 at 1:25 AM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [tricircle]Nominating Victor Morales as Tricircle 
Core

Thank you all for your voting.

Victor, you are now included in the core team.

Best Regards
Chaoyi Huang (joehuang)



From: Yipei Niu [newy...@gmail.com]
Sent: 01 March 2017 15:06
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tricircle]Nominating Victor Morales as Tricircle 
Core

+1.


From: joehuang
Sent: 01 March 2017 11:44
To: OpenStack Development Mailing List (not for usage questions)
Subject: RE: [openstack-dev] [tricircle]Nominating Victor Morales as Tricircle 
Core

+1 from my vote.

Best Regards
Chaoyi Huang (joehuang)

From: Vega Cai [luckyveg...@gmail.com]
Sent: 01 March 2017 11:42
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tricircle]Nominating Victor Morales as Tricircle 
Core

+1

Zhiyuan

On Wed, 1 Mar 2017 at 11:41 Devale, Sindhu 
> wrote:
+1

Sindhu

From: joehuang >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Tuesday, February 28, 2017 at 8:38 PM
To: openstack-dev 
>
Subject: [openstack-dev] [tricircle]Nominating Victor Morales as Tricircle Core

Hi Team,

Victor Morales has made many review contributions [1] to Tricircle since the 
Ocata cycle, and he also created the python-tricircleclient sub-project [2]. I 
would like to nominate him to be a Tricircle core reviewer. I really think his 
experience will help us substantially improve Tricircle.

 It's now time to vote :)

[1] http://stackalytics.com/report/contribution/tricircle/120
[2] https://git.openstack.org/cgit/openstack/python-tricircleclient/


Best Regards
Chaoyi Huang (joehuang)
--
BR
Zhiyuan


Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-03-02 Thread Julien Danjou
On Thu, Mar 02 2017, gordon chung wrote:

> one of the reasons we can't effectively partition the single bucket is 
> that we have multiple agents working on it. in theory we 
> can use markers to partition the single bucket, but because of multiple 
> workers, the marker has a very high chance of disappearing.
>
> in this case, only one agent is ever working on a bucket, so it should 
> minimise the chance of a marker disappearing and therefore let us go 
> deeper into the bucket.

Makes sense. So a consistent hashring of a few thousand buckets
would solve that easily, as only one metricd will be assigned to a (lot
of) bucket(s).

That also means we may be able to get rid of the scheduler process?

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */




Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-03-02 Thread gordon chung


On 02/03/17 09:52 AM, Julien Danjou wrote:
> Sounds good. What's interesting is how you implement a shard/bucket in
> each driver. I imagine it's a container/bucket/directory.

yeah, same as whatever is used now... except more of them :)

-- 
gord


Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-03-02 Thread gordon chung


On 02/03/17 09:52 AM, Julien Danjou wrote:
>> using hashring idea, the buckets will be distributed among all the
>> > active metricd agents. the metricd agents will loop through all the
>> > assigned buckets based on processing interval. the actual processing of
>> > each bucket will be similar to what we have now: grab metrics, queue it
>> > for processing workers. the only difference is instead of just grabbing
>> > first x metrics and stopping, we keep grabbing until bucket is 'clear'.
>> > this will help us avoid the current issue where some metrics are never
>> > scheduled because the return order puts it at the end.
> It does not change the current issue that much IIUC. The only difference
> is that currently we have 1 bucket and N metricd trying to empty it, whereas
> now we would have M buckets with the N metricd spread out, so there are M/N
> buckets per metricd to empty. :)
>
> At some scale (larger than currently) it will improve things but it does
> not seem to be a drastic change.
>
> (I am also not saying that I have a better solution :)
>

one of the reasons we can't effectively partition the single bucket is 
that we have multiple agents working on it. in theory we 
can use markers to partition the single bucket, but because of multiple 
workers, the marker has a very high chance of disappearing.

in this case, only one agent is ever working on a bucket, so it should 
minimise the chance of a marker disappearing and therefore let us go 
deeper into the bucket.

-- 
gord


Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-03-02 Thread Julien Danjou
On Thu, Mar 02 2017, gordon chung wrote:

Hi gordon,

> i was thinking more about this yesterday. i've an idea.

You should have seen my face when I read that! ;-P

> how we store new measures
> -
>
> when we add new measures to be processed, the metric itself is already 
> created in the indexer, so it already has an id. with the id, we can compute 
> and store a shard/bucket location in the indexer with the metric. since 
> the metric id is a uuid, we can just mod it with the number of buckets and it 
> should give us a decent distribution. so with that, when we actually store 
> the new measure, we will look up the bucket location associated with the 
> metric.

Sounds good. What's interesting is how you implement a shard/bucket in
each driver. I imagine it's a container/bucket/directory.

> using hashring idea, the buckets will be distributed among all the 
> active metricd agents. the metricd agents will loop through all the 
> assigned buckets based on processing interval. the actual processing of 
> each bucket will be similar to what we have now: grab metrics, queue it 
> for processing workers. the only difference is instead of just grabbing 
> first x metrics and stopping, we keep grabbing until bucket is 'clear'. 
> this will help us avoid the current issue where some metrics are never 
> scheduled because the return order puts it at the end.

It does not change the current issue that much IIUC. The only difference
is that currently we have 1 bucket and N metricd trying to empty it, whereas
now we would have M buckets with the N metricd spread out, so there are M/N
buckets per metricd to empty. :)

At some scale (larger than currently) it will improve things but it does
not seem to be a drastic change.

(I am also not saying that I have a better solution :)

> we'll have a new agent (name here). this will walk through each metric 
> in our indexer, recompute a new bucket location, and set it. this will 
> make all new incoming points be pushed to new location. this agent will 
> also go to old location (if different) and process any unprocessed 
> measures of the metric. it will then move on to next metric until complete.
>
> there will probably need to be a state/config table or something so 
> indexer knows bucket size.
>
> i also think there might be a better partitioning technique to minimise 
> the number of metrics that change buckets... need to think about that more.

Yes, it's called consistent hashing, and that's what Swift and the like
are using.

Basically the idea is to create A LOT of buckets (more than your maximum
number of potential metricd workers), let's say 2^16, and then distribute
those buckets across your metricds, e.g. if you have 10 metricd they will
each be responsible for ~6,554 buckets; when an 11th metricd comes up, you
just have to recompute who's responsible for which bucket. This is exactly
what tooz's new partitioner system provides, and we can leverage it easily:

  https://github.com/openstack/tooz/blob/master/tooz/partitioner.py#L25

All we have to do is create a lot of buckets and ask tooz which buckets
belong to each metricd, and then make them poll over and over again
(sigh) to empty them.

This makes sure you DON'T have to rebalance your buckets like you
proposed earlier, which is costly, long and painful.
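As a rough illustration of the distribution property described above (tooz's 
partitioner is the real mechanism; this stdlib-only sketch uses rendezvous 
hashing instead of tooz's hashring, and the agent names and bucket count are 
made up):

```python
# Stdlib-only sketch: map 2^16 buckets onto 10 metricd agents with
# rendezvous hashing; adding an 11th agent only moves the buckets the
# new agent takes over, with no global rebalance.
import hashlib

NUM_BUCKETS = 2 ** 16

def owner(bucket, agents):
    # each (agent, bucket) pair gets a deterministic score; the agent
    # with the highest score owns the bucket
    return max(agents, key=lambda a: hashlib.md5(
        ("%s-%d" % (a, bucket)).encode()).hexdigest())

agents = ["metricd-%d" % i for i in range(10)]
counts = {a: 0 for a in agents}
for b in range(NUM_BUCKETS):
    counts[owner(b, agents)] += 1

# each agent ends up owning roughly 65536 / 10 ~= 6554 buckets
print(sorted(counts.values()))
```

If an 11th agent joins, recomputing `owner` per bucket reassigns only about 
1/11 of the buckets; every other bucket keeps its current owner.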

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info




Re: [openstack-dev] [docs][release][ptl] Adding docs to the release schedule

2017-03-02 Thread Ian Cordasco
-Original Message-
From: Telles Nobrega 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: March 2, 2017 at 08:01:29
To: OpenStack Development Mailing List (not for usage questions)

Cc: openstack-d...@lists.openstack.org 
Subject:  Re: [openstack-dev] [docs][release][ptl] Adding docs to the
release schedule

> I really believe that this idea will make us work harder on keeping our
> docs in place and will make for a better documented product by release
> date.
> As shared before, I do believe that this isn't easy and will demand a
> lot of effort from some teams, especially smaller teams with too much to do,
> but we from Sahara are on board with this approach and will try our best to
> do so.

Most things worth doing are difficult. =) This seems to be one of
them. If deliverable teams really work together, they may end up in a
situation like Glance did this cycle where we kind of just sat on our
hands after RC-1 was tagged. That time *could* have been better spent
reviewing all of our documentation.

I know projects go sideways all the time. I think this was Glance's
first cycle like this in a few cycles. But if we can make a habit of
creating excellent release candidates, we can spend the intermediate
time on documentation. I think that's a good compromise.

--
Ian Cordasco



Re: [openstack-dev] [all][swg] per-project "Business only" moderated mailing lists

2017-03-02 Thread Ian Cordasco
-Original Message-
From: Chris Dent 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: March 2, 2017 at 06:50:11
To: OpenStack Development Mailing List (not for usage questions)

Subject:  Re: [openstack-dev] [all][swg] per-project "Business only"
moderated mailing lists

> On Mon, 27 Feb 2017, Clint Byrum wrote:
>
> > So, I'll ask more generally: do you believe that the single openstack-dev
> > mailing list is working fine and we should change nothing? If not, what
> > problems has it created for you?
>
> No, it is not working fine, but that may be normal and the best we
> can do.

Specifically, it seems to be *broken* for our Release Team that needs
to communicate in an efficient way with PTLs and Release CPLs.

I understand the purpose of these business-only lists would also allow
for mascots to be sent there, but I liked the ability to participate
in the discussions of other mascots. Granted, I didn't do it, but I
was also happy to read their conversations.

I don't think there's enough "noise" traffic to justify individual
lists. Would it perhaps be better to have a mailing list that the
release team can use to reach PTLs and Release CPLs with
notifications? Would that satisfy it? At that point, it would be up to
each PTL to subscribe, etc., but it also prevents the Release team
from having to generate N emails where N is the number of
"business-only" lists.

--
Ian Cordasco



Re: [openstack-dev] [nova][telemetry] gate breakage

2017-03-02 Thread Matt Riedemann

On 3/2/2017 8:14 AM, Mehdi Abaakouk wrote:

Example of failure:

Our test assertion (which does a GET /v2.1/servers/detail HTTP/1.1)
returns [] instead of the list of instances:

http://logs.openstack.org/56/439156/2/check/gate-telemetry-dsvm-integration-gnocchi-ubuntu-xenial/d4a6c69/console.html#_2017-03-02_09_28_59_334619


The 'openstack server list' we issue for debugging has the same
issue:

http://logs.openstack.org/56/439156/2/check/gate-telemetry-dsvm-integration-gnocchi-ubuntu-xenial/d4a6c69/console.html#_2017-03-02_09_29_13_803796


On Thu, Mar 02, 2017 at 02:52:20PM +0100, Mehdi Abaakouk wrote:

Hello,

We have been experiencing a blocking issue with our integrated job for some
days.

Basically the job creates a heat stack and calls the nova API to list
instances to see whether the stack has scaled up.

The autoscaling itself works well, but our test assertion fails because
listing nova instances doesn't work anymore. It always returns an empty
list.

I first thought https://review.openstack.org/#/c/427392/ would fix
it, because nova-api was logging some errors about cell initialisation.

But now these errors are gone, and 'openstack server list' still
returns an empty list, while 'openstack server show X' works fine.

Any ideas are welcome.

Cheers,
--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht




According to logstash it looks like the regression started around 2/28 
so we should look for nova changes that might be related.


--

Thanks,

Matt Riedemann



Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2017-03-02 Thread gordon chung


On 15/11/16 04:53 AM, Julien Danjou wrote:
> Yeah in the case of the Swift driver for Gnocchi, I'm not really sure
> how much buckets we should create. Should we make the user pick a random
> number like the number of partition in Swift and then create the
> containers in Swift? Or can we have something simpler? (I like automagic
> things). WDYT Gordon?


i was thinking more about this yesterday. i've an idea.


how we store new measures
-

when we add new measures to be processed, the metric itself is already 
created in the indexer, so it already has an id. with the id, we can compute 
and store a shard/bucket location in the indexer with the metric. since 
the metric id is a uuid, we can just mod it with the number of buckets and it 
should give us a decent distribution. so with that, when we actually store 
the new measure, we will look up the bucket location associated with the 
metric.
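a minimal sketch of that assignment (plain Python; the bucket count of 32 is 
only the default floated later in this thread, and the example uuid is made 
up):

```python
# uuid -> 128-bit int -> stable bucket index via mod
import uuid

NUM_BUCKETS = 32

def bucket_for(metric_id):
    # the same metric id always maps to the same bucket
    # (for a fixed bucket count)
    return uuid.UUID(metric_id).int % NUM_BUCKETS

m = "74f203c1-0b9b-4e0c-8c6d-3a4d1a0e5f2b"
print(bucket_for(m))  # same bucket every time for this metric id
```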


how we process measures
---

using hashring idea, the buckets will be distributed among all the 
active metricd agents. the metricd agents will loop through all the 
assigned buckets based on processing interval. the actual processing of 
each bucket will be similar to what we have now: grab metrics, queue it 
for processing workers. the only difference is instead of just grabbing 
first x metrics and stopping, we keep grabbing until bucket is 'clear'. 
this will help us avoid the current issue where some metrics are never 
scheduled because the return order puts it at the end.


how we change bucket size
-

we'll have a new agent (name here). this will walk through each metric 
in our indexer, recompute a new bucket location, and set it. this will 
make all new incoming points be pushed to new location. this agent will 
also go to old location (if different) and process any unprocessed 
measures of the metric. it will then move on to next metric until complete.

there will probably need to be a state/config table or something so 
indexer knows bucket size.

i also think there might be a better partitioning technique to minimise 
the number of metrics that change buckets... need to think about that more.


what we set default bucket size to
--

32? say we aim for a default of 10K metrics, that puts ~310 metrics (and their 
measure objects from POST) in each bucket... or 64?
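a quick sanity check of that arithmetic (illustrative only): distributing 
10,000 random metric uuids over 32 buckets by mod lands close to 
10000 / 32 ~= 312 per bucket:

```python
# distribute 10,000 random uuids over 32 buckets by mod and
# inspect the spread
import uuid
from collections import Counter

NUM_BUCKETS = 32
counts = Counter(uuid.uuid4().int % NUM_BUCKETS for _ in range(10000))

# 32 buckets, each holding roughly 312 metrics
print(len(counts), min(counts.values()), max(counts.values()))
```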


cheers,

-- 
gord



Re: [openstack-dev] [OpenStack-docs] [docs][release][ptl] Adding docs to the release schedule

2017-03-02 Thread Alexandra Settle


From: Anne Gentle 
Date: Thursday, March 2, 2017 at 2:16 PM
To: Alexandra Settle 
Cc: "OpenStack Development Mailing List (not for usage questions)" 
, "openstack-d...@lists.openstack.org" 

Subject: Re: [OpenStack-docs] [docs][release][ptl] Adding docs to the release 
schedule



On Wed, Mar 1, 2017 at 11:52 AM, Alexandra Settle 
> wrote:
Hi everyone,

I would like to propose that we introduce a “Review documentation” period on 
the release schedule.

We would formulate it as a deadline so that it fits in the schedule, making it 
coincide with the RC1 deadline.

For projects that are not following the milestones, we would translate this new 
inclusion literally, so if you would like your project to be documented at 
docs.o.o, then doc must be introduced and reviewed one month before the branch 
is cut.

I like this idea, and it can align certain docs with string freeze logically.

I think the docs that are governed with this set of rules should be scoped only 
to those that are synched with a release, namely the Configuration Reference, 
Networking Guide, and Install Guides. [1]

For reference, those are the guides that would best align with "common cycle 
with development milestones." [2]

Scope this proposal to the released guides, clarify which repo those will be 
in, who can review and merge, and precisely when the cutoff is, and you're onto 
something here. Plus, I can hear the translation teams cheering. :)


I completely agree with everything here :) My only question is, what do you 
mean by “clarify which repo those will be in”? I had no intention of moving 
documentation with this suggestion; install guides stay either in 
openstack-manuals or their own $project repos :)

Next question – since there doesn’t appear to be a huge ‘no don’t do the thing’ 
coming from the dev list at this point, how and where do we include this new 
release information? Here? 
https://docs.openstack.org/project-team-guide/release-management.html#release-1

Anne


1. 
https://docs.openstack.org/contributor-guide/blueprints-and-specs.html#release-specific-documentation

2. 
https://docs.openstack.org/project-team-guide/release-management.html#common-cycle-with-development-milestones


In the last week since we released Ocata, it has become increasingly apparent 
that the documentation was not updated from the development side. We were not 
aware of a lot of new enhancements, features, or major bug fixes for certain 
projects. This means we have released with incorrect/out-of-date documentation. 
This is not only an unfortunately bad reflection on our team, but on the 
project teams themselves.

The new inclusion to the schedule may seem unnecessary, but a lot of people 
rely on this and the PTL drives milestones from this schedule.

From our side, I endeavor to ensure our release managers are working harder to 
ping and remind doc liaisons and PTLs to ensure the documentation is 
appropriately updated and working to ensure this does not happen in the future.

Thanks,

Alex


___
OpenStack-docs mailing list
openstack-d...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-docs



--

Read my blog: justwrite.click
Subscribe to Docs|Code: docslikecode.com


Re: [openstack-dev] [OpenStack-docs] [docs][release][ptl] Adding docs to the release schedule

2017-03-02 Thread Anne Gentle
On Wed, Mar 1, 2017 at 11:52 AM, Alexandra Settle 
wrote:

> Hi everyone,
>
>
>
> I would like to propose that we introduce a “Review documentation” period
> on the release schedule.
>
>
>
> We would formulate it as a deadline, so that it fits in the schedule and
> making it coincide with the RC1 deadline.
>
>
>
> For projects that are not following the milestones, we would translate
> this new inclusion literally, so if you would like your project to be
> documented at docs.o.o, then doc must be introduced and reviewed one month
> before the branch is cut.
>

I like this idea, and it can align certain docs with string freeze
logically.

I think the docs that are governed with this set of rules should be scoped
only to those that are synched with a release, namely the Configuration
Reference, Networking Guide, and Install Guides. [1]

For reference, those are the guides that would best align with "common
cycle with development milestones." [2]

Scope this proposal to the released guides, clarify which repo those will
be in, who can review and merge, and precisely when the cutoff is, and
you're onto something here. Plus, I can hear the translation teams
cheering. :)

Anne


1.
https://docs.openstack.org/contributor-guide/blueprints-and-specs.html#release-specific-documentation

2.
https://docs.openstack.org/project-team-guide/release-management.html#common-cycle-with-development-milestones


>
> In the last week since we released Ocata, it has become increasingly
> apparent that the documentation was not updated from the development side.
> We were not aware of a lot of new enhancements, features, or major bug
> fixes for certain projects. This means we have released with
> incorrect/out-of-date documentation. This is not only an unfortunately bad
> reflection on our team, but on the project teams themselves.
>
>
>
> The new inclusion to the schedule may seem unnecessary, but a lot of
> people rely on this and the PTL drives milestones from this schedule.
>
>
>
> From our side, I endeavor to ensure our release managers are working
> harder to ping and remind doc liaisons and PTLs to ensure the documentation
> is appropriately updated and working to ensure this does not happen in the
> future.
>
>
>
> Thanks,
>
>
>
> Alex
>
>
>
>
>


-- 

Read my blog: justwrite.click 
Subscribe to Docs|Code: docslikecode.com


Re: [openstack-dev] [nova][telemetry] gate breakage

2017-03-02 Thread Mehdi Abaakouk

Example of failure:

Our test assertion (which does a GET /v2.1/servers/detail HTTP/1.1)
returns [] instead of the list of instances:

http://logs.openstack.org/56/439156/2/check/gate-telemetry-dsvm-integration-gnocchi-ubuntu-xenial/d4a6c69/console.html#_2017-03-02_09_29_13_803796

The 'openstack server list' we issue for debugging has the same
issue:

http://logs.openstack.org/56/439156/2/check/gate-telemetry-dsvm-integration-gnocchi-ubuntu-xenial/d4a6c69/console.html#_2017-03-02_09_29_13_803796

On Thu, Mar 02, 2017 at 02:52:20PM +0100, Mehdi Abaakouk wrote:

Hello,

We have been experiencing a blocking issue with our integrated job for some
days.

Basically the job creates a heat stack and calls the nova API to list
instances to see whether the stack has scaled up.

The autoscaling itself works well, but our test assertion fails because
listing nova instances doesn't work anymore. It always returns an empty
list.

I first thought https://review.openstack.org/#/c/427392/ would fix
it, because nova-api was logging some errors about cell initialisation.

But now these errors are gone, and 'openstack server list' still
returns an empty list, while 'openstack server show X' works fine.

Any ideas are welcome.

Cheers,
--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht



Re: [openstack-dev] [docs][release][ptl] Adding docs to the release schedule

2017-03-02 Thread Telles Nobrega
I really believe that this idea will make us work harder on keeping our
docs in place and will make for a better documented product by release
date.
As shared before, I do believe that this isn't easy and will demand a
lot of effort from some teams, especially smaller teams with too much to do,
but we from Sahara are on board with this approach and will try our best to
do so.

Thanks,

On Thu, Mar 2, 2017 at 7:33 AM Alexandra Settle 
wrote:

>
>
>
>
> *From: *John Dickinson 
> *Reply-To: *"OpenStack Development Mailing List (not for usage
> questions)" 
> *Date: *Wednesday, March 1, 2017 at 11:50 PM
> *To: *OpenStack Development Mailing List <
> openstack-dev@lists.openstack.org>
> *Cc: *"openstack-d...@lists.openstack.org" <
> openstack-d...@lists.openstack.org>
> *Subject: *Re: [openstack-dev] [docs][release][ptl] Adding docs to the
> release schedule
>
>
>
> On 1 Mar 2017, at 10:07, Alexandra Settle wrote:
>
> On 3/1/17, 5:58 PM, "John Dickinson"  wrote:
>
>
>
> On 1 Mar 2017, at 9:52, Alexandra Settle wrote:
>
> > Hi everyone,
> >
> > I would like to propose that we introduce a “Review documentation”
> period on the release schedule.
> >
> > We would formulate it as a deadline, so that it fits in the schedule and
> making it coincide with the RC1 deadline.
> >
> > For projects that are not following the milestones, we would translate
> this new inclusion literally, so if you would like your project to be
> documented at docs.o.o, then doc must be introduced and reviewed one month
> before the branch is cut.
>
> Which docs are these? There are several different sets of docs that are
> hosted on docs.o.o that are managed within a project repo. Are you saying
> those won't get pushed to
> docs.o.o if they are patched within a month of the cycle release?
>
> The only sets of docs that are published on the docs.o.o site that are
> managed in project-specific repos is the project-specific installation
> guides. That management is entirely up to the team themselves, but I would
> like to push for the integration of a “documentation review” period to
> ensure that those teams are reviewing their docs in their own tree.
>
> This is a preferential suggestion, not a demand. I cannot make you review
> your documentation at any given period.
>
> The ‘month before’ that I refer to would be for introduction of
> documentation and a review period. I will not stop any documentation being
> pushed to the repo unless, of course, it is untested and breaks the
> installation process.
>
> There's the dev docs, the install guide, and the api reference. Each of
> these are published at docs.o.o, and each have elements that need to be
> up-to-date with a release.
>
> >
> > In the last week since we released Ocata, it has become increasingly
> apparent that the documentation was not updated from the development side.
> We were not aware of a lot of new enhancements, features, or major bug
> fixes for certain projects. This means we have released with
> incorrect/out-of-date documentation. This is not only an unfortunately bad
> reflection on our team, but on the project teams themselves.
> >
> > The new inclusion to the schedule may seem unnecessary, but a lot of
> people rely on this and the PTL drives milestones from this schedule.
> >
> > From our side, I endeavor to ensure our release managers are working
> harder to ping and remind doc liaisons and PTLs to ensure the documentation
> is appropriately updated and working to ensure this does not happen in the
> future.
>
> Overall, I really like the general concept here. It's very important to
> have good docs. Good docs start with the patch, and we should be
> encouraging the idea of "patch must have both tests and docs before
> landing".
>
> I’m glad to hear you think so :) this is entirely my thought process.
>
> On a personal note, though, I think I'll find this pretty tough. First,
> it's really hard for me to define when docs are "done", so it's hard to
> know that the docs are "right" at the time of release. Second, docs are
> built and published at each commit, so updating the docs "later, in a
> follow-on patch" is a simple thing to hope for and gives fast feedback,
> even after a release. (Of course the challenge is actually *doing* the
> patch later--see my previous paragraph.)
>
> So, unfortunately, I can give you no promise this was ever intended to be
> an easy inclusion. But in fairness, this is something teams should have
> already been doing.
>
> However, as a PTL – you already have enough on your plate. We recommend a
> docs liaison that is not the PTL so that the individual is able to dedicate
> time to reviewing the documentation to the best of their ability. The docs
> being “done” = all new features that have a user impact are documented, and
> “right” = the user is able to install $project without major incident.
>
> However, to reiterate my point before – we cannot force any team to do
> anything, but we would like to start actively encouraging the project teams
> to start seeing documentation as an important part of the release process,
> just as they would anything else.

[openstack-dev] [nova][telemetry] gate breakage

2017-03-02 Thread Mehdi Abaakouk

Hello,

We have been experiencing a blocking issue with our integrated job for some
days now.

Basically the job creates a Heat stack and calls the Nova API to list
instances to see whether the stack has scaled up.

The autoscaling itself works well, but our test assertion fails because
listing Nova instances doesn't work anymore: it always returns an empty
list.

I first thought https://review.openstack.org/#/c/427392/ would fix it,
because nova-api was logging some errors about cell initialisation.

But now these errors are gone, and 'openstack server list' still
returns an empty list, while 'openstack server show X' works well.
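
For what it's worth, one thing worth ruling out (an assumption on my side,
not a confirmed diagnosis): since Ocata, 'server list' relies on the Cells
v2 mappings in the API database, so instances that were never mapped to a
cell can be shown individually but not appear in listings. Roughly:

```shell
# Check whether cell mappings exist and instances are mapped (Ocata nova-manage):
nova-manage cell_v2 list_cells                # expect cell0 plus a real cell
nova-manage cell_v2 simple_cell_setup         # create cell0/cell mappings if missing
nova-manage cell_v2 map_instances --cell_uuid <CELL_UUID>
openstack server list                         # instances should now appear
```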

Any ideas are welcome.

Cheers,
--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] broken installation at RHEL-based distros

2017-03-02 Thread Sean Dague
On 03/02/2017 08:18 AM, Evgeny Antyshev wrote:
> Hello, devstack!
> 
> I want to draw some attention to the fact that the install_libvirt function
> now (since https://review.openstack.org/#/c/438325 landed)
> only works for CentOS 7, but not for other RHEL-based distributions:
> Virtuozzo and, probably, RHEV.
> 
> Both of the above have their own version of the qemu-kvm package:
> qemu-kvm-vz and qemu-kvm-rhev, respectively. These packages provide
> "qemu-kvm", like qemu-kvm-ev, and when you call "yum install qemu-kvm" they
> replace the default OS package.
> 
> To solve this, I propose installing by the "qemu-kvm" name, as in the patch:
> https://review.openstack.org/440353

I think that seems fine, but I would like Ian to confirm it won't hurt the
CentOS 7 work.

-Sean

-- 
Sean Dague
http://dague.net



[openstack-dev] [nova] upgrade connection_info when Ceph mon IP changed

2017-03-02 Thread Rajesh Tailor
>On Wed, 18 May 2016 14:30:00 -0500, Matt Riedemann wrote:
>>> While convenient as a workaround, I'm not in favor of the idea of adding
>>> something to the REST API so a user can force refresh the connection
>>> info - this is a bug and leaks information out of the API about how the
>>> cloud is configured. If you didn't have volumes attached to the instance
>>> at all then this wouldn't matter.
>>>
>>> I think in an earlier version of the patch it was reloading and checking
>>> the connection info every time the BDM list was retrieved for an
>>> instance, which was a major issue for normal operations where this isn't
>>> a problem.
>>>
>>> Since it's been scoped to just start/reboot operations, it's better, and
>>> there are comments in the patch to make it a bit more efficient also
>>> (avoid calling the DB multiple times for the same information).
>>>
>>> I'm not totally opposed to doing the refresh on start/reboot. We could
>>> make it configurable, so if you're using a storage server backend where
>>> the IP might change, then set this flag, but that's a bit clunky. And a
>>> periodic task wouldn't help us out.
>>>
>>> I'm open to other ideas if anyone has them.
>>
>>
>> I was thinking it may be possible to do something similar to how network
>> info is periodically refreshed in _heal_instance_info_cache [1]. The
>> task interval is configurable (defaults to 60 seconds) and works on a
>> queue of instances such that one is refreshed per period, to control the
>> load on the host. To avoid doing anything for storage backends that
>> can't change IP, maybe we could make the task return immediately after
>> calling a driver method that would indicate whether the storage backend
>> can be affected by an IP change.
>>
>> There would be some delay until the task runs on an affected instance,
>> though.
>>
>> -melanie
>>
>>
>> [1]
>>
https://github.com/openstack/nova/blob/9a05d38/nova/compute/manager.py#L5549
>>
>>
>>
>
>I like this idea. Sure it's a delay, but it resolves the problem
>eventually and doesn't add the overhead to the start/reboot operations
>that should mostly be unnecessary if things are working.
>
>I like the short-circuit idea too, although that's a nice to have. A
>deployer can always disable the periodic task if they don't want that
>running.
>
>--
>
>Thanks,
>
>Matt Riedemann
>

Hi Matt,

I was wondering whether it could be done on restart of the nova-compute
service. If an operator is going to change storage node IPs, they might need
to restart at least some services anyway, so we could ask the operator to
restart the nova-compute service as well if instances on that compute node
are going to be affected by the IP change.

To fix this issue, we could hard reboot the affected instances on restart of
the nova-compute service; by doing so, the updated connection info gets
stored in the BDM table as well as recorded in the domain XML.

Please correct me if I am wrong.
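
To make the periodic-task idea from earlier in the thread concrete, here is
a minimal, self-contained sketch (plain Python, not actual nova code — the
driver flag and refresh helper are invented for illustration) of healing one
instance's volume connection info per periodic run, with a short-circuit for
backends whose address cannot change:

```python
# Sketch of the periodic-task approach discussed above (not nova code):
# refresh volume connection_info for one instance per period, skipping
# drivers whose storage backend address can never change.
from collections import deque


class FakeDriver:
    # e.g. True for Ceph, where monitor IPs may change
    backend_ip_can_change = True


def refresh_connection_info(instance):
    # Stand-in for re-fetching connection_info from Cinder and
    # persisting it to the BDM table.
    instance['connection_info'] = 'refreshed'


class Manager:
    def __init__(self, driver, instances):
        self.driver = driver
        self._queue = deque(instances)

    def _heal_volume_connection_info(self):
        """Periodic task: heal at most one instance per run."""
        if not self.driver.backend_ip_can_change:
            return                        # short-circuit for static backends
        if not self._queue:
            return
        instance = self._queue.popleft()
        refresh_connection_info(instance)
        self._queue.append(instance)      # round-robin through instances
```

The real task would live next to _heal_instance_info_cache in the compute
manager and use the configured task interval; the round-robin queue keeps
the per-period load at one instance per host.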

Regards,
Rajesh


[openstack-dev] [devstack] broken installation at RHEL-based distros

2017-03-02 Thread Evgeny Antyshev

Hello, devstack!

I want to draw some attention to the fact that the install_libvirt function
now (since https://review.openstack.org/#/c/438325 landed)
only works for CentOS 7, but not for other RHEL-based distributions:
Virtuozzo and, probably, RHEV.

Both of the above have their own version of the qemu-kvm package:
qemu-kvm-vz and qemu-kvm-rhev, respectively. These packages provide
"qemu-kvm", like qemu-kvm-ev, and when you call "yum install qemu-kvm" they
replace the default OS package.

To solve this, I propose installing by the "qemu-kvm" name, as in the patch:
https://review.openstack.org/440353
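
For illustration, the idea might look roughly like this inside
install_libvirt (a sketch only, assuming devstack's is_fedora and
install_package helpers are in scope; the actual patch linked above is
authoritative):

```bash
# Sketch: install by the virtual "qemu-kvm" name and let yum resolve it to
# whichever provider the distro repos carry (qemu-kvm, qemu-kvm-ev,
# qemu-kvm-rhev on RHEV, qemu-kvm-vz on Virtuozzo).
if is_fedora; then
    install_package qemu-kvm
    install_package libvirt libvirt-devel
fi
```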



Re: [openstack-dev] [mistral] Proposing Michal Gershenzon to the core team

2017-03-02 Thread Gershenzon, Michal (Nokia - IL)
Thank you all, I am happy to join the core team and take on the new
responsibilities that come with it :)


All the best,

Michal Gershenzon
Software Engineer, CloudBand
Application & Analytics , Nokia
Contact number: +972 9 793 3163

From: Renat Akhmerov [mailto:renat.akhme...@gmail.com]
Sent: Thursday, March 02, 2017 9:01 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [mistral] Proposing Michal Gershenzon to the core 
team

Thank you guys!

Michal, you’re now included to the core team.

Renat Akhmerov
@Nokia

On 2 Mar 2017, at 03:50, Lingxian Kong wrote:

+1, she has indeed been making great contributions to Mistral. Welcome, Michal :-)


Cheers,
Lingxian Kong (Larry)

On Thu, Mar 2, 2017 at 5:47 AM, Renat Akhmerov wrote:
Hi,

Based on Michal Gershenzon's stats in the Ocata cycle [1], I'd like to
promote her to the core team. Michal works at Nokia CloudBand, and as a
CloudBand engineer she knows Mistral very well as a user; behind the scenes
she has helped find a lot of bugs and make countless improvements,
especially in performance.

Overall, she is a deep thinker, cares about details, and always has an
unusual angle on any technical problem. She is one of the few people I know
whom I could call a Mistral expert. She also participates in almost every
community meeting on IRC.

In Ocata she improved her statistics pretty significantly (e.g. ~60 reviews,
although the cycle was very short) and is keeping up the good pace now.
Also, Michal is officially planning to allocate more time for upstream
development in Pike.

I believe Michal would be a great addition to the Mistral core team.

Please let me know if you agree with that.

Thanks

[1] 
http://stackalytics.com/?module=mistral-group=ocata_id=michal-gershenzon

Renat Akhmerov
@Nokia




Re: [openstack-dev] [keystone] [nova] keystonauth catalog work arounds hiding transition issues

2017-03-02 Thread Chris Dent

On Mon, 27 Feb 2017, Sean Dague wrote:


However, when there is magic applied it means that stops being true. And
now folks think the APIs work like the magic works, not realizing it's
all client side magic, and when they try to do this in node next month,
it will all fall apart.


+many

It's good we have a plan (elsewhere in the thread) to get things
smooth again, but we should also see if we can articulate something
along the lines of "design goals" so that this kind of thing is
decreasingly common.

We've become relatively good at identifying when the problem exists:
If you find yourself justifying some cruft on side A for behavior on
side B we know that's a problem for other users of B. What we're less
good at is evolving B quickly enough such that A doesn't have to
compensate. There's likely no easy solution that also accounts for
compatibility.

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent


[openstack-dev] [kolla] Making a deployment guide for kolla-kubernetes

2017-03-02 Thread Steven Dake (stdake)
Hey folks,

We decided in Wednesday’s 16:00 UTC Kolla meeting to start tackling some of the 
blocking items for 1.0.0 of kolla-kubernetes.  One of those items is a 
deployment guide.  The start of the deployment guide is here:
https://etherpad.openstack.org/p/kolla-kubernetes-deploy-guide-BP

We will be defining the deployment guide documentation using etherpad to ease 
collaboration.  Please follow the instructions.  If you have questions, ask on 
#openstack-kolla.

Regards
-steve



Re: [openstack-dev] [nova][docs] Is anyone interested in being the docs liaison for Nova?

2017-03-02 Thread Nicolas Bock

Hi Matt,

I'd be interested as well. Maybe Zhenyu and I could split this 
role, although I am not a native English speaker either :)


Nick

On Wed, Mar 01, 2017 at 09:45:05AM -0600, Matt Riedemann wrote:

There is a need for a liaison from Nova for the docs team to help with
compute-specific docs in the install guide and various manuals.

For example, we documented placement and cells v2 in the nova devref in
Ocata but instructions on those aren't in the install guide, so the docs
team is adding that here [1].

I'm not entirely sure what the docs liaison role consists of, but I
assume it at least means attending docs meetings, helping to review docs
patches that are related to nova, helping to alert the docs team of big
changes coming in a release that will impact the install guide, etc.

From my point of view, I've historically pushed nova developers to be
documenting new features within the nova devref since it was "closer to
home" and could be tied to landing said feature in the nova tree, so
there was more oversight on the docs actually happening *somewhere*
rather than a promise to work them in the non-nova manuals, which a lot
of the time was lip service and didn't actually happen once the feature
was in. But there is still the need for the install guide as the first
step to deploying nova so we need to balance both things.

If no one else steps up for the docs liaison role, by default it lands
on me, so I'd appreciate any help here.

[1] https://review.openstack.org/#/c/438328/

--

Thanks,

Matt Riedemann





Re: [openstack-dev] [deployment][TripleO][kolla][ansible][fuel] Next steps for cross project collaboration

2017-03-02 Thread Andy McCrae
On 2 March 2017 at 10:18, Steven Hardy  wrote:

> On Thu, Mar 02, 2017 at 05:06:34AM +, Brandon B. Jozsa wrote:
> >+1 for the monthly meetings and a long standing, cross-team,
> collaborative
> >etherpad.
>
> Yes the etherpad is a good idea, thanks!
>
> I guess we'll want one per-release, so I created one for Pike here:
>
> https://etherpad.openstack.org/p/deployment-pike


> Everyone please feel free to add relevant content/links there, thanks!
>

This is already being fleshed out, which is great to see.
Hopefully we can get some higher level focus points as a result.


> I also went ahead and created the WG wiki page:
>
> https://wiki.openstack.org/wiki/Deployment
>
> Which is linked from:
>
> https://wiki.openstack.org/wiki/Category:Working_Groups
>
> I referenced the agreed [deployment] openstack-dev tag, and the new IRC
> channel which Steve set up (thanks) #openstack-deployment
>
> Again, please feel free to edit if I missed anything.
>
> Please can anyone wanting to help with organizing (e.g chairing meetings if
> we have them, proactively seeking cross-project things to discuss, and
> helping with sessions when we meet f2f next time) please add your name and
> email to the Deployment Wiki page.
>

Done! Looking forward to the collaboration, and thanks for sorting out the
Wiki Steve.


>
> >It’s challenging for some projects who already collaborate heavily
> with
> >communities outside of OpenStack to take on additional heavy meeting
> >cycles.
>
> Yeah I think there's enough interest in semi-regular meetings that we may
> want to arrange them, but let's see what topics are added (to the etherpad
> above), then we can poll for a suitable day/time when there's enough
> content?
>
> I think in many cases ML discussion combined with IRC will be enough, but
> I'm also happy to arrange a regular monthly meeting if folks feel that will
> be worthwhile.
>
> >The PTG deployment cross-team collaboration was really awesome. I
> can’t
> >wait to see what this team is able to do together! Very happy to be a
> part
> >of this effort!
>
> Agreed, I'm really happy to see these first-steps to more effective
> collaboration, let's keep it going! :)
>
> Thanks,
>
> Steve
>


Re: [openstack-dev] [TripleO][Containers] Creating launchpad bugs and correct tags

2017-03-02 Thread Flavio Percoco

On 02/03/17 10:57 +, Dougal Matthews wrote:

On 2 March 2017 at 10:40, Flavio Percoco  wrote:


Greetings,

Just wanted to give a heads up that we're tagging all the containers
related
bugs with the... guess what?... containers tag. If you find an issue with
one of
the containers jobs or running tripleo on containers, please, file a bug
and tag
it accordingly.



It might be worth adding it to the bug-tagging policy.
http://specs.openstack.org/openstack/tripleo-specs/specs/policy/bug-tagging.html#tags


Will do, thanks!
Flavio


--
@flaper87
Flavio Percoco




Re: [openstack-dev] [TripleO][Containers] Creating launchpad bugs and correct tags

2017-03-02 Thread Dougal Matthews
On 2 March 2017 at 10:40, Flavio Percoco  wrote:

> Greetings,
>
> Just wanted to give a heads up that we're tagging all the containers
> related
> bugs with the... guess what?... containers tag. If you find an issue with
> one of
> the containers jobs or running tripleo on containers, please, file a bug
> and tag
> it accordingly.
>

It might be worth adding it to the bug-tagging policy.
http://specs.openstack.org/openstack/tripleo-specs/specs/policy/bug-tagging.html#tags


>
> https://bugs.launchpad.net/tripleo/+bugs?field.tag=containers
>
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>


[openstack-dev] [TripleO][Containers] Creating launchpad bugs and correct tags

2017-03-02 Thread Flavio Percoco

Greetings,

Just wanted to give a heads up that we're tagging all the containers related
bugs with the... guess what?... containers tag. If you find an issue with one of
the containers jobs or running tripleo on containers, please, file a bug and tag
it accordingly.

https://bugs.launchpad.net/tripleo/+bugs?field.tag=containers

Flavio

--
@flaper87
Flavio Percoco




Re: [openstack-dev] [ironic] state of the stable/mitaka branches

2017-03-02 Thread Dmitry Tantsur

On 03/01/2017 08:19 PM, Jay Faulkner wrote:



On Mar 1, 2017, at 11:15 AM, Pavlo Shchelokovskyy 
 wrote:

Greetings ironicers,

I'd like to discuss the state of the gates in ironic and other related projects 
for stable/mitaka branch.

Today while making some test patches to old branches I discovered the following 
problems:

python-ironicclient/stable/mitaka
All unit-test-like jobs are broken due to not handling upper constraints. 
Because of this, a newer python-openstackclient than is actually supported 
gets installed, and it already lacks some modules that python-ironicclient 
tries to import (they were moved to osc-lib).
I've proposed a patch that copies the current way of dealing with upper 
constraints in tox envs [0]; gates are passing.

ironic/stable/mitaka
While not actually being gated on, the virtualbmc+ipmitool drivers are 
broken. The reason is again related to upper constraints: an old version of 
pyghmi (from the mitaka upper constraints) is installed together with the 
most recent virtualbmc (which is not in upper constraints), and those 
versions are incompatible.
This raises the question of whether we should propose virtualbmc for upper 
constraints too, to avoid such problems in the future.
Meanwhile, a quick fix would be to hard-code the supported virtualbmc 
version in ironic's devstack plugin for the mitaka release.
Although not strictly supported for the Mitaka release, I'd like that 
functionality to be working on the stable/mitaka gates to test for the 
upcoming removal of the *_ssh drivers.

I did not test other projects yet.



I can attest jobs are broken for stable/mitaka on ironic-lib as well — our jobs 
build docs unconditionally, and ironic-lib had no docs in Mitaka.


Oh, fun.

Well, the docs job is easy to exclude. But we seem to have virtualbmc-based 
jobs there. As I already wrote in another message, I'm not even sure they're 
supposed to work...




-
Jay Faulkner
OSIC


With all the above, the question is whether we should really fix the gates 
for the mitaka branch now. According to the OpenStack release page [1], the 
Mitaka release will reach end of life on April 10, 2017.

[0] https://review.openstack.org/#/c/439742/
[1] https://releases.openstack.org/#release-series

Cheers,
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com


Re: [openstack-dev] [ironic] state of the stable/mitaka branches

2017-03-02 Thread Dmitry Tantsur

On 03/01/2017 08:15 PM, Pavlo Shchelokovskyy wrote:

Greetings ironicers,

I'd like to discuss the state of the gates in ironic and other related projects
for stable/mitaka branch.


Hi!

Thanks for raising this. I need to apologize: I haven't been doing a great 
job as a stable liaison recently. I'll try to fix stuff in the coming days.




Today while making some test patches to old branches I discovered the following
problems:

python-ironicclient/stable/mitaka
All unit-test-like jobs are broken due to not handling upper constraints.
Because of this, a newer python-openstackclient than is actually supported
gets installed, and it already lacks some modules that python-ironicclient
tries to import (they were moved to osc-lib).
I've proposed a patch that copies the current way of dealing with upper
constraints in tox envs [0]; gates are passing.


Thanks, I've just approved this patch.
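
For anyone curious what such a fix typically looks like, the usual pattern
(a sketch only — the exact constraints URL and branch parameter vary, and
the merged patch [0] is authoritative) pins pip installs in tox to the
stable branch's upper-constraints file:

```ini
# tox.ini (sketch): honor stable/mitaka upper constraints in all tox envs
[testenv]
install_command =
    pip install -U -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/mitaka} {opts} {packages}
```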



ironic/stable/mitaka
While not actually being gated on, the virtualbmc+ipmitool drivers are broken.


This was not a supported combination back then, so I'm not sure it's a good 
time to start right now.



The reason is again related to upper constraints: an old version of pyghmi
(from the mitaka upper constraints) is installed together with the most
recent virtualbmc (which is not in upper constraints), and those versions
are incompatible.
This raises the question of whether we should propose virtualbmc for upper
constraints too, to avoid such problems in the future.
Meanwhile, a quick fix would be to hard-code the supported virtualbmc
version in ironic's devstack plugin for the mitaka release.
Although not strictly supported for the Mitaka release, I'd like that
functionality to be working on the stable/mitaka gates to test for the
upcoming removal of the *_ssh drivers.


There were important changes in pyghmi that made virtualbmc possible at 
all. I don't think any virtualbmc version can work with the old pyghmi.
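
A hard-coded pin in the devstack plugin could look roughly like this
(illustrative only — the version number here is hypothetical and would need
to be whichever release is actually compatible with mitaka's pyghmi):

```bash
# devstack plugin sketch (hypothetical pin): keep virtualbmc compatible
# with the pyghmi version from stable/mitaka upper constraints.
pip_install "virtualbmc<1.0"
```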


I'm not sure why removal of the *_ssh drivers from master should 
necessarily break stable/mitaka, where those drivers are present. Could you 
elaborate?




I did not test other projects yet.

With all the above, the question is whether we should really fix the gates
for the mitaka branch now. According to the OpenStack release page [1], the
Mitaka release will reach end of life on April 10, 2017.


I'd prefer we fix them, I'll look into the problems raised in this thread.



[0] https://review.openstack.org/#/c/439742/
[1] https://releases.openstack.org/#release-series

Cheers,
Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com 




Re: [openstack-dev] [docs][release][ptl] Adding docs to the release schedule

2017-03-02 Thread Alexandra Settle


From: John Dickinson 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, March 1, 2017 at 11:50 PM
To: OpenStack Development Mailing List 
Cc: "openstack-d...@lists.openstack.org" 
Subject: Re: [openstack-dev] [docs][release][ptl] Adding docs to the release 
schedule


On 1 Mar 2017, at 10:07, Alexandra Settle wrote:

On 3/1/17, 5:58 PM, "John Dickinson"  wrote:



On 1 Mar 2017, at 9:52, Alexandra Settle wrote:

> Hi everyone,
>
> I would like to propose that we introduce a “Review documentation” period on 
> the release schedule.
>
> We would formulate it as a deadline, so that it fits in the schedule and 
> making it coincide with the RC1 deadline.
>
> For projects that are not following the milestones, we would translate this 
> new inclusion literally, so if you would like your project to be documented 
> at docs.o.o, then doc must be introduced and reviewed one month before the 
> branch is cut.

Which docs are these? There are several different sets of docs that are hosted 
on docs.o.o that are managed within a project repo. Are you saying those won't 
get pushed to
docs.o.o if they are patched within a month of the cycle release?

The only sets of docs that are published on the docs.o.o site that are managed 
in project-specific repos is the project-specific installation guides. That 
management is entirely up to the team themselves, but I would like to push for 
the integration of a “documentation review” period to ensure that those teams 
are reviewing their docs in their own tree.

This is a preferential suggestion, not a demand. I cannot make you review your 
documentation at any given period.

The ‘month before’ that I refer to would be for introduction of documentation 
and a review period. I will not stop any documentation being pushed to the repo 
unless, of course, it is untested and breaks the installation process.

There's the dev docs, the install guide, and the api reference. Each of these 
are published at docs.o.o, and each have elements that need to be up-to-date 
with a release.

>
> In the last week since we released Ocata, it has become increasingly apparent 
> that the documentation was not updated from the development side. We were not 
> aware of a lot of new enhancements, features, or major bug fixes for certain 
> projects. This means we have released with incorrect/out-of-date 
> documentation. This is not only an unfortunately bad reflection on our team, 
> but on the project teams themselves.
>
> The new inclusion to the schedule may seem unnecessary, but a lot of people 
> rely on this and the PTL drives milestones from this schedule.
>
> From our side, I endeavor to ensure our release managers are working harder 
> to ping and remind doc liaisons and PTLs to ensure the documentation is 
> appropriately updated and working to ensure this does not happen in the 
> future.

Overall, I really like the general concept here. It's very important to have 
good docs. Good docs start with the patch, and we should be encouraging the 
idea of "patch must have both tests and docs before landing".

I’m glad to hear you think so :) this is entirely my thought process.

On a personal note, though, I think I'll find this pretty tough. First, it's 
really hard for me to define when docs are "done", so it's hard to know that 
the docs are "right" at the time of release. Second, docs are built and 
published at each commit, so updating the docs "later, in a follow-on patch" is 
a simple thing to hope for and gives fast feedback, even after a release. (Of 
course the challenge is actually doing the patch later--see my previous 
paragraph.)

So, unfortunately, I can give you no promise this was ever intended to be an 
easy inclusion. But in fairness, this is something teams should have already 
been doing.

However, as a PTL – you already have enough on your plate. We recommend a docs 
liaison that is not the PTL so that the individual is able to dedicate time to 
reviewing the documentation to the best of their ability. The docs being “done” 
= all new features that have a user impact are documented, and “right” = the 
user is able to install $project without major incident.

However, to reiterate my point before – we cannot force any team to do 
anything, but we would like to start actively encouraging the project teams to 
start seeing documentation as an important part of the release process, just as 
they would anything else.

“Treat docs like code” means so much more than just having a contribution 
process that is the same, it means treating the documentation with the same 
importance you would your code.

It all comes down to the user, and if the user cannot install the $project, 
then what do we have?

>
> Thanks,
>
> Alex



Re: [openstack-dev] [OpenStack-docs] [docs][release][ptl] Adding docs to the release schedule

2017-03-02 Thread Alexandra Settle
FYI, there are a few bugs for the Configuration Reference noting that 
options for some services require updating. I've gone through the doc 
and created additional bugs, and included the relevant PTL and docs liaison.


Thank you, Darren! I will review this morning :(



Re: [openstack-dev] [deployment][TripleO][kolla][ansible][fuel] Next steps for cross project collaboration

2017-03-02 Thread Steven Hardy
On Thu, Mar 02, 2017 at 05:06:34AM +, Brandon B. Jozsa wrote:
>+1 for the monthly meetings and a long standing, cross-team, collaborative
>etherpad.

Yes the etherpad is a good idea, thanks!

I guess we'll want one per-release, so I created one for Pike here:

https://etherpad.openstack.org/p/deployment-pike

Everyone please feel free to add relevant content/links there, thanks!

I also went ahead and created the WG wiki page:

https://wiki.openstack.org/wiki/Deployment

Which is linked from:

https://wiki.openstack.org/wiki/Category:Working_Groups

I referenced the agreed [deployment] openstack-dev tag, and the new IRC
channel which Steve set up (thanks) #openstack-deployment

Again, please feel free to edit if I missed anything.

Can anyone wanting to help with organizing (e.g. chairing meetings if we
have them, proactively seeking cross-project things to discuss, and
helping with sessions when we meet f2f next time) please add your name and
email to the Deployment Wiki page.

>It’s challenging for some projects who already collaborate heavily with
>communities outside of OpenStack to take on additional heavy meeting
>cycles.

Yeah I think there's enough interest in semi-regular meetings that we may
want to arrange them, but let's see what topics are added (to the etherpad
above), then we can poll for a suitable day/time when there's enough
content?

I think in many cases ML discussion combined with IRC will be enough, but
I'm also happy to arrange a regular monthly meeting if folks feel that will
be worthwhile.

>The PTG deployment cross-team collaboration was really awesome. I can’t
>wait to see what this team is able to do together! Very happy to be a part
>of this effort!

Agreed, I'm really happy to see these first-steps to more effective
collaboration, lets keep it going! :)

Thanks,

Steve



Re: [openstack-dev] [nova][docs] Is anyone interested in being the docs liaison for Nova?

2017-03-02 Thread Alexandra Settle
Hi!

Anyone who is happy to help out with the liaison role, I am happy to work 
alongside them and ensure they are pointed in the right direction.

As Matt said, the position at least means attending docs meetings, helping to 
review docs patches that are related to nova, helping to alert the docs team of 
big changes coming in a release that will impact the install guide, etc.

We don’t need someone to help correct our writing, we need a technical ‘expert’ 
:)

Alex

From: Zhenyu Zheng 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, March 2, 2017 at 1:22 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [nova][docs] Is anyone interested in being the 
docs liaison for Nova?

Hi,

I'm not a native English speaker but I would like to have a try if possible :)

On Wed, Mar 1, 2017 at 11:45 PM, Matt Riedemann 
> wrote:
There is a need for a liaison from Nova for the docs team to help with 
compute-specific docs in the install guide and various manuals.

For example, we documented placement and cells v2 in the nova devref in Ocata 
but instructions on those aren't in the install guide, so the docs team is 
adding that here [1].

I'm not entirely sure what the docs liaison role consists of, but I assume it 
at least means attending docs meetings, helping to review docs patches that are 
related to nova, helping to alert the docs team of big changes coming in a 
release that will impact the install guide, etc.

From my point of view, I've historically pushed nova developers to be 
documenting new features within the nova devref since it was "closer to home" 
and could be tied to landing said feature in the nova tree, so there was more 
oversight on the docs actually happening *somewhere* rather than a promise to 
work them in the non-nova manuals, which a lot of the time was lip service and 
didn't actually happen once the feature was in. But there is still the need for 
the install guide as the first step to deploying nova so we need to balance 
both things.

If no one else steps up for the docs liaison role, by default it lands on me, 
so I'd appreciate any help here.

[1] https://review.openstack.org/#/c/438328/

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [kolla][infra] does someone cares about Jenkins? I stopped.

2017-03-02 Thread Steven Dake (stdake)
Marcin,

You can submit a bug for issue #1 you raised and fix it yourself or another 
community member can fix it after a bug is filed.

Regarding issue #2, FWIW, I agree local mirroring is the answer.  Kolla needs 
to locally mirror several repositories in OpenStack Infra so the random 
failures we see in the Kolla gate cease to occur.  The review below is one of 
many that needs to be created.  The actual acking of the patch (the second +2) 
requires manual intervention from the openstack-infra team to create an AFS 
share.  My understanding is the openstack-infra team does not have enough 
people who understand AFS mirroring to effectively provide AFS mirroring 
services; however, it does provide some AFS mirroring already.  As a result, 
this review needs a rebase and probably different Ubuntu repo IDs to work 
properly.

I’m done attempting to enable mirroring of the repositories Kolla needs in 
openstack-infra. If you want to take over the mirroring work, feel free.  I 
will commit to guiding you on mirror usage in kolla’s gates if you can get the 
mirrors Kolla needs into the infrastructure.

https://review.openstack.org/#/c/349278/

Regards,
-steve

-Original Message-
From: Marcin Juszkiewicz 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, March 2, 2017 at 2:13 AM
To: "openstack-dev@lists.openstack.org" 
Subject: [openstack-dev] [kolla][infra] does someone cares about Jenkins? I 
stopped.

I am working on some improvements for Kolla. Part of that work is
sending patches for review.

Once patch is set for review.openstack.org there is a set of Jenkins
jobs started to make sure that patch does not break already working
code. And this is good thing.

How it is done is not good ;(

1. Kolla output is a nightmare to debug.

There is a --logs-dir option to provide separate logs for each image build,
but it is not used. IMHO it should be, as digging through per-image logs
is much easier.

Several images feel like they try to run the install again and again to get
past what the distribution considers a bug - like "user XY already exists"
errors when Debian/Ubuntu are used as the base distro. This adds several
error messages to check/ignore.


2. Some jobs fail because "sometimes (not always) the gate can't access
some source on the internet".

I spent most of my career building software. Having builds fail for
such reasons was hardly acceptable when a build could take hours, so we
mirrored all sources and used our own mirror(s) as a fallback. On other
systems I used a 10-20GB w3cache to handle that for me (because there was
no way to provide a local mirror of the sources in use).

The OpenStack infrastructure lacks any of this. Using a "recheck" comment in
review to restart Jenkins jobs is not a solution - how many times does a job
have to fail before we are sure it is the patch's fault and not the
infrastructure's?



As a contributor I started to ignore Jenkins tests. Instead I do builds
on several machines to check whether everything works with my patches. If
something does not, I update my patchset.





Re: [openstack-dev] [QA][blazar][ceilometer][congress][intel-nfv-ci-tests][ironic][manila][networking-bgpvpn][networking-fortinet][networking-sfc][neutron][neutron-fwaas][neutron-lbaas][nova-lxd][octa

2017-03-02 Thread Pavlo Shchelokovskyy
Hi Andrea,

I am not sure why the new tempest.scenario.manager has to be developed that
way. May I humbly suggest another path? Roughly, it goes like this:

1) Rename the present class to "OldScenarioTest", set the original name to
point to it (ScenarioTest=OldScenarioTest) (in a single commit)

2) Create a new class NewScenarioTest (initially a copy of the old one?);
all development/refactoring/rewriting happens there, and no new features
are merged into OldScenarioTest

3) add a tempest config option "USE_NEW_SCENARIO_MANAGER", False by default

4) depending on the value of USE_NEW_SCENARIO_MANAGER, set ScenarioTest to
point to OldScenarioTest or NewScenarioTest

5) add a gate job(s) to tempest that forces USE_NEW_SCENARIO_MANAGER to
True to test the new code, non-voting from the start, gate on it later
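Steps 1)-4) above can be sketched in a few lines of Python (the class names
and the USE_NEW_SCENARIO_MANAGER option are taken from this proposal, not
from actual tempest code, and the real option would of course be read from
tempest.conf):

```python
# Hypothetical sketch of the proposed aliasing; not actual tempest code.

class OldScenarioTest:
    """The present scenario manager, frozen except for bug fixes (step 1)."""


class NewScenarioTest:
    """The refactored manager; all new development happens here (step 2)."""


# Step 3: a config option, False by default (would come from tempest.conf).
USE_NEW_SCENARIO_MANAGER = False

# Step 4: the public name conditionally points at the old or new class, so
# existing `from tempest.scenario.manager import ScenarioTest` consumers
# keep working unchanged.
ScenarioTest = NewScenarioTest if USE_NEW_SCENARIO_MANAGER else OldScenarioTest
```

With the default in place, plugins keep getting the old behaviour; a gate
job that flips the option (step 5) exercises the new class without any
plugin-side changes.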

This way, projects using tempest plugins and other tempest consumers
will not be affected at all. Later, each project can set
USE_NEW_SCENARIO_MANAGER to True in its own jobs if it wishes to test the
new manager, or switch to it once it is ready. Eventually, when the new one
is stable, the config option might be removed and the new class renamed to
the name of the original class.

The only downside of such an approach I see might be a (small?) explosion in
the number of jobs in the tempest project (running on both the new and old
manager), but I'd think that, at least at the beginning of this refactoring
effort, the tempest team would like to keep the number of jobs testing the
new manager limited anyway.

I might be completely wrong with this advice, as I presume the tempest team
has put a lot of thought into this problem already. But still, could you
consider such an approach?

Cheers,

Dr. Pavlo Shchelokovskyy
Senior Software Engineer
Mirantis Inc
www.mirantis.com

On Mon, Feb 27, 2017 at 1:34 PM, Andrea Frittoli 
wrote:

> Hello folks,
>
> TL;DR: if today you import manager.py from tempest.scenario, please
> maintain a copy of [0] in tree until further notice.
>
> Full message:
> --
>
> One of the priorities for the QA team in the Pike cycle is to refactor
> scenario tests to a sane code base [1].
>
> As they are now, changes to scenario tests are difficult to develop and
> review, and failures in those tests are hard to debug, which is in many
> directions far away from where we need to be.
>
> The issue we face is that, even though tempest.scenario.manager is not
> advertised as a stable interface in Tempest, many project use it today for
> convenience in writing their own tests. We don't know about dependencies
> outside of the OpenStack ecosystem, but we want to try to make this
> refactor a smooth experience for our uses in OpenStack, and avoid painful
> gate breakages as much as possible.
>
> The process we're proposing is as follows:
> - hold a copy of [0] in tree - in most cases you won't even have to change
> your imports as a lot of projects use tempest/scenario in their code base.
> You may decide to include the bare minimum you need from that module
> instead of all of it. It's a bit more work to make the patch, but less
> un-used code lying around afterwards.
> - the QA team will refactor scenario tests, and make more interfaces
> stable (test.py, credential providers). We won't advertise every single
> change in this process, only when we start and once we're done.
> - you may decide to discard your local copy of manager.py and consume
> Tempest stable interfaces directly. We will help with any question you may
> have on the process and on Tempest interfaces.
>
> Repositories affected by the refactor are (based on [2]):
>
> blazar,ceilometer,congress,intel-nfv-ci-tests,ironic,manila,networking-bgpvpn,networking-fortinet,networking-sfc,neutron-fwaas,neutron-lbaas,nova-lxd,octavia,sahara-tests,tap-as-a-service,tempest-horizon,vmware-nsx,watcher
>
> If we don't hear from a team at all in the next two weeks, we will assume
> that the corresponding Tempest plugin / bunch of tests is not in use
> anymore, and ignore it. If you use tempest.scenario.manager.py today and
> your repo is not on the list, please let us know!
>
> I'm happy to propose an initial patch for any team that may require it -
> just ping me on IRC (andreaf).
> I won't have the bandwidth myself to babysit each patch through review and
> gate though.
>
> Thank you for your cooperation and patience!
>
> Andrea
>
> [0] http://git.openstack.org/cgit/openstack/tempest/tree/tempest/scenario/manager.py
> [1] https://etherpad.openstack.org/p/pike-qa-priorities
> [2] https://github.com/andreafrittoli/tempest_stable_interfaces/blob/master/data/get_deps.sh
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
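For plugin maintainers, the "hold a copy of manager.py in tree" step from the
message quoted above could amount to a small import shim that prefers a
vendored copy and falls back to tempest's (non-stable) module; all module
names here are illustrative, not real plugin paths:

```python
def load_scenario_base():
    """Return the scenario base class, preferring an in-tree vendored copy.

    Hypothetical helper: `myplugin.vendored_manager` stands in for a local
    copy of tempest/scenario/manager.py, as suggested in the thread.
    """
    try:
        from myplugin.vendored_manager import ScenarioTest
    except ImportError:
        # Fall back to tempest's interface, which is not advertised as
        # stable and may change during the refactor.
        from tempest.scenario.manager import ScenarioTest
    return ScenarioTest
```

Once the refactor lands and stable interfaces are advertised, the vendored
module (and the shim) can simply be deleted.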

[openstack-dev] [kolla][infra] does someone cares about Jenkins? I stopped.

2017-03-02 Thread Marcin Juszkiewicz
I am working on some improvements for Kolla. Part of that work is
sending patches for review.

Once patch is set for review.openstack.org there is a set of Jenkins
jobs started to make sure that patch does not break already working
code. And this is good thing.

How it is done is not good ;(

1. Kolla output is a nightmare to debug.

There is a --logs-dir option to provide separate logs for each image build,
but it is not used. IMHO it should be, as digging through per-image logs
is much easier.

Several images feel like they try to run the install again and again to get
past what the distribution considers a bug - like "user XY already exists"
errors when Debian/Ubuntu are used as the base distro. This adds several
error messages to check/ignore.


2. Some jobs fail because "sometimes (not always) the gate can't access
some source on the internet".

I spent most of my career building software. Having builds fail for
such reasons was hardly acceptable when a build could take hours, so we
mirrored all sources and used our own mirror(s) as a fallback. On other
systems I used a 10-20GB w3cache to handle that for me (because there was
no way to provide a local mirror of the sources in use).

The OpenStack infrastructure lacks any of this. Using a "recheck" comment in
review to restart Jenkins jobs is not a solution - how many times does a job
have to fail before we are sure it is the patch's fault and not the
infrastructure's?
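The mirror-with-fallback setup described above can be sketched in a few lines
of Python (the function and the mirror URLs in the comment are illustrative,
not part of any OpenStack tooling):

```python
import urllib.request


def fetch_with_fallback(path, bases):
    """Try each mirror base URL in order; return (base, data) on first success."""
    last_exc = None
    for base in bases:
        try:
            with urllib.request.urlopen(f"{base}/{path}") as resp:
                return base, resp.read()
        except OSError as exc:  # DNS failure, connection refused, 404, ...
            last_exc = exc
    raise RuntimeError(f"all mirrors failed for {path}") from last_exc


# Usage: local mirror first, upstream only as a fallback, e.g.
# fetch_with_fallback("sources/foo-1.0.tar.gz",
#                     ["http://mirror.example.local",
#                      "https://upstream.example.org"])
```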



As a contributor I started to ignore Jenkins tests. Instead I do builds
on several machines to check whether everything works with my patches. If
something does not, I update my patchset.



Re: [openstack-dev] [tricircle]Nominating Victor Morales as Tricircle Core

2017-03-02 Thread cr_...@126.com
+1

Thanks for his great contribution.

Best Regards
Ronghui Cao


Re: [openstack-dev] [all] PBR 2.0.0 release *may* cause gate failures

2017-03-02 Thread Tony Breeds
On Thu, Mar 02, 2017 at 06:24:32PM +1100, Tony Breeds wrote:

> I know I'm talking to myself .

Still.

> A project on $branch without constraints is going to get pbr 2.0.0 and then 
> hit
> version conflicts with projects that have pbr <2.0.0 caps *anyway* regardless
> of what hacking says right?
> 
> So removing the pbr cap in hacking doesn't make things worse for stable
> branches but it does make things better for master?

I think the 0.10.3 hacking release is the best way forward, but in the meantime
a quick project-by-project fix is to just update hacking.

See Ian's patch at: https://review.openstack.org/#/c/440010/

I'm a little unwilling to create all the updates, but I may look at that after
dinner.
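The conflict being described — an uncapped project pulling in the newest pbr
while a co-installed requirement still caps it below 2.0.0 — can be
illustrated with a toy resolver; version numbers are tuples purely for
illustration:

```python
# A project without constraints simply picks the newest pbr release...
available = [(1, 10, 0), (2, 0, 0)]
picked = max(available)               # -> (2, 0, 0)

# ...which then violates a co-installed project's `pbr<2.0.0` cap,
# producing exactly the version conflict described above.
cap_upper_bound = (2, 0, 0)
conflict = not (picked < cap_upper_bound)
print(conflict)  # True -> pip reports a version conflict
```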

Yours Tony.

