Re: [openstack-dev] [tripleo] critical situation with CI / upgrade jobs

2017-08-16 Thread Steve Baker
On Thu, Aug 17, 2017 at 10:47 AM, Emilien Macchi  wrote:

>
> > Problem #3: from Ocata to Pike: all container images are
> > uploaded/specified, even for services not deployed
> > https://bugs.launchpad.net/tripleo/+bug/1710992
> > The CI jobs are timing out during the upgrade process because
> > downloading + uploading _all_ containers in local cache takes more
> > than 20 minutes.
> > So this is where we are now: upgrade jobs time out on that. Steve Baker
> > is currently looking at it but we'll probably offer some help.
>
> Steve is still working on it: https://review.openstack.org/#/c/448328/
> Steve, if you need any help (reviewing or coding) - please let us
> know, as we consider this thing important to have and probably good to
> have in Pike.
>

I have a couple of changes up now, one to capture the relationship between
images and services[1], and another to add an argument to the prepare
command to filter the image list based on which services are containerised
[2]. Once these land, all the calls to prepare in CI can be modified to
also specify these heat environment files, and this will reduce uploads to
only the images required.

[1] https://review.openstack.org/#/c/448328/
[2] https://review.openstack.org/#/c/494367/
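For illustration, here's a hedged sketch of what a filtered prepare call
could look like once both changes land. The option names and file names
below are assumptions based on [1][2], not the merged interface, so check
the final patches before relying on this:

    # Hypothetical invocation -- option and file names are assumptions:
    openstack overcloud container image prepare \
      --namespace tripleo \
      -e environments/docker.yaml \
      -e environments/services-docker/ironic.yaml \
      --output-images-file overcloud_containers.yaml

Only images for services referenced by the given environment files should
then end up in overcloud_containers.yaml, shrinking the upload set.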
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] critical situation with CI / upgrade jobs

2017-08-16 Thread Emilien Macchi
On Wed, Aug 16, 2017 at 3:47 PM, Emilien Macchi  wrote:
> Here's an update on the situation.
>
> On Tue, Aug 15, 2017 at 6:33 PM, Emilien Macchi  wrote:
>> Problem #1: Upgrade jobs timeout from Newton to Ocata
>> https://bugs.launchpad.net/tripleo/+bug/1702955
> [...]
>
> - revert distgit patch in RDO: https://review.rdoproject.org/r/8575
> - push https://review.openstack.org/#/c/494334/ as a temporary solution
> - we need https://review.openstack.org/#/c/489874/ landed ASAP.
> - once https://review.openstack.org/#/c/489874/ is landed, we need to
> revert https://review.openstack.org/#/c/494334 ASAP.
>
We still need some help to find out why upgrade jobs time out so much
in stable/ocata.
>
>> Problem #2: from Ocata to Pike (containerized) missing container upload step
>> https://bugs.launchpad.net/tripleo/+bug/1710938
>> Wes has a patch (thanks!) that is currently in the gate:
>> https://review.openstack.org/#/c/493972
> [...]
>
> The patch worked and helped! We've got a successful job running today:
> http://logs.openstack.org/00/461000/32/check/gate-tripleo-ci-centos-7-containers-multinode-upgrades-nv/2f13627/console.html#_2017-08-16_01_31_32_009061
>
> We're now pushing to the next step: testing the upgrade with pingtest.
> See https://review.openstack.org/#/c/494268/ and the Depends-On: on
> https://review.openstack.org/#/c/461000/.
>
> If pingtest proves to work, it would be good news and would prove that
> we have a basic workflow in place on which we can iterate.

Pingtest doesn't work:
http://logs.openstack.org/00/461000/37/check/gate-tripleo-ci-centos-7-containers-multinode-upgrades-nv/1beac0e/logs/undercloud/home/jenkins/overcloud_validate.log.txt.gz#_2017-08-17_01_03_09

We need to investigate and find out why.
If nobody looks at it before then, I'll take a look tomorrow.

> The next iterations afterward would be to work on the 4 scenarios that
> are also going to run upgrades from Ocata to Pike (001 to 004).
> For that, we'll need Problems #1 and #2 resolved before we make any
> progress here, so we don't hit the same issues as before.
>
>> Problem #3: from Ocata to Pike: all container images are
>> uploaded/specified, even for services not deployed
>> https://bugs.launchpad.net/tripleo/+bug/1710992
>> The CI jobs are timing out during the upgrade process because
>> downloading + uploading _all_ containers in local cache takes more
>> than 20 minutes.
>> So this is where we are now: upgrade jobs time out on that. Steve Baker
>> is currently looking at it but we'll probably offer some help.
>
> Steve is still working on it: https://review.openstack.org/#/c/448328/
> Steve, if you need any help (reviewing or coding) - please let us
> know, as we consider this thing important to have and probably good to
> have in Pike.
>
> Thanks,
> --
> Emilien Macchi



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Mogan]Tasks update of Mogan Project

2017-08-16 Thread hao wang
Hi,

We are glad to present this week's tasks update of Mogan.

We have finished a lot of wonderful work in the Pike release, and Mogan
will have her first release!

See the details below:

Essential Priorities
==

1. Node aggregates (liudong, zhangyang, zhenguo)

blueprint: https://blueprints.launchpad.net/mogan/+spec/node-aggregate

spec: https://review.openstack.org/#/c/470927/

code:

Adds aggregates DB model and API https://review.openstack.org/#/c/482786/ merged

Add aggregate object https://review.openstack.org/#/c/484630/  merged

Add aggregate API https://review.openstack.org/#/c/484690/9 merged

Retrieve availability zone from aggregate
https://review.openstack.org/#/c/485506/ merged

Add node list support https://review.openstack.org/#/c/486016/ merged

Add aggregate nodes API https://review.openstack.org/#/c/487284/ merged

Add aggregates tests https://review.openstack.org/#/c/488296/ merged

2. Server groups and scheduler hints(liudong, liusheng)

blueprints:
https://blueprints.launchpad.net/mogan/+spec/server-group-api-extension
https://blueprints.launchpad.net/mogan/+spec/support-schedule-hints

spec: https://review.openstack.org/#/c/489541/

code:

scheduler hints: https://review.openstack.org/#/c/463534/

server groups: https://review.openstack.org/488298

https://review.openstack.org/488909

https://review.openstack.org/489850

https://review.openstack.org/#/c/490328/


3. Adopt servers (wanghao, litao)

blueprint: https://blueprints.launchpad.net/mogan/+spec/manage-existing-bms

spec: https://review.openstack.org/#/c/459967/ merged

code: https://review.openstack.org/#/c/479660/

  https://review.openstack.org/#/c/481544/


4. Valence integration (zhenguo, shaohe, luyao, Xinran) - moved to next cycle.

blueprint: https://blueprints.launchpad.net/mogan/+spec/valence-integration

spec: 
https://review.openstack.org/#/c/441790/3/specs/pike/approved/valence-integration.rst

No updates


5. Support boot-from-volume in Mogan(wanghao, zhenguo)

blueprint: 
https://blueprints.launchpad.net/mogan/+spec/support-boot-from-volume-in-mogan

code: https://review.openstack.org/#/c/489455/


Optional Priorities
==

1. Support running the API server under uWSGI

https://review.openstack.org/#/c/482057/ merged


2. Add more test coverage (liusheng, zhenguo)

unit test:

functional test:

tempest (finished):

https://review.openstack.org/#/c/474835

https://review.openstack.org/#/c/474829

https://review.openstack.org/#/c/474498

https://review.openstack.org/#/c/473760

https://review.openstack.org/#/c/473196

https://review.openstack.org/#/c/471246


3. Documentation (zhenguo, liusheng)

Add states and transitions diagram https://review.openstack.org/471293

Add sample config and policy files to mogan docs
https://review.openstack.org/471637

Add documentation about testing https://review.openstack.org/#/c/472028


4. Add quota for more resources (wanghao)

blueprint: https://blueprints.launchpad.net/mogan/+spec/quota-support

code: https://review.openstack.org/#/c/485461/ for keypairs merged

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][elections] Project Team Lead Election Conclusion and Results

2017-08-16 Thread Kendall Nelson
Hello Everyone!

Thank you to the electorate, to all those who voted, and to all
candidates who put their name forward for Project Team Lead (PTL) in
this election. A healthy, open process breeds trust in our
decision-making capability; thank you to all those who make this
process possible.

Now for the results of the PTL election process, please join me in
extending congratulations to the following PTLs:
 * Barbican  : Dave McCowan
 * Chef OpenStack: Samuel Cassiba
 * Cinder: Jay Bryant
 * Cloudkitty: Christophe Sauthier
 * Congress  : Eric Kao
 * Designate : Graham Hayes
 * Documentation : Petr Kovar
 * Dragonflow: Omer Anson
 * Ec2 Api   : Andrey Pavlov
 * Freezer   : Saad Zaher
 * Glance: Brian Rosmaita
 * Heat  : Rico Lin
 * Horizon   : Ying Zuo
 * I18n  : Frank Kloeker
 * Infrastructure: Clark Boylan
 * Ironic: Dmitry Tantsur
 * Karbor: Chenying Chenying
 * Keystone  : Lance Bragstad
 * Kolla : Michal Jastrzebski
 * Kuryr : Antoni Segura Puimedon
 * Magnum: Spyros Trigazis
 * Manila: Ben Swartzlander
 * Mistral   : Renat Renat
 * Monasca   : Witold Bedyk
 * Murano: Zhurong Zhurong
 * Neutron   : Kevin Benton
 * Nova  : Matt Riedemann
 * Octavia   : Michael Johnson
 * OpenStackAnsible  : Jean-Philippe Evrard
 * OpenStackClient   : Dean Troyer
 * OpenStack Charms  : James Page
 * Oslo  : ChangBo Guo
 * Packaging Rpm : Thomas Bechtold
 * Puppet OpenStack  : Mohammed Naser
 * Quality Assurance : Andrea Frittoli
 * Rally : Andrey Kurilin
 * RefStack  : Chris Hoge
 * Release Management: Sean McGinnis
 * Requirements  : Matthew Thode
 * Sahara: Telles Mota Vidal Nóbrega
 * Searchlight   : Steve McLellan
 * Security  : Luke Hinds
 * Senlin: RUIJIE YUAN
 * Shade : Monty Taylor
 * Solum : Zhurong Zhurong
 * Stable Branch Maintenance : Tony Breeds
 * Storlets  : Kota Tsuyuzaki
 * Swift : John Dickinson
 * Tacker: Gongysh Gongysh
 * Telemetry : Gordon Chung
 * Tricircle : Zhiyuan Cai
 * Tripleo   : Alex Schultz
 * Trove : Amrith Kumar
 * Vitrage   : Ifat Afek
 * Watcher   : Alexander Chadin
 * Winstackers   : Claudiu Belu
 * Zaqar : Feilong Wang
 * Zun   : Hongbin Lu

Elections:
* Documentation:
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_d5d9fb5a2354e2a0
* Ironic: http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_0fb06bb4edfd3d08
Election process details and results are also available here:
https://governance.openstack.org/election/

Thank you to all involved in the PTL election process,

- Kendall Nelson(diablo_rojo)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][keystone] keystoneauth1 and keystonemiddleware setting

2017-08-16 Thread Morgan Fainberg
On Aug 16, 2017 11:31, "Brant Knudson"  wrote:



On Mon, Aug 14, 2017 at 2:48 AM, Chen CH Ji  wrote:

> In fixing bug 1704798, there's a proposed patch
> https://review.openstack.org/#/c/485121/7
> but we are stuck on the http_connection_timeout and timeout values in the
> keystoneauth1 and keystonemiddleware repos.
>
> Basically, we want to reuse the keystone_authtoken section in nova.conf to
> avoid creating another section, so we can
> use the following to create a session:
>
> sess = ks_loading.load_session_from_conf_options(CONF,
> 'keystone_authtoken', auth=context.get_auth_plugin())
>
> Any comments, or do we have to create another section and configure it
> anyway? Thanks.
>
>
> Best Regards!
>
> Kevin (Chen) Ji 纪 晨
>
> Engineer, zVM Development, CSTL
> Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com
> Phone: +86-10-82451493
> Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
> Beijing 100193, PRC
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
I think reusing the keystone_authtoken config is a bad idea.
keystone_authtoken contains the configuration for the auth_token middleware,
so that is what we keystone developers expect it to be used for. A
deployment may have different security needs for the auth_token middleware
vs. quota checking, in which case they'll need different users or projects
for the two. Even if we don't need that now, we might in the future, and
it would create a lot of rearchitecting work later.

If a deployer wants to use the same authentication for both auth_token
middleware and the proxy, they can create a new section with the config and
point both keystone_authtoken and quota checking to it (by setting the
auth_section).

-- 
- Brant

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



What Brant said. Please do not lean on the options from keystone middleware
for anything outside of keystone middleware. We have had to change these
options before, and those changes should only ever impact the keystone
middleware code. If you re-use those options for something in Nova, it will
likely break and need to be split into its own option block in the future.

Please create a new option block (even if a deployer uses the same
user/password) rather than using the authtoken config section for anything
outside of authtoken.
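To make the recommendation concrete, here is a minimal sketch of a
dedicated option block built with keystoneauth1's loading helpers. The
group name 'service_query_auth' is illustrative, not an existing option
group:

    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    CONF = cfg.CONF

    # Illustrative group name -- anything except [keystone_authtoken].
    SERVICE_AUTH_GROUP = 'service_query_auth'

    # Register session and auth plugin options under the dedicated group.
    ks_loading.register_session_conf_options(CONF, SERVICE_AUTH_GROUP)
    ks_loading.register_auth_conf_options(CONF, SERVICE_AUTH_GROUP)

    def get_session():
        # Deployers who want to share credentials with [keystone_authtoken]
        # can point both groups at one common section via auth_section.
        auth = ks_loading.load_auth_from_conf_options(
            CONF, SERVICE_AUTH_GROUP)
        return ks_loading.load_session_from_conf_options(
            CONF, SERVICE_AUTH_GROUP, auth=auth)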

--Morgan
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] critical situation with CI / upgrade jobs

2017-08-16 Thread Emilien Macchi
Here's an update on the situation.

On Tue, Aug 15, 2017 at 6:33 PM, Emilien Macchi  wrote:
> Problem #1: Upgrade jobs timeout from Newton to Ocata
> https://bugs.launchpad.net/tripleo/+bug/1702955
[...]

- revert distgit patch in RDO: https://review.rdoproject.org/r/8575
- push https://review.openstack.org/#/c/494334/ as a temporary solution
- we need https://review.openstack.org/#/c/489874/ landed ASAP.
- once https://review.openstack.org/#/c/489874/ is landed, we need to
revert https://review.openstack.org/#/c/494334 ASAP.

We still need some help to find out why upgrade jobs time out so much
in stable/ocata.

> Problem #2: from Ocata to Pike (containerized) missing container upload step
> https://bugs.launchpad.net/tripleo/+bug/1710938
> Wes has a patch (thanks!) that is currently in the gate:
> https://review.openstack.org/#/c/493972
[...]

The patch worked and helped! We've got a successful job running today:
http://logs.openstack.org/00/461000/32/check/gate-tripleo-ci-centos-7-containers-multinode-upgrades-nv/2f13627/console.html#_2017-08-16_01_31_32_009061

We're now pushing to the next step: testing the upgrade with pingtest.
See https://review.openstack.org/#/c/494268/ and the Depends-On: on
https://review.openstack.org/#/c/461000/.

If pingtest proves to work, it would be good news and would prove that
we have a basic workflow in place on which we can iterate.

The next iterations afterward would be to work on the 4 scenarios that
are also going to run upgrades from Ocata to Pike (001 to 004).
For that, we'll need Problems #1 and #2 resolved before we make any
progress here, so we don't hit the same issues as before.

> Problem #3: from Ocata to Pike: all container images are
> uploaded/specified, even for services not deployed
> https://bugs.launchpad.net/tripleo/+bug/1710992
> The CI jobs are timing out during the upgrade process because
> downloading + uploading _all_ containers in local cache takes more
> than 20 minutes.
> So this is where we are now: upgrade jobs time out on that. Steve Baker
> is currently looking at it but we'll probably offer some help.

Steve is still working on it: https://review.openstack.org/#/c/448328/
Steve, if you need any help (reviewing or coding) - please let us
know, as we consider this thing important to have and probably good to
have in Pike.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Multi-region support in shared Keystone service deployment

2017-08-16 Thread Lingxian Kong
Hi, Horizon developers,

In our OpenStack-based public cloud (Catalyst Cloud), Keystone is a shared
identity service across 3 regions. Our customers have been asking for the
ability to select their preferred region when they log in to Horizon,
rather than switching regions each time after login.

Unfortunately, the existing 'AVAILABLE_REGIONS' setting only works in a
multi-Keystone, multi-region environment. So, for backward compatibility
and to avoid potential confusion, my patches[1][2] introduce a new config
option named 'AVAILABLE_SERVICE_REGIONS'. The setting is meant to be
configured by cloud operators, and 'AVAILABLE_REGIONS' will take
precedence over 'AVAILABLE_SERVICE_REGIONS'.

I am sending this email to ask for more feedback, and to ask whether I
need to propose a feature spec before the code is reviewed.

[1]: https://review.openstack.org/#/c/494083/
[2]: https://review.openstack.org/#/c/494059/
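For reference, the existing setting takes (endpoint, name) pairs, one
Keystone endpoint per region. A sketch of how the two settings might sit
side by side in local_settings.py follows; the exact format of the new
option is defined by the patches above, so treat the value shown as an
assumption:

    # local_settings.py -- a sketch, not the authoritative format.

    # Existing: one Keystone endpoint per region; only fits
    # multi-Keystone deployments.
    # AVAILABLE_REGIONS = [
    #     ('https://identity-r1.example.com:5000/v3', 'region-1'),
    #     ('https://identity-r2.example.com:5000/v3', 'region-2'),
    # ]

    # Proposed: one shared Keystone, several service regions offered at
    # login. The list-of-names format is an assumption; see the patches
    # for the real definition.
    AVAILABLE_SERVICE_REGIONS = ['region-1', 'region-2', 'region-3']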

Cheers,
Lingxian Kong (Larry)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] [telemetry] Ceilometer stable/pike branch outlook

2017-08-16 Thread Tony Breeds
On Wed, Aug 16, 2017 at 02:37:50PM -0400, William M Edmonds wrote:
> 
> Julien Danjou  wrote on 08/16/2017 02:13:10 PM:
> > From: Julien Danjou 
> > To: "Eric S Berglund" 
> > Cc: openstack-dev@lists.openstack.org
> > Date: 08/16/2017 02:14 PM
> > Subject: Re: [openstack-dev] [release] [telemetry] Ceilometer
> > stable/pike branch outlook
> >
> > On Wed, Aug 16 2017, Eric S Berglund wrote:
> >
> > Hi Eric,
> >
> > > Is there an outlook for cutting a pike branch for ceilometer?
> > > We currently can't run our 3rd party CI against pike without a pike
> > > release branch and are deciding whether it's worth the time to
> > > implement a workaround.
> >
> > AFAIU it's impossible to cut a branch for our projects and release an
> > rc1 because of the release model we use. The release team does not
> > allow us to do that. We need to release a stable version directly and
> > cut a branch.
> >
> > I guess we'll do that in a couple of weeks, at release time.
> 
> That doesn't fit my understanding of cycle-with-intermediary, which is
> the ceilometer release model per [0]. As I read the release model
> definitions [1], cycle-with-intermediary means that you can have
> intermediate releases *as well*, but you still have to have a cycle-ending
> release in line with the projects using the cycle-with-milestones model.
> 
> Can someone on the release team clarify this for us?

That's correct. The bit you're missing is that cycle-with-intermediary
doesn't have pre-releases (b{1,2,3}, rc{1,2}), so when the ceilometer team
feels the code is in shape for a release, they'll tag that release and
cut a stable/pike branch at the tag point.

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [User-committee] [publiccloud-wg] Reminder meeting PublicCloudWorkingGroup

2017-08-16 Thread Tobias Rydberg
For those of you who couldn't attend today's meeting: read the logs below,
comment on the notes in the agenda, and continue the discussion in the IRC
channel #openstack-publiccloud.

http://eavesdrop.openstack.org/meetings/publiccloud_wg/2017/publiccloud_wg.2017-08-16-14.00.log.html

https://etherpad.openstack.org/p/publiccloud-wg

Regards,
Tobias

> 16 aug. 2017 kl. 11:08 skrev Tobias Rydberg :
> 
> Hi everyone, 
> 
> Don't forget today's meeting of the PublicCloudWorkingGroup.
> 1400 UTC in IRC channel #openstack-meeting-3 
> 
> Etherpad: https://etherpad.openstack.org/p/publiccloud-wg 
> 
> Regards, 
> Tobias Rydberg
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] [telemetry] Ceilometer stable/pike branch outlook

2017-08-16 Thread William M Edmonds

Julien Danjou  wrote on 08/16/2017 02:13:10 PM:
> From: Julien Danjou 
> To: "Eric S Berglund" 
> Cc: openstack-dev@lists.openstack.org
> Date: 08/16/2017 02:14 PM
> Subject: Re: [openstack-dev] [release] [telemetry] Ceilometer
> stable/pike branch outlook
>
> On Wed, Aug 16 2017, Eric S Berglund wrote:
>
> Hi Eric,
>
> > Is there an outlook for cutting a pike branch for ceilometer?
> > We currently can't run our 3rd party CI against pike without a pike
> > release branch and are deciding whether it's worth the time to
> > implement a workaround.
>
> AFAIU it's impossible to cut a branch for our projects and release an
> rc1 because of the release model we use. The release team does not
> allow us to do that. We need to release a stable version directly and
> cut a branch.
>
> I guess we'll do that in a couple of weeks, at release time.

That doesn't fit my understanding of cycle-with-intermediary, which is
the ceilometer release model per [0]. As I read the release model
definitions [1], cycle-with-intermediary means that you can have
intermediate releases *as well*, but you still have to have a cycle-ending
release in line with the projects using the cycle-with-milestones model.

Can someone on the release team clarify this for us?

[0]
https://github.com/openstack/releases/blob/bf890914c1ec5bcd41d70140e80ef8d39df64c86/deliverables/pike/ceilometer.yaml#L3
[1] https://releases.openstack.org/reference/release_models.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][keystone] keystoneauth1 and keystonemiddleware setting

2017-08-16 Thread Brant Knudson
On Mon, Aug 14, 2017 at 2:48 AM, Chen CH Ji  wrote:

> In fixing bug 1704798, there's a proposed patch
> https://review.openstack.org/#/c/485121/7
> but we are stuck on the http_connection_timeout and timeout values in the
> keystoneauth1 and keystonemiddleware repos.
>
> Basically, we want to reuse the keystone_authtoken section in nova.conf to
> avoid creating another section, so we can
> use the following to create a session:
>
> sess = ks_loading.load_session_from_conf_options(CONF,
> 'keystone_authtoken', auth=context.get_auth_plugin())
>
> Any comments, or do we have to create another section and configure it
> anyway? Thanks.
>
>
> Best Regards!
>
> Kevin (Chen) Ji 纪 晨
>
> Engineer, zVM Development, CSTL
> Notes: Chen CH Ji/China/IBM@IBMCN Internet: jiche...@cn.ibm.com
> Phone: +86-10-82451493
> Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
> Beijing 100193, PRC
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
I think reusing the keystone_authtoken config is a bad idea.
keystone_authtoken contains the configuration for the auth_token middleware,
so that is what we keystone developers expect it to be used for. A
deployment may have different security needs for the auth_token middleware
vs. quota checking, in which case they'll need different users or projects
for the two. Even if we don't need that now, we might in the future, and
it would create a lot of rearchitecting work later.

If a deployer wants to use the same authentication for both auth_token
middleware and the proxy, they can create a new section with the config and
point both keystone_authtoken and quota checking to it (by setting the
auth_section).

-- 
- Brant
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release] [telemetry] Ceilometer stable/pike branch outlook

2017-08-16 Thread Julien Danjou
On Wed, Aug 16 2017, Eric S Berglund wrote:

Hi Eric,

> Is there an outlook for cutting a pike branch for ceilometer?
> We currently can't run our 3rd party CI against pike without a pike
> release branch and are deciding whether it's worth the time to
> implement a workaround.

AFAIU it's impossible to cut a branch for our projects and release an rc1
because of the release model we use. The release team does not allow us
to do that. We need to release a stable version directly and cut a
branch.

I guess we'll do that in a couple of weeks, at release time.

-- 
Julien Danjou
// Free Software hacker
// https://julien.danjou.info


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release] [telemetry] Ceilometer stable/pike branch outlook

2017-08-16 Thread Eric S Berglund
 
Hi all,
 
Is there an outlook for cutting a pike branch for ceilometer?
We currently can't run our 3rd party CI against pike without a pike
release branch and are deciding whether it's worth the time to
implement a workaround.
 
Regards,
Eric Berglund
-
E-mail: esber...@us.ibm.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][tc][all] Trove restart - next steps

2017-08-16 Thread Tim Bell

Thanks for the info.

Can you give a summary of the reasons why this was not a viable approach?

Tim

From: Amrith Kumar 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, 15 August 2017 at 23:09
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [trove][tc][all] Trove restart - next steps

Tim,
This is an idea that was discussed at a trove midcycle a long time back (Juno 
midcycle, 2014). It came up briefly in the Kilo midcycle as well but was 
quickly rejected again.
I've added it to the list of topics for discussion at the PTG. If others want 
to add topics to that list, the etherpad is at 
https://etherpad.openstack.org/p/trove-queens-ptg

Thanks!

-amrith


On Tue, Aug 15, 2017 at 12:43 PM, Tim Bell wrote:
One idea I found interesting from the past discussion was that what the
user needs is a database with a connection string.

How feasible is the approach where we are provisioning access to a multi-tenant 
database infrastructure rather than deploying a VM with storage and installing 
a database?

This would make the service delivery (monitoring, backup, upgrades) the
responsibility of the cloud provider rather than the end user. Some
quota/telemetry would be needed to allocate costs to the project.

Tim

From: Amrith Kumar 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
Date: Tuesday, 15 August 2017 at 17:44
To: "OpenStack Development Mailing List (not for usage questions)" 
Subject: [openstack-dev] [trove][tc][all] Trove restart - next steps

Now that we have successfully navigated the Pike release and branched
the tree, I would like to restart the conversation about how to revive
and restart the Trove project.

Feedback from the last go-around on this subject[1] resulted in a
lively discussion which I summarized in [2]. The very quick summary is
this: there is interest in Trove, there is a strong desire to maintain
a migration path, and there is much that remains to be done to get there.

What didn't come out of the email discussion was any concrete and
tangible uptick in the participation in the project, promises
notwithstanding.

There have, however, been some new contributors submitting patches. To
help channel their efforts, and any additional assistance we may receive,
I have created the list of priorities below for the project. These will
also be the subject of discussion at the PTG in Denver.

   - Fix the gate
     - Update currently failing jobs, create xenial-based images
     - Fix gate jobs that have gone stale (non-voting, no one paying
       attention)

   - Bug triage
     - Bugs in Launchpad are really out of date: assignments to people
       who are no longer active, bugs that are really support requests,
       etc.
     - Prioritize fixes for Queens and beyond

   - Get more active reviewers
     - There seems to still be a belief that 'contributing' means
       'fixing bugs'. There is much more value in actually doing
       reviews.
     - Get at least a three-member active core review team by the end
       of the year.

   - Complete Python 3 support
     - Currently not complete, especially on the guest side

   - Community goal: migrate to oslo.policy

   - Anything related to new features

This is clearly an opinionated list, and is open to change but I'd
like to do that based on the Agile 'stand up' meeting rules. You know, the 
chicken and pigs thing :)

So, if you'd like to get on board, offer suggestions to change this
list, and then go on to actually implement those changes, c'mon over.
-amrith



[1] http://openstack.markmail.org/thread/wokk73ecv44ipfjz
[2] http://markmail.org/message/gfqext34xh5y37ir

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposal to require bugs for tech debt

2017-08-16 Thread Alex Schultz
On Wed, Aug 16, 2017 at 8:24 AM, Markus Zoeller
 wrote:
> On 16.08.2017 02:59, Emilien Macchi wrote:
>> On Tue, Aug 15, 2017 at 5:46 PM, Alex Schultz  wrote:
>>> Hey folks,
>>>
>>> I'm proposing that, in order to track tech debt we're adding as part
>>> of development, we create a way to track these items and not approve
>>> them without a bug (and a reference to said bug)[0].  Please take a
>>> moment to review the proposed policy and comment. I would like to
>>> start this for the Queens cycle.
>>
>> I also think we should frequently review the status of these bugs.
>> Maybe unofficially from time to time and officially during milestone-3
>> of each cycle.
>>
>> I like the proposal so far, thanks.
>>
>
> FWIW, for another (in-house) project, I created a page called "technical
> debt" in the normal docs directory of the project. That way, I can add
> the "reminder" with the same commit which introduced the technical debt
> in the code. Similar to what OpenStack already does with the
> release-notes. The list of technical debt items is then always visible
> in the docs, rather than a query in the bug tracker with tags (or
> something like that).
> Just an idea; maybe it's applicable here.
>

Yeah, that would be a good choice if we only had a single project or a
low number of projects under the tripleo umbrella. The problem is we have
many different components which contribute to tech debt, so storing it in
each repo would be hard to track. I proposed bugs because they would be a
single place for reporting. For projects with fewer deliverables, storing
it like release notes is a good option.

Thanks,
-Alex

> --
> Regards, Markus Zoeller (markus_z)
>
>>> A real-world example of where this would be beneficial is the
>>> workaround we had for buggy ssh[1]. This patch was merged 6 months ago
>>> to work around an issue in ssh that was recently fixed. However we
>>> would most likely never have remembered to revert this. It was only
>>> because someone[2] spotted it and mentioned it that it is being
>>> reverted now.
>>>
>>> Thanks,
>>> -Alex
>>>
>>> [0] https://review.openstack.org/#/c/494044/
>>> [1] 
>>> https://review.openstack.org/#/q/6e8e27488da31b3b282fe1ce5e07939b3fa11b2f,n,z
>>> [2] Thanks pabelanger
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Re: [senlin] senlin 4.0.0.0rc1 (pike)

2017-08-16 Thread liu.xuefeng1
Hello everyone,

If you find an issue that could be considered release-critical, please
feel free to file it at: https://bugs.launchpad.net/senlin

Also, you can discuss with the senlin team in the #senlin IRC channel, or
attend senlin's weekly meeting: 1300 UTC every Tuesday in the
#openstack-meeting channel.

Best Regards,

XueFeng


Original Mail

From: 
To: 
Date: 2017-08-11 09:23
Subject: [openstack-dev] [senlin] senlin 4.0.0.0rc1 (pike)






Hello everyone,

A new release candidate for senlin for the end of the Pike
cycle is available!  You can find the source code tarball at:

https://tarballs.openstack.org/senlin/

Unless release-critical issues are found that warrant a release
candidate respin, this candidate will be formally released as the
final Pike release. You are therefore strongly
encouraged to test and validate this tarball!

Alternatively, you can directly test the stable/pike release
branch at:

http://git.openstack.org/cgit/openstack/senlin/log/?h=stable/pike

Release notes for senlin can be found at:

http://docs.openstack.org/releasenotes/senlin/

If you find an issue that could be considered release-critical, please
file it at:

https://bugs.launchpad.net/senlin

and tag it *pike-rc-potential* to bring it to the senlin
release crew's attention.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.messaging] Proposing Andy Smith as an oslo.messaging core reviewer

2017-08-16 Thread Ken Giusti
+1

On Mon, Aug 14, 2017 at 6:59 AM, ChangBo Guo  wrote:

> I propose that we add Andy Smith to the oslo.messaging team.
>
> Andy Smith has been actively contributing to oslo.messaging for a while
> now, both in helping make oslo.messaging better via code contributions
> and by helping with the review load when he can. He's been involved on
> the AMQP 1.0 side for a while. He's really interested in taking
> ownership of the experimental Kafka driver, and it would be great to
> have someone able to drive that.
>
> Please respond with +1/-1
>
> Voting will last 2 weeks and will end at 28th of August.
>
> Cheers,
> --
> ChangBo Guo(gcb)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Ken Giusti  (kgiu...@gmail.com)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Queens PTG: Denver Hotel Information

2017-08-16 Thread Erin Disney
Hey all- 

The PTG is quickly approaching, and we wanted to provide an update on hotel
room availability in Denver. As we mentioned in previous mailing list
communications, the negotiated hotel block at the Renaissance Stapleton was
extremely limited, and was based on the number of rooms that were sold in
the Atlanta hotel block back in February. The Denver block is officially
sold out, though the hotel still has rooms available, at a higher rate than
even a week ago.

We understand that affordable housing is important to the community, and
wanted to provide a list of nearby hotels for anyone looking for a lower
rate than is currently available at the Renaissance, where the PTG meetings
will be located (all prices are per night and based on a Hotels.com search
this morning for a room 9/11-9/15):

Super 8 Denver Stapleton: $93
Holiday Inn Denver East-Stapleton: $180
DoubleTree by Hilton Hotel Denver: $159
DoubleTree by Hilton Hotel Denver- Stapleton North: $159
Courtyard by Marriott Denver Stapleton: $169
Comfort Inn and Suites Stapleton: $98
Drury Inn and Suites Denver-Stapleton: $172

Let me know if you have any questions. Also, don’t forget to register here if 
you haven’t already! Looking forward to seeing everyone in Denver! 

Erin Disney
OpenStack Marketing
e...@openstack.org 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Proposal to require bugs for tech debt

2017-08-16 Thread Markus Zoeller
On 16.08.2017 02:59, Emilien Macchi wrote:
> On Tue, Aug 15, 2017 at 5:46 PM, Alex Schultz  wrote:
>> Hey folks,
>>
>> I'm proposing that, in order to track tech debt we're adding as part
>> of development, we create a way to track these items and not approve
>> them without a bug (and a reference to said bug)[0].  Please take a
>> moment to review the proposed policy and comment. I would like to
>> start this for the Queens cycle.
> 
> I also think we should frequently review the status of these bugs.
> Maybe unofficially from time to time and officially during milestone-3
> of each cycle.
> 
> I like the proposal so far, thanks.
> 

FWIW, for another (in-house) project, I created a page called "technical
debt" in the normal docs directory of the project. That way, I can add
the "reminder" with the same commit which introduced the technical debt
in the code. Similar to what OpenStack already does with the
release-notes. The list of technical debt items is then always visible
in the docs, rather than a query in the bug tracker with tags (or
something like that).
Just an idea; maybe it's applicable here.

-- 
Regards, Markus Zoeller (markus_z)

>> A real-world example of where this would be beneficial is the
>> workaround we had for buggy ssh[1]. This patch was merged 6 months ago
>> to work around an issue in ssh that was recently fixed. However we
>> would most likely never have remembered to revert this. It was only
>> because someone[2] spotted it and mentioned it that it is being
>> reverted now.
>>
>> Thanks,
>> -Alex
>>
>> [0] https://review.openstack.org/#/c/494044/
>> [1] 
>> https://review.openstack.org/#/q/6e8e27488da31b3b282fe1ce5e07939b3fa11b2f,n,z
>> [2] Thanks pabelanger
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] critical situation with CI / upgrade jobs

2017-08-16 Thread Paul Belanger
On Tue, Aug 15, 2017 at 11:06:20PM -0400, Wesley Hayutin wrote:
> On Tue, Aug 15, 2017 at 9:33 PM, Emilien Macchi  wrote:
> 
> > So far, we have 3 critical issues that we all need to address as
> > soon as we can.
> >
> > Problem #1: Upgrade jobs timeout from Newton to Ocata
> > https://bugs.launchpad.net/tripleo/+bug/1702955
> > Today I spent an hour looking at it and here's what I've found so far:
> > depending on which public cloud the TripleO CI jobs run on, it either
> > times out or it doesn't.
> > Here's an example of Heat resources that run in our CI:
> > https://www.diffchecker.com/VTXkNFuk
> > On the left, resources on a job that failed (running on internap) and
> > on the right (running on citycloud) one that worked.
> > I've been through all the upgrade steps and I haven't seen specific
> > tasks that take more time here or there, just many small changes that
> > add up to a big difference at the end (so it's hard to debug).
> > Note: both jobs use AFS mirrors.
> > Help on that front would be very welcome.
> >
> >
> > Problem #2: from Ocata to Pike (containerized) missing container upload
> > step
> > https://bugs.launchpad.net/tripleo/+bug/1710938
> > Wes has a patch (thanks!) that is currently in the gate:
> > https://review.openstack.org/#/c/493972
> > Thanks to that work, we managed to find the problem #3.
> >
> >
> > Problem #3: from Ocata to Pike: all container images are
> > uploaded/specified, even for services not deployed
> > https://bugs.launchpad.net/tripleo/+bug/1710992
> > The CI jobs are timing out during the upgrade process because
> > downloading + uploading _all_ containers in local cache takes more
> > than 20 minutes.
> > So this is where we are now: upgrade jobs time out on that. Steve Baker
> > is currently looking at it but we'll probably offer some help.
> >
> >
> > Solutions:
> > - for stable/ocata: make upgrade jobs non-voting
> > - for pike: keep upgrade jobs non-voting and release without upgrade
> > testing
> >
> > Risks:
> > - for stable/ocata: it's highly possible to inject regressions if jobs
> > aren't voting anymore.
> > - for pike: the quality of the release won't be good enough in terms of
> > CI coverage compared to Ocata.
> >
> > Mitigations:
> > - for stable/ocata: make jobs non-voting and ask our core reviewers
> > to pay double attention to what is landed. It should be temporary
> > until we manage to fix the CI jobs.
> > - for master: release RC1 without upgrade jobs and make progress
> > - Run TripleO upgrade scenarios as third party CI in RDO Cloud or
> > somewhere with resources and without timeout constraints.
> >
> > I would like some feedback on the proposal so we can move forward this
> > week,
> > Thanks.
> > --
> > Emilien Macchi
> >
> 
> I think, due to some of the limitations with run times upstream, we may
> need to rethink the workflow for upgrade tests upstream. It's not very
> clear to me what can be done with the multinode nodepool jobs beyond what
> is already being done.  I think we do have some choices with ovb jobs.
> I'm not going to try to solve this in this email, but rethinking how we
> CI upgrades in the upstream infrastructure should be a focus for the
> Queens PTG.  We will need to focus on bringing run times down
> significantly, as it's incredibly difficult to run two installs in 175
> minutes across all the upstream cloud providers.
> 
Can you explain in more detail where the bottlenecks are for the 175 mins?
That's just shy of 3 hours, and seems like more than enough time.

Not that it can be solved now, but maybe it is time to look at these jobs
the other way: how can we make them faster, and what optimizations need to
be made?

One example: we spend a lot of time rebuilding RPM packages with DLRN.  It
is possible that in zuulv3 we'll be able to change the CI workflow so only
one node builds a package, then all other jobs download the new packages
from that node.

Another thing we can look at is more parallel testing in place of serial.
I can't point to anything specific, but it would be helpful to sit down
with somebody to better understand all the back and forth between
undercloud / overcloud / multinode / etc.

> Thanks Emilien for all the work you have done around upgrades!
> 
> 
> 
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [tripleo] scenario006 conflict

2017-08-16 Thread Derek Higgins
On 19 July 2017 at 17:02, Derek Higgins  wrote:
> On 17 July 2017 at 15:56, Derek Higgins  wrote:
>> On 17 July 2017 at 15:37, Emilien Macchi  wrote:
>>> On Thu, Jul 13, 2017 at 6:01 AM, Emilien Macchi  wrote:
 On Thu, Jul 13, 2017 at 1:55 AM, Derek Higgins  wrote:
> On 12 July 2017 at 22:33, Emilien Macchi  wrote:
>> On Wed, Jul 12, 2017 at 2:23 PM, Emilien Macchi  
>> wrote:
>> [...]
>>> Derek, it seems like you want to deploy Ironic on scenario006
>>> (https://review.openstack.org/#/c/474802). I was wondering how it
>>> would work with multinode jobs.
>>
>> Derek, I also would like to point out that
>> https://review.openstack.org/#/c/474802 is missing the environment
>> file for non-containerized deployments & and also the pingtest file.
>> Just for the record, if we can have it before the job moves in gate.
>
> I knew I had left out the ping test file, this is the next step but I
> can create a noop one for now if you'd like?

 Please create a basic pingtest with common things we have in other 
 scenarios.

> Is the non-containerized deployments a requirement?

 Until we stop supporting non-containerized deployments, I would say yes.

>>
>> Thanks,
>> --
>> Emilien Macchi

 So if you create a libvirt domain, would it be possible to do it on
 scenario004 for example and keep coverage for other services that are
 already on scenario004? It would avoid to consume a scenario just for
 Ironic. If not possible, then talk with Flavio and one of you will
 have to prepare scenario007 or 0008, depending where Numans is in his
 progress to have OVN coverage as well.
>>>
>>> I haven't seen much resolution / answers about it. We still have the
>>> conflict right now and open questions.
>>>
>>> Derek, Flavio - let's solve this one this week if we can.
>> Yes, I'll be looking into using scenario004 this week. I was traveling
>> last week so wasn't looking at it.
>
> I'm not sure if this is what you had intended, but I believe to do
> this (i.e. test the nova ironic driver) we'll
> need to swap out the nova libvirt driver for the ironic one. I think
> this is ok as the libvirt driver has coverage
> in other scenarios.
>
> Because there are no virtual BMCs set up yet on the controller I also
> have to remove the instance creation,
> but if merged I'll work on adding these next. So I'm thinking
> something like this:
> https://review.openstack.org/#/c/485261/

Quick update here: after talking to Emilien about this, I'll extend the
patch to set up VirtualBMC instances rather than removing instance
creation, so it continues to test a Ceph-backed Glance.

>
>>
>>>
>>> Thanks,
>>> --
>>> Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [designate] Skip meeting this week

2017-08-16 Thread Graham Hayes
Hi All,

I have just realised I am double booked for the meeting time this week.

We are in a quiet period, so I suggest skipping this week's meeting.

Thanks,

Graham


0x23BA8E2E.asc
Description: application/pgp-keys


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] critical situation with CI / upgrade jobs

2017-08-16 Thread Emilien Macchi
On Wed, Aug 16, 2017 at 3:17 AM, Bogdan Dobrelya  wrote:
> We could limit the upstream multinode jobs scope to only do upgrade
> testing of a couple of the services deployed, like keystone and nova and
> neutron, or so.

That would be a huge regression in our CI. Strong -2 on this idea.
We worked hard to get pretty decent coverage during Ocata, and we're
not going to give it up easily.
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] critical situation with CI / upgrade jobs

2017-08-16 Thread Emilien Macchi
On Wed, Aug 16, 2017 at 12:37 AM, Marios Andreou  wrote:
> For Newton to Ocata, is it consistent which clouds we are timing out on?

It's not consistent but the rate is very high:
http://cistatus.tripleo.org/

gate-tripleo-ci-centos-7-multinode-upgrades - 30% success this week
gate-tripleo-ci-centos-7-scenario001-multinode-upgrades - 13% success
this week
gate-tripleo-ci-centos-7-scenario002-multinode-upgrades - 34% success
this week
gate-tripleo-ci-centos-7-scenario003-multinode-upgrades - 78% success
this week

(results on stable/ocata)

So as you can see results are not good at all for gate jobs.

> for master, +1. I think this is essentially what I am saying above for
> O...P - sounds like problem 2 is well in progress from weshay, and the
> other container/image-related problem 3 is the main outstanding item.
> Since RC1 is this week I think what you are proposing as mitigation is
> fair. So we re-evaluate making these jobs voting before the final RCs at
> the end of August.

We might need to help him, and see how we can accelerate this work now.

> thanks for putting this together. I think if we really had to pick one,
> the O...P CI obviously has priority this week (!)... I think the
> container/image-related issues for O...P are both expected teething
> issues from the huge amount of work done by the containerization team
> and can hopefully be resolved quickly.

I agree, priority is O..P for now - and getting these upgrade jobs working.
Note that the upgrade scenarios are not working correctly yet on
master; we'll need to figure that out as well. If you can help take a
look, that would be awesome.

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Suggestion about code review guide in microversion API

2017-08-16 Thread Takashi Natsume

Hi Nova Developers.

When the API microversion is bumped, patches are required not only on
the nova side but also on the python-novaclient side.


But after the nova patches were merged, the python-novaclient patches
were sometimes not submitted for a while.

The python-novaclient patch is necessary in order to submit a patch for
a subsequent microversion.

For example, the patch for microversion 2.55 should be merged
after the patch for microversion 2.54 is merged.
There is a dependency between them.

So I'm proposing an amendment to the code review guide [1].
The code review guide should be changed as follows:

-
A new patch for the microversion API change on the python-novaclient
side should be submitted *before the microversion change in Nova is
merged*.
-
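For what it's worth, when the two sides need to be tested together before
either merges, the usual Gerrit/Zuul cross-repo mechanism applies: the
python-novaclient patch can carry a Depends-On footer pointing at the
not-yet-merged nova change, e.g. (the Change-Ids here are made up):

    Add support for microversion 2.55

    <commit message body>

    Depends-On: I0123456789abcdef0123456789abcdef01234567
    Change-Id: Ifedcba9876543210fedcba9876543210fedcba98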

[1] https://review.openstack.org/#/c/494173/

Regards,
Takashi Natsume
NTT Software Innovation Center
E-mail: natsume.taka...@lab.ntt.co.jp


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage][mistral] Announcing Vitrage integration with Mistral

2017-08-16 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi all,

I’d like to announce the Vitrage integration with Mistral that was developed
in the Pike cycle.

As part of this integration, the Vitrage user can specify in a Vitrage
template that, in case a certain condition is met, a Mistral workflow
should be executed. The condition can include a combination of alarms,
resources, and root cause analysis information.

This capability can be used, for example, to take corrective actions in case 
Vitrage detects a failure in the system. A more powerful use case could be to 
take different corrective actions based on the different root cause alarms, as 
identified by Vitrage.

You are welcome to start using this integration and suggest new use cases. More 
information can be found here[1][2].

[1] https://docs.openstack.org/vitrage/latest/contributor/mistral-config.html 
[2] https://docs.openstack.org/vitrage/latest/contributor/vitrage-template-format.html

Best Regards,
Ifat.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [watcher] cancelling weekly meeting

2017-08-16 Thread Alexander Chadin
Hi, watcher folks.

We will not have the weekly meeting today because of unforeseen
circumstances on my side.
Here is our agenda: 
https://wiki.openstack.org/wiki/Watcher_Meeting_Agenda#08.2F16.2F2017
Please take a look at it and reply with any questions you have.
Weekly meetings will resume next week.

Best regards
_
Alexander Chadin
OpenStack Developer
Servionica LTD
a.cha...@servionica.ru
+7 (916) 693-58-81
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] critical situation with CI / upgrade jobs

2017-08-16 Thread Bogdan Dobrelya
On 16.08.2017 3:33, Emilien Macchi wrote:
> So far, we have 3 critical issues that we all need to address as
> soon as we can.
> 
> Problem #1: Upgrade jobs timeout from Newton to Ocata
> https://bugs.launchpad.net/tripleo/+bug/1702955
> Today I spent an hour looking at it and here's what I've found so far:
> depending on which public cloud the TripleO CI jobs run on, it either
> times out or it doesn't.
> Here's an example of Heat resources that run in our CI:
> https://www.diffchecker.com/VTXkNFuk
> On the left, resources on a job that failed (running on internap) and
> on the right (running on citycloud) one that worked.
> I've been through all the upgrade steps and I haven't seen specific
> tasks that take more time here or there, just many small changes that
> add up to a big difference at the end (so it's hard to debug).
> Note: both jobs use AFS mirrors.
> Help on that front would be very welcome.
> 
> 
> Problem #2: from Ocata to Pike (containerized) missing container upload step
> https://bugs.launchpad.net/tripleo/+bug/1710938
> Wes has a patch (thanks!) that is currently in the gate:
> https://review.openstack.org/#/c/493972
> Thanks to that work, we managed to find the problem #3.
> 
> 
> Problem #3: from Ocata to Pike: all container images are
> uploaded/specified, even for services not deployed
> https://bugs.launchpad.net/tripleo/+bug/1710992
> The CI jobs are timing out during the upgrade process because
> downloading + uploading _all_ containers in local cache takes more
> than 20 minutes.
> So this is where we are now: upgrade jobs time out on that. Steve Baker
> is currently looking at it but we'll probably offer some help.
> 
> 
> Solutions:
> - for stable/ocata: make upgrade jobs non-voting
> - for pike: keep upgrade jobs non-voting and release without upgrade testing

This doesn't look like a viable option to me. I'd prefer to reduce the
scope of the upgrade testing (the deployed services under test), but
release only with it passing for that scope.

> 
> Risks:
> - for stable/ocata: it's highly possible to inject regressions if jobs
> aren't voting anymore.
> - for pike: the quality of the release won't be good enough in terms of
> CI coverage compared to Ocata.
> 
> Mitigations:
> - for stable/ocata: make jobs non-voting and require our
> core reviewers to pay double attention to what is landed. It should be
> temporary until we manage to fix the CI jobs.
> - for master: release RC1 without upgrade jobs and make progress
> - Run TripleO upgrade scenarios as third party CI in RDO Cloud or
> somewhere with resources and without timeout constraints.
> 
> I would like some feedback on the proposal so we can move forward this week,
> Thanks.
> 


-- 
Best regards,
Bogdan Dobrelya,
IRC: #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] critical situation with CI / upgrade jobs

2017-08-16 Thread Bogdan Dobrelya
On 16.08.2017 5:06, Wesley Hayutin wrote:
> 
> 
On Tue, Aug 15, 2017 at 9:33 PM, Emilien Macchi wrote:
> 
> So far we have 3 critical issues that we all need to address as
> soon as we can.
> 
> Problem #1: Upgrade jobs timeout from Newton to Ocata
> https://bugs.launchpad.net/tripleo/+bug/1702955
> 
> Today I spent an hour looking at it and here's what I've found so far:
> depending on which public cloud the TripleO CI jobs run on, they time
> out or not.
> Here's an example of Heat resources that run in our CI:
> https://www.diffchecker.com/VTXkNFuk
> 
> On the left, resources on a job that failed (running on internap) and
> on the right (running on citycloud) it worked.
> I've been through all the upgrade steps and I haven't seen specific tasks
> that take more time here or there, just many small changes that add up
> to a big difference at the end (so it's hard to debug).
> Note: both jobs use AFS mirrors.
> Help on that front would be very welcome.
> 
> 
> Problem #2: from Ocata to Pike (containerized) missing container
> upload step
> https://bugs.launchpad.net/tripleo/+bug/1710938
> 
> Wes has a patch (thanks!) that is currently in the gate:
> https://review.openstack.org/#/c/493972
> 
> Thanks to that work, we managed to find the problem #3.
> 
> 
> Problem #3: from Ocata to Pike: all container images are
> uploaded/specified, even for services not deployed
> https://bugs.launchpad.net/tripleo/+bug/1710992
> 
> The CI jobs are timing out during the upgrade process because
> downloading + uploading _all_ containers into the local cache takes more
> than 20 minutes.
> So this is where we are now: upgrade jobs time out on that. Steve Baker
> is currently looking at it but we'll probably offer some help.
> 
> 
> Solutions:
> - for stable/ocata: make upgrade jobs non-voting
> - for pike: keep upgrade jobs non-voting and release without upgrade
> testing
> 
> Risks:
> - for stable/ocata: it's highly possible to inject regressions if jobs
> aren't voting anymore.
> - for pike: the quality of the release won't be good enough in terms of
> CI coverage compared to Ocata.
> 
> Mitigations:
> - for stable/ocata: make jobs non-voting and require our
> core reviewers to pay double attention to what is landed. It should be
> temporary until we manage to fix the CI jobs.
> - for master: release RC1 without upgrade jobs and make progress
> - Run TripleO upgrade scenarios as third party CI in RDO Cloud or
> somewhere with resources and without timeout constraints.
> 
> I would like some feedback on the proposal so we can move forward
> this week,
> Thanks.
> --
> Emilien Macchi
> 
> 
> I think that, due to some of the limitations on run times upstream, we may
> need to rethink the workflow for upgrade tests upstream. It's not very
> clear to me what can be done with the multinode nodepool jobs beyond
> what is already being done.  I think we do have some choices with ovb

We could limit the scope of the upstream multinode jobs to upgrade-test
only a couple of the deployed services, such as keystone, nova, and
neutron.

> jobs. I'm not going to try to solve this in this email, but rethinking how
> we CI upgrades in the upstream infrastructure should be a focus for the
> Queens PTG. We will need to focus on bringing run times down
> significantly, as it's incredibly difficult to run two installs in 175
> minutes across all the upstream cloud providers.
> 
> Thanks Emilien for all the work you have done around upgrades!
> 
>  
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
Best regards,
Bogdan Dobrelya,
IRC: #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

[openstack-dev] [vitrage][ptl] PTL on vacation

2017-08-16 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi,

I’m going to be on vacation between August 17th and 26th.
Idan Hefetz (idan_hefetz on IRC) will replace me while I’m away and will manage 
the Vitrage release.

Thanks,
Ifat.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [FEMDC] Meeting Cancelled - Next IRC meeting August, the 30th

2017-08-16 Thread lebre . adrien
Dear All, 


Today's IRC meeting is cancelled (due to the holiday period, most of us are 
unavailable). 
The next meeting will be held on August 30th (agenda: TBD on the etherpad, as 
usual).

Last but not least, if you are interested in FEMDC's challenges and you are 
located close to SF, do not hesitate to attend the OpenDev event this 
September: http://www.opendevconf.com/schedule/


Best, 
Ad_rien_

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Secrets of edit-constraints

2017-08-16 Thread Tony Breeds
On Mon, Aug 14, 2017 at 08:36:33AM +, Csatari, Gergely (Nokia - 
HU/Budapest) wrote:
> Hi,
> 
> I have an interesting situation with the parametrization of edit-constraints 
> in tools/tox_install.sh. It happens at the moment in neutron-lib, but as 
> amotoki pointed out in [1] the same should happen in any project (and it 
> was actually happening for me in Vitrage and Mistral).
> 
> Here is what I experience:
> With the current parameters of edit-constraints (edit-constraints $localfile 
> -- $LIB_NAME "-e file://$PWD#egg=$LIB_NAME") the library itself (neutron-lib 
> in this case) is added to upper-constraints.txt and the installation fails 
> with "Could not satisfy constraints for 'neutron-lib': installation from path 
> or url cannot be constrained to a version".
> If I modify the parameters of edit-constraints so that it removes the 
> library (neutron-lib in this case) instead of adding it (edit-constraints 
> $localfile $LIB_NAME --), my build succeeds (as I'm playing with api-ref I 
> use tox -r -e api-ref, but the same also happens with tox -r -e pep8).
> 
> Is this happening only to me?

No. Using edit-constraints to remove an item from the constrained set, so
that you can use the current development version (git SHA), is the right
thing to do.

Many of the projects in the scenario you're describing do just that.
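
For readers hitting the same "Could not satisfy constraints" error, here is a
minimal Python sketch of what the removal form of edit-constraints effectively
does to upper-constraints.txt: drop the line pinning the library under test,
so the local checkout can be pip-installed unconstrained. This is an
illustrative approximation, not the real openstack-requirements
implementation; the script name and function names are made up.

    import sys

    def drop_constraint(constraints_path, lib_name):
        # upper-constraints.txt pins each package as e.g. "neutron-lib===1.9.0";
        # keep every line except the one for the library under test.
        with open(constraints_path) as f:
            lines = f.readlines()
        kept = [line for line in lines
                if line.split('===')[0].strip().lower() != lib_name.lower()]
        with open(constraints_path, 'w') as f:
            f.writelines(kept)

    if __name__ == '__main__':
        # e.g.: python drop_constraint.py .tox/pep8/upper-constraints.txt neutron-lib
        drop_constraint(sys.argv[1], sys.argv[2])

With the library gone from the constrained set, pip no longer tries to pin an
installation "from path or url" to a version, which is exactly the failure
reported above.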

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] l2gw

2017-08-16 Thread Lajos Katona

Hi,

We faced an issue with l2-gw-update: if there are network connections for a 
gateway, the update throws an L2GatewayInUse exception, so an update is only 
possible by first deleting the connections, performing the update, and then 
adding the connections back.


It is not exactly clear why this restriction is in the code (at least I 
can't find a rationale in the docs, in code comments, or in the review).

As far as I can see, the check for network connections was introduced in this patch:
https://review.openstack.org/#/c/144097 
(https://review.openstack.org/#/c/144097/21..22/networking_l2gw/db/l2gateway/l2gateway_db.py)
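
For context, the guard behaves roughly like the sketch below. This is
illustrative only; the class, helper, and message here are assumptions, not
the exact networking-l2gw code.

    class L2GatewayInUse(Exception):
        def __init__(self, gateway_id):
            super(L2GatewayInUse, self).__init__(
                'L2 gateway %s is in use; delete its network connections '
                'before updating it.' % gateway_id)

    class L2GatewayDbMixin(object):
        """Illustrative stand-in for networking_l2gw's l2gateway_db mixin."""

        def _get_l2gw_connections(self, context, l2gw_id):
            # Hypothetical helper: the real code queries the l2gateway
            # connections table for rows referencing l2gw_id.
            return []

        def update_l2_gateway(self, context, l2gw_id, l2_gateway):
            # The check in question: refuse to update a gateway while any
            # network connection still references it, before touching any
            # of its devices or interfaces.
            if self._get_l2gw_connections(context, l2gw_id):
                raise L2GatewayInUse(l2gw_id)
            # ... the actual update of devices/interfaces would happen here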


Could you please give me a little background on why the update operation is 
not allowed on an l2gw with network connections?


Thanks in advance for the help.

Regards
Lajos

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [publiccloud-wg] Reminder meeting PublicCloudWorkingGroup

2017-08-16 Thread Tobias Rydberg

Hi everyone,

Don't forget today's meeting for the PublicCloudWorkingGroup:
1400 UTC in IRC channel #openstack-meeting-3.

Etherpad: https://etherpad.openstack.org/p/publiccloud-wg

Regards,
Tobias Rydberg


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] QoS meeting cancelled, next meeting 29/08/2017

2017-08-16 Thread Alonso Hernandez, Rodolfo
Hello:

Yesterday, by mistake, I didn't send an email saying the QoS meeting was 
cancelled. During these weeks, our focus is on the RC and on solving possible bugs.

Next meeting (no excuses) will be on 29/08/2017.

Regards.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] critical situation with CI / upgrade jobs

2017-08-16 Thread Marios Andreou
On Wed, Aug 16, 2017 at 4:33 AM, Emilien Macchi  wrote:

> So far we have 3 critical issues that we all need to address as
> soon as we can.
>
> Problem #1: Upgrade jobs timeout from Newton to Ocata
> https://bugs.launchpad.net/tripleo/+bug/1702955
> Today I spent an hour looking at it and here's what I've found so far:
> depending on which public cloud the TripleO CI jobs run on, they time
> out or not.
> Here's an example of Heat resources that run in our CI:
> https://www.diffchecker.com/VTXkNFuk
> On the left, resources on a job that failed (running on internap) and
> on the right (running on citycloud) it worked.
> I've been through all the upgrade steps and I haven't seen specific tasks
> that take more time here or there, just many small changes that add up
> to a big difference at the end (so it's hard to debug).
> Note: both jobs use AFS mirrors.
> Help on that front would be very welcome.
>
>
> Problem #2: from Ocata to Pike (containerized) missing container upload
> step
> https://bugs.launchpad.net/tripleo/+bug/1710938
> Wes has a patch (thanks!) that is currently in the gate:
> https://review.openstack.org/#/c/493972
> Thanks to that work, we managed to find the problem #3.
>
>
> Problem #3: from Ocata to Pike: all container images are
> uploaded/specified, even for services not deployed
> https://bugs.launchpad.net/tripleo/+bug/1710992
> The CI jobs are timing out during the upgrade process because
> downloading + uploading _all_ containers into the local cache takes more
> than 20 minutes.
> So this is where we are now: upgrade jobs time out on that. Steve Baker
> is currently looking at it but we'll probably offer some help.
>
>
> Solutions:
> - for stable/ocata: make upgrade jobs non-voting
> - for pike: keep upgrade jobs non-voting and release without upgrade
> testing
>
>
+1, but for Ocata to Pike it sounds like the container/image-related
problems #2 and #3 above are both in progress or being looked at
(weshay/sbaker ++), in which case we might be able to fix the O...P jobs at
least?

For Newton to Ocata, is it consistent which clouds we are timing out on? I've
looked at https://bugs.launchpad.net/tripleo/+bug/1702955 before, and I know
other folks from upgrades have too, but we couldn't find a root cause, or any
upgrade operations taking too long, timing out, erroring, etc. If it is
consistent which clouds time out, we can use that info to guide us if we make
the jobs non-voting for N...O (e.g. a known list of 'timing out clouds' to
decide whether we should inspect the CI logs more closely before merging a
patch). Obviously only until/unless we actually root-cause that one (I will
also find some time to check again).



> Risks:
> - for stable/ocata: it's highly possible to inject regressions if jobs
> aren't voting anymore.
> - for pike: the quality of the release won't be good enough in terms of
> CI coverage compared to Ocata.
>
> Mitigations:
> - for stable/ocata: make jobs non-voting and require our
> core reviewers to pay double attention to what is landed. It should be
> temporary until we manage to fix the CI jobs.
> - for master: release RC1 without upgrade jobs and make progress
>

For master, +1. I think this is essentially what I am saying above for
O...P: it sounds like problem #2 is well in progress from weshay, and the
other container/image-related problem #3 is the main outstanding item. Since
RC1 is this week, I think what you are proposing as mitigation is fair. So we
re-evaluate making these jobs voting before the final RCs at the end of August.


> - Run TripleO upgrade scenarios as third party CI in RDO Cloud or
> somewhere with resources and without timeout constraints.


> I would like some feedback on the proposal so we can move forward this
> week,
> Thanks.
>


Thanks for putting this together. I think if we really had to pick one, the
O...P CI obviously has priority this week (!). I think the
container/image-related issues for O...P are both expected teething issues
from the huge amount of work done by the containerization team, and can
hopefully be resolved quickly.

marios



> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Senlin] Proposing Liyi as core reviewer

2017-08-16 Thread x Lyn
+1!
He is a qualified contributor!

> On Aug 16, 2017, at 12:50, Qiming Teng  wrote:
> 
> Dear Senlin cores,
> 
> As you might have witnessed, Liyi has been a solid contributor to Senlin
> during Pike cycle. His patches are of high quality, showing that he has
> a good grasp of the design and the code. Based on the contribution
> record and his agreement to work with the team more closely, I'm
> proposing adding him to the core team.
> 
> Please reply to this email if there are any concerns or comments.
> 
> Happy coding!
> 
> - Qiming
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Re: [Senlin] Proposing Liyi as core reviewer

2017-08-16 Thread liu.xuefeng1
+1

Welcome, Liyi.






Original Mail

From: 
To: 
Date: 2017-08-16 12:57
Subject: [openstack-dev] [Senlin] Proposing Liyi as core reviewer





Dear Senlin cores,

As you might have witnessed, Liyi has been a solid contributor to Senlin
during Pike cycle. His patches are of high quality, showing that he has
a good grasp of the design and the code. Based on the contribution
record and his agreement to work with the team more closely, I'm
proposing adding him to the core team.

Please reply to this email if there are any concerns or comments.

Happy coding!

- Qiming


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev