[openstack-dev] [Designate] Plan for OSM

2018-04-25 Thread da...@vn.fujitsu.com
Hi folks,

We tested and completed our work on the OVO migration in the Queens cycle.
Now we can continue with the OSM implementation for Designate.
We have already pushed some patches related to OSM[1], and they are ready for
review.

Please take a look at these patches.

[1] 
https://review.openstack.org/#/q/project:openstack/designate+status:open++Trigger-less
 

Thanks and Best regards,

Dang Van Dai (Mr.)
Fujitsu Vietnam Ltd.





Re: [openstack-dev] [requirements][horizon][neutron] plugins depending on services

2018-04-25 Thread Shu M.
Hi folks,

> unwinding things
> 
>
> There are a few different options, but it's important to keep in mind
> that we ultimately want all of the following:
>
> * The code works
> * Tests can run properly in CI
> * "Depends-On" works in CI so that you can test changes cross-repo
> * Tests can run properly locally for developers
> * Deployment requirements are accurately communicated to deployers

One more thing:
* Test environments for CI and local runs should be as similar as possible.

To run tests successfully both in CI and locally, last night I tried adding a
new testenv for local use to tox.ini
(https://review.openstack.org/#/c/564099/4/tox.ini) as an alternative solution;
this would be similar to adding a separate requirements.txt for local checks.
It seems to run fine, but it might introduce differences between the CI and
local environments, so I cannot conclude that this is the best way for now.
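
For reference, the idea is roughly the sketch below (the env name and the git
URL line are only illustrative, not necessarily what the patch proposes):

  [testenv:local]
  # Same as the regular testenv, but pull horizon from git master instead of
  # a released package, so a local run sees roughly the same horizon that CI
  # checks out for the cross-repo jobs.
  deps =
    {[testenv]deps}
    -egit+https://git.openstack.org/openstack/horizon#egg=horizon
  commands = {[testenv]commands}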

From the point of view of a horizon plugin developer, one of the issues between
horizon and its plugins is the feature gap caused by insufficient communication.
Merging the development repositories could help with this issue, if horizon can
separate each of its panels into plugins.

Thanks,
Shu Muto


Re: [openstack-dev] [masakari] Masakari Project Meeting time

2018-04-25 Thread Bhor, Dinesh
+1
This time may not fit attendees who work in the IST time zone, as it will be
07:30 in the morning there.

Thanks,
Dinesh

> On Apr 26, 2018, at 12:06 AM, Kwan, Louie  wrote:
>
> Sampath, Dinesh and others,
>
> It was a good meeting last week.
>
> As briefly discussed with Sampath, I would like to check whether we can 
> adjust the meeting time.
>
> We are in the EST time zone, so the meeting falls right at our midnight, 12:00 am.
>
> It would be nice if the meeting could start ~2 hours earlier, e.g. could it
> be started at 02:00 UTC instead?
>
> Thanks.
> Louie
>
>



Re: [openstack-dev] [masakari] Masakari Project Meeting time

2018-04-25 Thread Patil, Tushar
>> We are in the EST time zone, so the meeting falls right at our midnight, 12:00 am.
>> It would be nice if the meeting could start ~2 hours earlier, e.g. could it
>> be started at 02:00 UTC instead?
+1


Regards,
Tushar Patil


From: Kwan, Louie 
Sent: Thursday, April 26, 2018 12:06:00 AM
To: Sampath Priyankara (samP); openstack-dev@lists.openstack.org
Subject: [openstack-dev]  [masakari] Masakari Project Meeting time

Sampath, Dinesh and others,

It was a good meeting last week.

As briefly discussed with Sampath, I would like to check whether we can adjust 
the meeting time.

We are in the EST time zone, so the meeting falls right at our midnight, 12:00 am.

It would be nice if the meeting could start ~2 hours earlier, e.g. could it be
started at 02:00 UTC instead?

Thanks.
Louie





Re: [openstack-dev] [openstack-infra] How to take over a project?

2018-04-25 Thread Sangho Shin
Miguel and Ihar,

Thank you so much for your help.
For now, I just allowed the onos-core team to create references, which should
allow me to create a new stable branch, I believe.
I am waiting for the config change to be merged. :-)

Thank you,

Sangho



> On 25 Apr 2018, at 11:46 PM, Ihar Hrachyshka  wrote:
> 
> ONOS is not part of Neutron and hence Neutron Release team should not
> be involved in its matters. If gerrit ACLs say otherwise, you should
> fix the ACLs.



Re: [openstack-dev] [all][tc] final stages of python 3 transition

2018-04-25 Thread Jeremy Stanley
On 2018-04-25 18:25:24 -0400 (-0400), Doug Hellmann wrote:
[...]
> Does 18.04 include a python 2 option?

Yes, 2.7.15 if packages.ubuntu.com is to be believed.

> What is the target for completing the changeover? The first or
> second milestone for Stein, or the end of the cycle?

I would expect us to want to pull the trigger after whatever grace
period the cycle-trailing projects are comfortable with (but
certainly before the first milestone I would think?).

> It would be useful to have some input from the project teams who
> have no unit or functional test jobs running for 3.5, since they
> will have the most work to do to cope with the upgrade overall.

Yes, it would in my opinion anyway.

> Who is coordinating Ubuntu upgrade work and setting up the
> experimental jobs?

We have preliminary ubuntu-bionic images available already
(officially it doesn't release until tomorrow), and some teams have
started to use them for experimental or non-voting jobs:

http://codesearch.openstack.org/?q=ubuntu-bionic
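
As a rough illustration, opting in usually amounts to something like the
following in a project's .zuul.yaml (the job name is made up, and the real
parent job and nodeset names should be taken from the current
openstack-zuul-jobs definitions):

  - job:
      name: myproject-tox-py36-bionic
      parent: openstack-tox-py36
      # run on the new image rather than the default ubuntu-xenial nodes
      nodeset: ubuntu-bionic

  - project:
      experimental:
        jobs:
          - myproject-tox-py36-bionic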

-- 
Jeremy Stanley




Re: [openstack-dev] [all][tc] final stages of python 3 transition

2018-04-25 Thread Clark Boylan
On Wed, Apr 25, 2018, at 3:25 PM, Doug Hellmann wrote:
> Excerpts from Jeremy Stanley's message of 2018-04-25 21:40:37 +:
> > On 2018-04-25 16:54:46 -0400 (-0400), Doug Hellmann wrote:
> > [...]
> > > Still, we need to press on to the next phase of the migration, which
> > > I have been calling "Python 3 first". This is where we use python
> > > 3 as the default, for everything, and set up the exceptions we need
> > > for anything that still requires python 2.
> > [...]
> > 
> > It may be worth considering how this interacts with the switch of
> > our default test platform from Ubuntu 16.04 (which provides Python
> > 3.5) to 18.04 (which provides Python 3.6). If we switch from 3.5 to
> > 3.6 before we change most remaining jobs over to Python 3.x versions
> > then it gives us a chance to spot differences between 3.5 and 3.6 at
> > that point. Given that the 14.04 to 16.04 migration, where we
> > attempted to allow projects to switch at their own pace, didn't go
> > so well we're hoping to do a "big bang" migration instead for 18.04
> > and expect teams who haven't set up experimental jobs ahead of time
> > to work out remaining blockers after the flag day before they can go
> > back to business as usual. Since the 18.04 release is happening so
> > far into the Rocky cycle, we're likely to want to do that at the
> > start of Stein instead when it will be less disruptive.
> > 
> > So I guess that raises the question: switch to Python 3.5 by default
> > for most jobs in Rocky and then have a potentially more disruptive
> > default platform switch with Python 3.5->3.6 at the beginning of
> > Stein, or wait until the default platform switch to move from Python
> > 2.7 to 3.6 as the job default? I can see some value in each option.
> 
> Does 18.04 include a python 2 option?

It does, https://packages.ubuntu.com/bionic/python2.7.

> 
> What is the target for completing the changeover? The first or
> second milestone for Stein, or the end of the cycle?

Previously we've tried to do the transition in the OpenStack release that is
under development when the LTS releases. However, we've offset things a bit
now, so that may not be as feasible. I would expect that if we waited for the
next cycle we would do it very early in that cycle.

For the transition from python 3.5 on Xenial to 3.6 on Bionic we may want to
keep the python 3.5 jobs on Xenial but add non-voting python 3.6 jobs to every
project running Xenial python 3.5 jobs. Those projects can then toggle them to
voting 3.6 jobs if/when they start working. We can then decide at a later time
whether continuing to support (and test) python 3.5 is worthwhile.
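
In a project's .zuul.yaml that would look roughly like the sketch below (the
job names are illustrative and would match whatever job definitions we end up
publishing):

  - project:
      check:
        jobs:
          - openstack-tox-py35
          # non-voting until the project confirms its tests pass on 3.6
          - openstack-tox-py36:
              voting: false
      gate:
        jobs:
          - openstack-tox-py35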

> 
> It would be useful to have some input from the project teams who
> have no unit or functional test jobs running for 3.5, since they
> will have the most work to do to cope with the upgrade overall.
> 
> Who is coordinating Ubuntu upgrade work and setting up the experimental
> jobs?

Paul Belanger has been doing much of the work to get the images up and running 
and helping some projects start to run early jobs on the beta images. I expect 
Paul would want to continue to carry the transition through to the end.

Clark



Re: [openstack-dev] [all][tc] final stages of python 3 transition

2018-04-25 Thread Doug Hellmann
Excerpts from Jeremy Stanley's message of 2018-04-25 21:40:37 +:
> On 2018-04-25 16:54:46 -0400 (-0400), Doug Hellmann wrote:
> [...]
> > Still, we need to press on to the next phase of the migration, which
> > I have been calling "Python 3 first". This is where we use python
> > 3 as the default, for everything, and set up the exceptions we need
> > for anything that still requires python 2.
> [...]
> 
> It may be worth considering how this interacts with the switch of
> our default test platform from Ubuntu 16.04 (which provides Python
> 3.5) to 18.04 (which provides Python 3.6). If we switch from 3.5 to
> 3.6 before we change most remaining jobs over to Python 3.x versions
> then it gives us a chance to spot differences between 3.5 and 3.6 at
> that point. Given that the 14.04 to 16.04 migration, where we
> attempted to allow projects to switch at their own pace, didn't go
> so well we're hoping to do a "big bang" migration instead for 18.04
> and expect teams who haven't set up experimental jobs ahead of time
> to work out remaining blockers after the flag day before they can go
> back to business as usual. Since the 18.04 release is happening so
> far into the Rocky cycle, we're likely to want to do that at the
> start of Stein instead when it will be less disruptive.
> 
> So I guess that raises the question: switch to Python 3.5 by default
> for most jobs in Rocky and then have a potentially more disruptive
> default platform switch with Python 3.5->3.6 at the beginning of
> Stein, or wait until the default platform switch to move from Python
> 2.7 to 3.6 as the job default? I can see some value in each option.

Does 18.04 include a python 2 option?

What is the target for completing the changeover? The first or
second milestone for Stein, or the end of the cycle?

It would be useful to have some input from the project teams who
have no unit or functional test jobs running for 3.5, since they
will have the most work to do to cope with the upgrade overall.

Who is coordinating Ubuntu upgrade work and setting up the experimental
jobs?

Doug



Re: [openstack-dev] [all][tc] final stages of python 3 transition

2018-04-25 Thread Jeremy Stanley
On 2018-04-25 16:54:46 -0400 (-0400), Doug Hellmann wrote:
[...]
> Still, we need to press on to the next phase of the migration, which
> I have been calling "Python 3 first". This is where we use python
> 3 as the default, for everything, and set up the exceptions we need
> for anything that still requires python 2.
[...]

It may be worth considering how this interacts with the switch of
our default test platform from Ubuntu 16.04 (which provides Python
3.5) to 18.04 (which provides Python 3.6). If we switch from 3.5 to
3.6 before we change most remaining jobs over to Python 3.x versions
then it gives us a chance to spot differences between 3.5 and 3.6 at
that point. Given that the 14.04 to 16.04 migration, where we
attempted to allow projects to switch at their own pace, didn't go
so well we're hoping to do a "big bang" migration instead for 18.04
and expect teams who haven't set up experimental jobs ahead of time
to work out remaining blockers after the flag day before they can go
back to business as usual. Since the 18.04 release is happening so
far into the Rocky cycle, we're likely to want to do that at the
start of Stein instead when it will be less disruptive.

So I guess that raises the question: switch to Python 3.5 by default
for most jobs in Rocky and then have a potentially more disruptive
default platform switch with Python 3.5->3.6 at the beginning of
Stein, or wait until the default platform switch to move from Python
2.7 to 3.6 as the job default? I can see some value in each option.
-- 
Jeremy Stanley




[openstack-dev] Only a Few Hours Left Until Prices Increase - OpenStack Summit Vancouver

2018-04-25 Thread Kendall Waters
Hi everyone,

Friendly reminder that prices for the OpenStack Summit Vancouver 
 will be increasing TONIGHT at 
11:59pm PT (April 26, 6:59 UTC). 

Register NOW  before the 
price increases!

Also, if you haven’t booked your hotel yet, we still have a limited number of 
reduced rate hotel rooms available here 
.

If you have any Summit-related questions, please contact sum...@openstack.org 
.

Cheers,
Kendall

Kendall Waters
OpenStack Marketing
kend...@openstack.org


[openstack-dev] Summit Forum Schedule

2018-04-25 Thread Jimmy McArthur

Hi everyone -

Please have a look at the Vancouver Forum schedule: 
https://docs.google.com/spreadsheets/d/15hkU0FLJ7yqougCiCQgprwTy867CGnmUvAvKaBjlDAo/edit?usp=sharing 
(also attached as a CSV) The proposed schedule was put together by two 
members from UC, TC and Foundation.


We do our best to avoid moving scheduled items around as it tends to 
create a domino effect, but we do realize we might have missed 
something.  The schedule should generally be set, but if you see a major 
conflict in either content or speaker availability, please email 
speakersupp...@openstack.org.


Thanks all,
Jimmy
Day,Time,"Room1 (Community, Ops, Infra)",Room2 (Feature/Future Discussion),Room3
Mon,11:35-12:15,Forum 101,"Manila Ops feedback: running at scale, barriers to deployment",
,1:30-2:10,Users / Operators adoption of QA tools / plugins,Standalone Cinder Introduction,Continuing the Migration: Launchpad -> StoryBoard
,2:20-3:00,Python Testing Interface,Requirements for Resource Reservation,A New Face and Place for the OpenStack Mentoring Program
,3:10-3:50,Machine Learning for CI Results Analysis,Building the path to extracting Placement from Nova,First Contact SIG Operator Inclusion
,4:20-5:00,SDK Testing and Certification,Planning to use Placement in Cinder,Ops/Devs: One community
,5:10-5:50,Python 2 Deprecation Timeline,Cyborg/FPGA Support for Cloud/NFV,Ops Meetups Team: catch-up and PTG merger discussion
Tue,9:00-9:40,Users & Ops feedback for Heat,Image handling in an edge cloud infrastructure,
,9:50-10:30,User and Ops Feedback session for OpenStack-Ansible,,
,11:00-11:40,Users & Ops feedback for Monasca (Monitoring & Logging aaS),Cinder High Availability (HA) Discussion,
,11:50-12:30,Ironic Ops and User feedback,Multi-attach introduction and future direction,
,1:50-2:30,Upgrading OpenStack - war stories,Update on the gaps identified by ETSI NFV in OpenStack,
,2:40-3:20,,Possible edge architectures for Keystone,
,3:30-4:10,nova/neutron + ops cross-project session,Supporting General Federation for Large-Scale Collaborations,
,4:40-5:20,CellsV2 migration process sync with operators,Fault Management/Monitoring for NFV/Edge/5G/IoT,
,5:30-6:10,Keystone Feedback Session,,
Wed,9:00-9:40,Fast Forward Upgrades Part I - Current State,Pre-emptible instances - the way forward,
,9:50-10:30,Fast Forward Upgrades Part II - Future Plans and WIP,Cinder's Documentation Discussion,
,11:00-11:40,"Interoperable Image Import Feedback, experiences, what next",Making NFV features easier to use,
,11:50-12:30,Kubernetes Cloud Provider OpenStack Feature Planning,DPDK/SR-IOV NFV Operational issues and way forward,
,1:50-2:30,Designate Ops / User / Distro Feedback session,Private Enterprise Cloud issues,
,2:40-3:20,Rolling maintenance and upgrade in interaction with VNFM,Missing features in OpenStack for public clouds,
,3:30-4:10,TripleO Ops and User feedback,OpenStack Passport Program - feedback and next step,
,4:40-5:20,Glance Operators'/Users Feedback,Drafting Requirements for Organisations Contributing to Open,
,5:30-6:10,OpenStack Operators Community documentation,TripleO and Ansible integration,
Thu,9:00-9:40,"Extended Maintenance part I: past, present and future",Neutron project handling of its RYU dependency,
,9:50-10:30,Extended Maintenance part II: EM and release cycles,[neutron] extend-logging-api,
,11:00-11:40,"OpenStack is ""mature"" -- time to get serious on Maintainers",[neutron-fwaas] firewall l7 fitlering,
,11:50-12:30,S Release goals,Image Lifecycle Management,
,1:50-2:30,"Official projects and the boundary of ""what is OpenStack""",Default Roles,
,2:40-3:20,TC Retrospective,Unified Limits,
,3:30-4:10,Cross-community governance between OSF projects,Advanced RCA use cases - taking Vitrage to the next level,
,4:40-5:20,Adjutant official project status,Vitrage RCA over K8s. Pets and Cattle - Monitor each cow?,
,5:30-6:10,,,


[openstack-dev] [all][tc] final stages of python 3 transition

2018-04-25 Thread Doug Hellmann
It's time to talk about the next steps in our migration from python
2 to python 3.

Up to this point we have mostly focused on reaching a state where
we support both versions of the language. We are not quite there
with all projects, as you can see by reviewing the test coverage
status information at
https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects

Still, we need to press on to the next phase of the migration, which
I have been calling "Python 3 first". This is where we use python
3 as the default, for everything, and set up the exceptions we need
for anything that still requires python 2.

To reach that stage, we need to:

1. Change the documentation and release notes jobs to use python 3.
   (The Oslo team recently completed this, and found that we did
   need to make a few small code changes to get them to work.)
2. Change (or duplicate) all functional test jobs to run under
   python 3.
3. Change the packaging jobs to use python 3.
4. Update devstack to use 3 by default and require setting a flag to
   use 2. (This may trigger other job changes.)
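
As a point of reference for item 4, devstack's existing opt-in form looks
roughly like this in local.conf (the values here are only an example):

   [[local|localrc]]
   USE_PYTHON3=True
   PYTHON3_VERSION=3.5

Flipping the default would effectively invert that flag, so deployers would
instead set it explicitly to keep python 2.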

At that point, all of our deliverables will be produced using python
3, and we can be relatively confident that if we no longer had
access to python 2 we could still continue operating. We could also
start updating deployment tools to use either python 3 or 2, so
that users could actually deploy using the python 3 versions of
services.

Somewhere in that time frame our third-party CI systems will need
to ensure they have python 3 support as well.

After the "Python 3 first" phase is completed we should release
one series using the packages built with python 3. Perhaps Stein?
Or is that too ambitious?

Next, we will be ready to address the prerequisites for "Python 3
only," which will allow us to drop Python 2 support.

We need to wait to drop python 2 support as a community, rather
than going one project at a time, to avoid doubling the work of
downstream consumers such as distros and independent deployers. We
don't want them to have to package all (or even a large number) of
the dependencies of OpenStack twice because they have to install
some services running under python 2 and others under 3. Ideally
they would be able to upgrade all of the services on a node together
as part of their transition to the new version, without ending up
with a python 2 version of a dependency along side a python 3 version
of the same package.

The remaining items could be fixed earlier, but this is the point
at which they would block us:

1. Fix oslo.service functional tests -- the Oslo team needs help
   maintaining this library. Alternatively, we could move all
   services to use cotyledon (https://pypi.org/project/cotyledon/).

2. Finish the unit test and functional test ports so that all of
   our tests can run under python 3 (this implies that the services
   all run under python 3, so there is no more porting to do).

Finally, after we have *all* tests running on python 3, we can
safely drop python 2.

We have previously discussed the end of the T cycle as the point
at which we would have all of those tests running, and if that holds
true we could reasonably drop python 2 during the beginning of the
U cycle, in late 2019 and before the 2020 cut-off point when upstream
python 2 support will be dropped.

I need some info from the deployment tool teams to understand whether
they would be ready to take the plunge during T or U and start
deploying only the python 3 version. Are there other upgrade issues
that need to be addressed to support moving from 2 to 3? Something
that might be part of the platform(s), rather than OpenStack itself?

What else have I missed in these phases? Other jobs? Other blocking
conditions?

Doug




Re: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier?

2018-04-25 Thread Doug Hellmann
Excerpts from Adam Spiers's message of 2018-04-25 18:15:42 +0100:
> [BTW I hope it's not considered off-bounds for those of us who aren't
> TC election candidates to reply within these campaign question threads
> to responses from the candidates - but if so, let me know and I'll
> shut up ;-) ]

Everyone should feel free to participate!

Doug



Re: [openstack-dev] [requirements][horizon][neutron] plugins depending on services

2018-04-25 Thread Ivan Kolodyazhny
Hi team,

I'm speaking mostly from the Horizon side, but it should be pretty much the
same for the others.

We started a discussion today at the Horizon meeting, but we don't have a
decision yet.

For the current release cycle, it looks reasonable to test plugins against the
latest master on the gates. We're thinking of introducing horizon-lib, but we
need further discussion on it.

Horizon follows the stable policy, and we do our best not to break any existing
plugin. Unfortunately, due to some cross-project miscommunication, there were
some issues with plugins this cycle. I'm ready to work with the plugin teams to
fix these issues ASAP.

To prevent such issues in the future, I think it would be good to have
cross-project jobs on Horizon's gates too. We should at least run plugin
unit tests against Horizon's proposed patches to make sure that we don't break
anything in the plugins. E.g., if we drop a deprecated feature or make an
incompatible change, such a job would tell us that we need to update the plugin
before merging the patch to Horizon. I don't know the best way to implement
such jobs, so it would be good to get some input and help from the Infra team
here.
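
Something along these lines is what I have in mind (only a sketch; the job
name, parent and projects are made up for illustration, and the Infra team
will know the right way to wire it up):

  - job:
      name: horizon-cross-sahara-dashboard-unit
      parent: openstack-tox-py35
      description: |
        Run a plugin's unit tests against the Horizon change under review, so
        an incompatible Horizon change fails here instead of breaking the
        plugin after it merges.
      required-projects:
        - openstack/horizon
        - openstack/sahara-dashboard

  - project:
      # added to openstack/horizon's check pipeline
      check:
        jobs:
          - horizon-cross-sahara-dashboard-unit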



Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Wed, Apr 25, 2018 at 7:40 PM, Jeremy Stanley  wrote:

> On 2018-04-25 12:03:54 -0400 (-0400), Doug Hellmann wrote:
> [...]
> > Especially now with lower-constraints jobs in place, having plugins
> > rely on features only available in unreleased versions of service
> > projects doesn't make a lot of sense. We test that way *between*
> > services using integration tests that use the REST APIs, but we
> > also have some pretty strong stability requirements in place for
> > those APIs.
> [...]
>
> This came up again a few days ago for sahara-dashboard. We talked
> through some obvious alternatives to keep its master branch from
> depending on an unreleased state of horizon and the situation today
> is that plugin developers have been relying on developing their
> releases in parallel with the services. Not merging an entire
> development cycle's worth of work until release day (whether that's
> by way of a feature branch or by just continually rebasing and
> stacking in Gerrit) would be a very painful workflow for them, and
> having to wait a full release cycle before they could start
> integrating support for new features in the service would be equally
> unfortunate.
>
> As for merging the plugin and service repositories, they tend to be
> developed by completely disparate teams so that could require a fair
> amount of political work to solve. Extracting the plugin interface
> into a separate library which releases more frequently than the
> service does indeed sound like the sanest option, but will also
> probably take quite a while for some teams to achieve (I gather
> neutron-lib is getting there, but I haven't heard about any work
> toward that end in Horizon yet).
> --
> Jeremy Stanley
>


Re: [openstack-dev] [Release-job-failures][release][reno] Release of openstack/reno failed

2018-04-25 Thread Doug Hellmann
Excerpts from Doug Hellmann's message of 2018-04-25 15:06:37 -0400:
> Excerpts from zuul's message of 2018-04-25 18:04:07 +:
> > Build failed.
> > 
> > - release-openstack-python 
> > http://logs.openstack.org/25/25ef4b82d1d21f6e0ab442405eeb8b12e2024fb1/release/release-openstack-python/d9d8142/
> >  : SUCCESS in 4m 00s
> > - announce-release 
> > http://logs.openstack.org/25/25ef4b82d1d21f6e0ab442405eeb8b12e2024fb1/release/announce-release/cf78acd/
> >  : FAILURE in 2m 52s
> > - propose-update-constraints 
> > http://logs.openstack.org/25/25ef4b82d1d21f6e0ab442405eeb8b12e2024fb1/release/propose-update-constraints/5cb80ad/
> >  : SUCCESS in 2m 35s
> > 
> 
> I believe https://review.openstack.org/564317 addresses the failure in
> the announce script from the log above.
> 
> Doug

See
http://lists.openstack.org/pipermail/release-announce/2018-April/004980.html
for the output using that version of the script.

Doug



Re: [openstack-dev] [Release-job-failures][release][reno] Release of openstack/reno failed

2018-04-25 Thread Doug Hellmann
Excerpts from zuul's message of 2018-04-25 18:04:07 +:
> Build failed.
> 
> - release-openstack-python 
> http://logs.openstack.org/25/25ef4b82d1d21f6e0ab442405eeb8b12e2024fb1/release/release-openstack-python/d9d8142/
>  : SUCCESS in 4m 00s
> - announce-release 
> http://logs.openstack.org/25/25ef4b82d1d21f6e0ab442405eeb8b12e2024fb1/release/announce-release/cf78acd/
>  : FAILURE in 2m 52s
> - propose-update-constraints 
> http://logs.openstack.org/25/25ef4b82d1d21f6e0ab442405eeb8b12e2024fb1/release/propose-update-constraints/5cb80ad/
>  : SUCCESS in 2m 35s
> 

I believe https://review.openstack.org/564317 addresses the failure in
the announce script from the log above.

Doug



[openstack-dev] [neutron] Bugs deputy duty calendar

2018-04-25 Thread Miguel Lavalle
Dear Neutrinos,

I just rolled over the bugs deputy duty calendar. Please take a look and
take note of your next duty week:
https://wiki.openstack.org/wiki/Network/Meetings#Bug_deputy

Best regards

Miguel


Re: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier?

2018-04-25 Thread Rico Lin
2018-04-25 22:13 GMT+08:00 Jeremy Stanley :
>
> On 2018-04-25 14:12:00 +0800 (+0800), Rico Lin wrote:
> [...]
> > I believe to combine API services into one service will be able to
> > scale much easier. As we already starting from providing multiple
> > services and binding with Apache(Also concern about Zane's
> > comment), we can start this goal by saying providing unified API
> > service architecture (or start with new oslo api service). If we
> > reduce the difference between implementation from API service in
> > each OpenStack services first, maybe will make it easier to manage
> > or upgrade (since we unfied the package requirements) and even
> > possible to accelerate APIs.
> [...]
>
> How do you see this as being either similar to or different from the
> https://git.openstack.org/cgit/openstack/oaktree/tree/README.rst
> effort which is currently underway?

I think it's different from oaktree, since oaktree is an upper layer that
depends on the API services (allowing shade to connect to them), whereas what
I'm suggesting is to unify all of the API servers. An example would be what
Tempest does for tests: Tempest provides the commands and tools to help you
generate and run test cases, and each service only has to provide a plugin. So
once the first step (unification) is complete, we can focus on enhancing the
API service for everyone, and the nice part is that we only need to do it in a
single place for all projects. Think about what happens when Tempest improves
test performance (it just does it and checks that the gate stays green).
Also, Kevin's idea is to have one API service replace all of the API services,
which IIUC would be an API server that uses RPC directly to reach the backend
of each OpenStack service. So it's different from that too.

> --
> Jeremy Stanley
>



--
May The Force of OpenStack Be With You,
Rico Lin
irc: ricolin


Re: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier?

2018-04-25 Thread Jeremy Stanley
On 2018-04-25 18:15:42 +0100 (+0100), Adam Spiers wrote:
> [BTW I hope it's not considered off-bounds for those of us who aren't
> TC election candidates to reply within these campaign question threads
> to responses from the candidates - but if so, let me know and I'll
> shut up ;-) ]
[...]

Not only are responses from everyone in the community welcome (and
like many, I think we should be asking questions like this often
outside the context of election campaigning), but I wholeheartedly
agree with your stance on this topic and also strongly encourage you
to consider running for a seat on the TC in the future if you can
swing it.
-- 
Jeremy Stanley




Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-04-25 Thread Jeremy Stanley
On 2018-04-25 13:10:42 -0400 (-0400), Doug Hellmann wrote:
[...]
> Transitioning jobs is a bit painful because the job definitions and
> job templates are defined in separate places. If we don't want
> this setting to be controlled from a file within the git repo, I
> guess the most expedient thing is to go ahead and make this part
> of the python 3 job transition.

We could provide an experimental job or some basic instructions in
an announcement and just schedule a flag day to start enforcing
everywhere.
-- 
Jeremy Stanley




Re: [openstack-dev] [Heat][TripleO] - Getting attributes of openstack resources not created by the stack for TripleO NetworkConfig.

2018-04-25 Thread Harald Jensås
On Tue, 2018-04-24 at 16:12 -0400, Zane Bitter wrote:
> On 19/04/18 08:59, Harald Jensås wrote:
> > The problem is getting there using heat ...
> 
> The real answer is to make everything explicit - create a Subnet 
> resource and a Port resource and don't allow Neutron/Nova to make
> any 
> decisions for you that would have the effect of hiding data that you 
> need. However, since that's impractical in this particular case...
> 
Yeah, I wish the ctlplane network in tripleo was defined in THT. But since it's
created by the undercloud installer, we are where we are. Moving it is
impractical for the same reasons that migrating from server resources with
implicit ports is ...

Another non-TripleO use case is connecting an instance to a provider network;
in this case the network and subnet resources are beyond the user's control.
(An external resource, probably, but there seem to be the issues Zane mentions
below.)

> > a couple of ideas:
> > 
> > a) Use heat's ``external_resource`` to create a port resource,
> > and then  a external subnet resource. Then get the data
> > from the external resources. We probably would have to make
> > it possible for a ``external_resource`` depend on the server
> > resource, and verify that these resource have the required
> > attributes.
> 
> Yeah, I don't know why we don't allow depends_on for resources with 
> external_id. (There's also a bug where we don't recognise
> dependencies 
> contributed by any functions used in the external_id field, like 
> get_resource or get_attr, even though we allow those functions.) 
> Apparently somebody had a brain explosion at a design summit session 
> that nobody remembers attending, and here we are :D
> 
> The difficulty is that the fix should be tied to a template version,
> but 
> the offending check is in the template-independent part of the code
> base.
> 
> Nevertheless, a workaround is trivial:
> 
>    ext_port:
>      type: OS::Neutron::Port
>      external_id: {get_attr: [<server>, addresses, <network>, 0, port]}
>      metadata:
>        do_something_to_add_a_dependency: {get_resource: <other resource>}
> 
> > b) Extend attributes of OS::Nova::Server (OS::Neutron::Port as
> > well probably) to include the data.
> > 
> > If we do this we should probably aim to be in parity with
> > what is made available to clients getting the configuration
> > from dhcp. (mtu, dns_domain, dns_servers, prefixlen,
> > gateway_ip, host_routes, ipv6_address_mode, ipv6_ra_mode
> > etc.)
> 
> This makes sense to me. If we're allowing people to let Nova/Neutron 
> make implicit choices for them then we also need to allow them to
> see 
> the result.
> 
I like this idea best as well. I will open an rfe against Heat.

> > c) Create a new heat function to read properties of any
> > openstack resource, without having to make use of the
> > external_resource in heat.
> 
> I'm pretty -1 on this, because I think you want to have the same
> caching 
> behaviour as a resource, not a function. At that point you're just 
> implementing syntactic sugar that makes things _less_ consistent, not
> to 
> mention the enormous implementation hacks required.
> 
> cheers,
> Zane.
> 


Re: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier?

2018-04-25 Thread Adam Spiers

[BTW I hope it's not considered off-bounds for those of us who aren't
TC election candidates to reply within these campaign question threads
to responses from the candidates - but if so, let me know and I'll
shut up ;-) ]

Zhipeng Huang  wrote:

Culture wise, being too IRC-centric is definitely not helping, from my own
experience getting new Cyborg developers from China to join our weekly meeting.
We could always argue that it is part of an open source/hacker culture and
preferable to commercial solutions that carry the constant risk of suddenly
being shut down someday. But as OpenStack becomes more commercialized and
widely adopted, we should be aware that more and more (potential) contributors
will come from groups who are used to non-strictly-open-source environments,
such as product development teams that rely on a lot of "closed source" but
easy-to-use software.

The change? Use more video conferencing, and more of the commercial tools that
are preferred in certain regions. Stop being allergic to non-open-source
software and bring more capable, but less hacker-culture-inclined, contributors
into the community.


I respectfully disagree :-)


I know this is not a super welcome stance in the open source hacker culture.
But if we want OpenStack to be able to sustain more developers and not have a
mid-life crisis and then get pushed to the fringes, we need to start changing
the hacker mindset.


I think that "the hacker mindset" is too ambiguous and generalized a
concept to be useful in framing justification for change.  From where
I'm standing, the hacker mindset is a wonderful and valuable thing
which should be preserved.

However, if that conflicts with other goals of our community, such as
reducing barrier to entry, then yes that is a valid concern.  In that
case we should examine in more detail the specific aspects of hacker
culture which are discouraging potential new contributors, and try to
fix those, rather than jumping to the assumption that we should
instead switch to commercial tools.  Given the community's "Four
Opens" philosophy and strong belief in the power of Open Source, it
would be inconsistent to abandon our preference for Open Source tools.

For example, proprietary tools such as Slack are not popular because
they are proprietary; they are popular because they have a very
intuitive interface and convenient features which people enjoy.  So
when examining the specific question "What can we do to make it easier
for OpenStack newbies to communicate with the existing community over
a public instant messaging system?", the first question should not be
"Should we switch to a proprietary tool?", but rather "Is there an
open source tool which provides enough of the functionality we need?"

And in fact in the case of instant messaging, I believe the answer is
yes, as I previously pointed out:

   http://lists.openstack.org/pipermail/openstack-sigs/2018-March/000332.html

Similarly, there are plenty of great Open Source solutions for voice
and video communications.

I'm all for changing with the times and adapting workflows to harness
the benefits of more modern tools, but I think it's wrong to
automatically assume that this can only be achieved via proprietary
solutions.



Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-04-25 Thread Doug Hellmann
Excerpts from Sean McGinnis's message of 2018-04-25 11:55:43 -0500:
> > > 
> > > [1] https://review.openstack.org/#/c/564232/
> > > 
> > 
> > The only concern I have is that it may slow the transition to the
> > python 3 version of the jobs, since someone would have to actually
> > fix the warnings before they could add the new job. I'm not sure I
> > want to couple the tasks of fixing doc build warnings with also
> > making those docs build under python 3 (which is usually quite
> > simple).
> > 
> > Is there some other way to enable this flag independently of the move to
> > the python3 job?
> > 
> > Doug
> > 
> 
> I did consider just creating a whole new job definition. I could probably do
> that instead, but my hope was that those proactive enough to be moving their
> jobs to python 3 would also be proactive enough to have already addressed
> doc build warnings.
> 
> We could do two separate jobs, then when everyone is ready, collapse it back 
> to
> one job. I was hoping to jump ahead a little though.
> 

Transitioning jobs is a bit painful because the job definitions and
job templates are defined in separate places. If we don't want
this setting to be controlled from a file within the git repo, I
guess the most expedient thing is to go ahead and make this part
of the python 3 job transition.

Doug



Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-04-25 Thread Sean McGinnis
> >>
> >>[1] https://review.openstack.org/#/c/564232/
> >>
> >
> >The only concern I have is that it may slow the transition to the
> >python 3 version of the jobs, since someone would have to actually
> >fix the warnings before they could add the new job. I'm not sure I
> >want to couple the tasks of fixing doc build warnings with also
> >making those docs build under python 3 (which is usually quite
> >simple).
> >
> >Is there some other way to enable this flag independently of the move to
> >the python3 job?
> 
> The existing proposal is:
> 
> https://review.openstack.org/559348
> 
> TL;DR if you still have a build_sphinx section in setup.cfg then defaults
> will remain the same, but when removing it as part of the transition to the
> new PTI you'll have to eliminate any warnings. (Although AFAICT it doesn't
> hurt to leave that section in place as long as you need, and you can still
> do the rest of the PTI conversion.)
> 
> The hold-up is that the job in question is also potentially used by other
> Zuul users outside of OpenStack - including those who aren't using pbr at
> all (i.e. there's no setup.cfg). So we need to warn those folks to prepare.
> 
> cheers,
> Zane.
> 

Ah, I had looked but did not find an existing proposal. Looks like that would
work too. I am good either way, but I will leave my approach out there just as
another option to consider. I'll abandon that if folks prefer this way.



Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-04-25 Thread Zane Bitter

On 25/04/18 11:41, Doug Hellmann wrote:

Excerpts from Sean McGinnis's message of 2018-04-25 09:59:13 -0500:


I'd be more in favour of changing the zuul job to build with the '-W'
flag. To be honest, there is no good reason to not have this flag
enabled. I'm not sure that will be a popular opinion though as it may
break some projects' builds (correctly, but still).

I'll propose a patch against zuul-jobs and see what happens :)

Stephen



I am in favor of this too. We will probably need to give some teams some time
to get warnings fixed though. I haven't done any kind of extensive audit of
projects, but from a few I looked through, there are definitely a few that are
not erroring on warnings and are likely to be blocked if we suddenly flipped
the switch and errored on those.

This is a legitimate issue though. In Cinder we had -W in the tox docs job, but
since that is no longer being enforced in the gate, running "tox -e docs" from
a fresh clone of master was failing. We really do need some way to enforce this
so things like that do not happen.


This. While forcing teams to do busywork is undeniably A Very
Bad Thing (TM), I do think the longer we leave this, the worse it'll
get. The zuul-jobs [1] patch will probably introduce some pain for
projects but it seems like inevitable pain and we're in the right part
of the cycle in which to do something like this. I'd be willing to help
projects fix issues they encounter, which I expect will be minimal for
most projects.


I too think enforcing -W is the way to go, so count me in for the
broken docs build help.

Thanks for pushing this forward!

Cheers,
pk



In support of this I have proposed [1]. To make it easier to transition (since
I'm pretty sure this will involve a lot of work by some projects) and since we
want to eventually have everything run under Python 3, I have just proposed
setting this flag as the default for the publish-openstack-sphinx-docs-python3
job template. Then projects can opt in as they are ready for both the
warnings-as-errors and Python 3 support.

I would love to hear if there are any concerns about doing things this way or
if anyone has any better suggestions.

Thanks!
Sean

[1] https://review.openstack.org/#/c/564232/



The only concern I have is that it may slow the transition to the
python 3 version of the jobs, since someone would have to actually
fix the warnings before they could add the new job. I'm not sure I
want to couple the tasks of fixing doc build warnings with also
making those docs build under python 3 (which is usually quite
simple).

Is there some other way to enable this flag independently of the move to
the python3 job?


The existing proposal is:

https://review.openstack.org/559348

TL;DR if you still have a build_sphinx section in setup.cfg then 
defaults will remain the same, but when removing it as part of the 
transition to the new PTI you'll have to eliminate any warnings. 
(Although AFAICT it doesn't hurt to leave that section in place as long 
as you need, and you can still do the rest of the PTI conversion.)
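
For anyone not familiar with it, that stanza looks roughly like the sketch
below (the exact option names come from pbr's build_sphinx support and may
vary per project):

   [build_sphinx]
   source-dir = doc/source
   build-dir = doc/build
   all-files = 1
   # the pbr option that makes 'setup.py build_sphinx' treat warnings as errors
   warning-is-error = 1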


The hold-up is that the job in question is also potentially used by 
other Zuul users outside of OpenStack - including those who aren't using 
pbr at all (i.e. there's no setup.cfg). So we need to warn those folks 
to prepare.


cheers,
Zane.



Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-04-25 Thread Sean McGinnis
> > 
> > [1] https://review.openstack.org/#/c/564232/
> > 
> 
> The only concern I have is that it may slow the transition to the
> python 3 version of the jobs, since someone would have to actually
> fix the warnings before they could add the new job. I'm not sure I
> want to couple the tasks of fixing doc build warnings with also
> making those docs build under python 3 (which is usually quite
> simple).
> 
> Is there some other way to enable this flag independently of the move to
> the python3 job?
> 
> Doug
> 

I did consider just creating a whole new job definition. I could probably do
that instead, but my hope was that those proactive enough to be moving their
jobs to python 3 would also be proactive enough to have already addressed doc
build warnings.

We could do two separate jobs, then when everyone is ready, collapse it back to
one job. I was hoping to jump ahead a little though.



Re: [openstack-dev] [requirements][horizon][neutron] plugins depending on services

2018-04-25 Thread Jeremy Stanley
On 2018-04-25 12:03:54 -0400 (-0400), Doug Hellmann wrote:
[...]
> Especially now with lower-constraints jobs in place, having plugins
> rely on features only available in unreleased versions of service
> projects doesn't make a lot of sense. We test that way *between*
> services using integration tests that use the REST APIs, but we
> also have some pretty strong stability requirements in place for
> those APIs.
[...]

This came up again a few days ago for sahara-dashboard. We talked
through some obvious alternatives to keep its master branch from
depending on an unreleased state of horizon and the situation today
is that plugin developers have been relying on developing their
releases in parallel with the services. Not merging an entire
development cycle's worth of work until release day (whether that's
by way of a feature branch or by just continually rebasing and
stacking in Gerrit) would be a very painful workflow for them, and
having to wait a full release cycle before they could start
integrating support for new features in the service would be equally
unfortunate.

As for merging the plugin and service repositories, they tend to be
developed by completely disparate teams so that could require a fair
amount of political work to solve. Extracting the plugin interface
into a separate library which releases more frequently than the
service does indeed sound like the sanest option, but will also
probably take quite a while for some teams to achieve (I gather
neutron-lib is getting there, but I haven't heard about any work
toward that end in Horizon yet).
-- 
Jeremy Stanley




Re: [openstack-dev] [mistral] September PTG in Denver

2018-04-25 Thread Kendall Nelson
Hey Sean :)

The reason we picked the May 2nd date was so that people would know whether
they needed to register before the early bird pricing closes. If groups feel
like they need more time to decide, that's fine. It would still be helpful if
those needing more time could fill out the survey with the 'Maybe, Still
Deciding' answer so I can circle back later for a hard 'Yes, Absolutely' or a
'No, Certainly Not' :)

-Kendall (diablo_rojo)

On Mon, Apr 23, 2018 at 12:58 PM Sean McGinnis 
wrote:

> On Mon, Apr 23, 2018 at 07:32:40PM +, Kendall Nelson wrote:
> > Hey Dougal,
> >
> > I think I had said May 2nd in my initial email asking about attendance.
> If
> > you can get an answer out of your team by then I would greatly appreciate
> > it! If you need more time please let me know by then (May 2nd) instead.
> >
> > -Kendall (diablo_rojo)
> >
>
> Do we need to collect this data for September already by the beginning of
> May?
>
> Granted, the sooner we know details and can start planning, the better.
> But as
> I started looking over the survey, it just seems really early to predict
> where
> things will be 5 months from now. Especially considering we will have a
> different set of PTLs for many projects by then, and it is too early for
> some
> of those hand off discussions to have started yet.
>
> Sean
>


Re: [openstack-dev] [openstack-infra] How to take over a project?

2018-04-25 Thread Miguel Lavalle
Hi,

Ihar raises a valid issue. In the spirit of preventing this request from
falling through the cracks, I reached out to Clark Boylan in the infra IRC
channel (
http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-04-25.log.html#t2018-04-25T15:46:41).
We decided to contact Vikram directly at the email address he has
registered in Gerrit. I just sent him a message, copying Clark and Sangho.
If he responds, Sangho can coordinate with him. If he doesn't after a week
or two, then we can switch to Sangho, who is a member of the ONOS core team.

Regards

Miguel

On Wed, Apr 25, 2018 at 9:46 AM, Ihar Hrachyshka 
wrote:

> ONOS is not part of Neutron and hence Neutron Release team should not
> be involved in its matters. If gerrit ACLs say otherwise, you should
> fix the ACLs.
>
> Ihar
>
> On Tue, Apr 24, 2018 at 1:22 AM, Sangho Shin 
> wrote:
> > Dear Neutron-Release team members,
> >
> > Can any of you handle the issue below?
> >
> > Thank you so much for your help in advance.
> >
> > Sangho
> >
> >
> >> On 20 Apr 2018, at 10:01 AM, Sangho Shin 
> wrote:
> >>
> >> Dear Neutron-Release team,
> >>
> >> I wonder if any of you can add me to the network-onos-release member.
> >> It seems that Vikram is busy. :-)
> >>
> >> Thank you,
> >>
> >> Sangho
> >>
> >>
> >>
> >>> On 19 Apr 2018, at 9:18 AM, Sangho Shin 
> wrote:
> >>>
> >>> Ian,
> >>>
> >>> Thank you so much for your help.
> >>> I have requested Vikram to add me to the release team.
> >>> He should be able to help me. :-)
> >>>
> >>> Sangho
> >>>
> >>>
>  On 19 Apr 2018, at 8:36 AM, Ian Wienand  wrote:
> 
>  On 04/19/2018 01:19 AM, Ian Y. Choi wrote:
> > By the way, since the networking-onos-release group has no neutron
> > release team group, I think infra team can help to include neutron
> > release team and neutron release team can help to create branches
> > for the repo if there is no response from current
> > networking-onos-release group member.
> 
>  This seems sane and I've added neutron-release to
>  networking-onos-release.
> 
>  I'm hesitant to give advice on branching within a project like neutron
>  as I'm sure there's stuff I'm not aware of; but members of the
>  neutron-release team should be able to get you going.
> 
>  Thanks,
> 
>  -i
> >>>
> >>
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Help needed in debugging issue - ClientException: Unable to update the attachment. (HTTP 500)

2018-04-25 Thread Sreeram Vancheeswaran



On 25/04/18 9:09 PM, Matt Riedemann wrote:

On 4/25/2018 10:34 AM, Sreeram Vancheeswaran wrote:
Thank you so much Matt for the detailed steps.  We are doing boot 
from image and are probably running into the issue mentioned in [2] 
in your email.


Hmm, OK, but that doesn't really make sense how you're going down this 
path [1] in the code because the API doesn't create a volume 
attachment record when booting from a volume where the 
source_type='image', so it should be going down the "legacy" attach 
flow where attachment_update is not called.


Do you have some proprietary code in place that might be causing some 
problems? Otherwise we need to figure out how this is failing because 
it could be an issue in Queens.


[1] 
https://github.com/openstack/nova/blob/0a642e2eee8d430ddcccf2947aedcca1a0a0b005/nova/virt/block_device.py#L597


Yes, we have some proprietary code to copy the image onto the volume (as a
prototype for now), and that is probably causing the issue here; I will
debug/trace using the info you previously provided and figure out the 
root cause.  I will get back to you if there is some issue in the 
"in-tree" code.   Again, thank you so much for providing directions on 
where to continue debugging.
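
In case it helps anyone chasing something similar: on a systemd-based
devstack, one way to follow a failing attachment call end to end is to
grep the cinder API and volume logs for the attachment ID that shows up
in the nova-compute error, roughly:

  sudo journalctl -u devstack@c-api -u devstack@c-vol | \
      grep 266ef7e1-4735-40f1-b704-509472f565cb

(unit names and IDs taken from our environment above; adjust as needed).
That should show the attachment_update call reaching cinder-api and the
point where cinder-volume raises VolumeAttachmentNotFound, which is where
I plan to keep digging.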



--
---
Sreeram Vancheeswaran
System z Firmware - Openstack Development
IBM Systems & Technology Lab, Bangalore, India
Phone:  +91 80 40660826 Mob: +91-9341411511
Email : sree...@linux.vnet.ibm.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Monthly bug day?

2018-04-25 Thread Julia Kreger
On Mon, Apr 23, 2018 at 12:04 PM, Michael Turek
 wrote:

> What does everyone think about having Bug Day the first Thursday of every
> month?

All for it!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][horizon][neutron] plugins depending on services

2018-04-25 Thread Doug Hellmann
Excerpts from Monty Taylor's message of 2018-04-25 16:40:47 +0200:
> Hi everybody,
> 
> We've been working on navigating through an interesting situation
> over the past few months, but there isn't a top-level overview of what's 
> going on with it. That's my bad - I've been telling AJaeger I was going 
> to send an email out for a while.
> 
> projects with test requirements on git repo urls of other projects
> --
> 
> There are a bunch of projects that need, for testing purposes, to depend 
> on other projects. The majority are either neutron or horizon plugins, 
> but conceptually there is nothing neutron or horizon specific about the 
> issue. The problem they're trying to deal with is that they are a plugin 
> to a service and they need to be able to import code from the service 
> they are a plugin to in their unit tests.
> 
> To make things even more complicated, some of the plugins actually 
> depend on each other for real, not just as a "we need this for testing".
> 
> There is trouble in paradise though - which is that we don't allow git 
> urls in requirements files. To work around this, the projects in 
> question added additional pip install lines to a tox_install.sh script - 
> essentially bypassing the global-requirements process and system 
> completely.
> 
> This went unnoticed in a general sense until we started working through 
> removing the use of zuul-cloner which is not needed any longer in Zuul v3.
> 
> unwinding things
> 
> 
> There are a few different options, but it's important to keep in mind 
> that we ultimately want all of the following:
> 
> * The code works
> * Tests can run properly in CI
> * "Depends-On" works in CI so that you can test changes cross-repo
> * Tests can run properly locally for developers
> * Deployment requirements are accurately communicated to deployers
> 
> The approach so far
> ---
> 
> The approach so far has been releasing service projects to PyPI and 
> reworking the projects to depend on those releases.
> 
> This approach takes advantage of the tox-siblings feature in the gate to 
> ensure we're cross-testing master of projects with each other.
> 
> tox-siblings
> ---
> 
> There is a feature in the Zuul tox jobs we refer to as "tox-siblings" 
> (this is because historically - wow we have historical context for zuul 
> v3 now - it was implemented as a separate role). What it does is ensure 
> that if you are running a tox job and you add additional projects to 
> required-projects in the job config, that the git versions of those 
> projects will be installed into the tox virtualenv - but only for 
> projects that would have been installed by tox otherwise. This way 
> required-projects is both safe to use and has the effect you'd expect.
> 
> tox-siblings is intended to enable ADDITIONALLY cross-testing projects 
> that otherwise have a normal dependency relationship in the gate. People 
> have been adding jobs like cross-something-something or something-tips 
> in an ad-hoc manner for a while - and in many cases the git parts of 
> that were actually somewhat not correct - so this is an attempt to 
> provide the thing people want in those scenarios in a consistent manner. 
> But it always should be helper logic for more complex gate jobs, not as 
> a de-facto part of a project's basic install.
> 
> Current Approach is wrong
> 
> 
> Unfortunately, as part of trying to unwind the plugins situation, we've 
> walked ourselves into a situation where the gate is the only thing that 
> has the correct installation information for some projects, and that's 
> not good.
> 
>  From a networking plugin approach the "depend on release and use 
> tox-siblings" assumes that 'depend on release of neutron' is or can be 
> the common case with the ability to add a second tox job to check master 
> against master.
> 
> If that's not a real thing, then depending on releases + tox_siblings in 
> the gate is solving the wrong problem.
> 
> Specific Suggestions
> 
> 
> As there are a few different scenarios, I want to suggest we do a few 
> different things.
> 
> * Prefer interface libraries on PyPI that projects depend on
> 
> Like python-openstackclient and osc-lib, this is the *best* approach
> for projects with plugins. Such interface libraries need to be able to 
> do intermediate releases - and those intermediate releases need to not 
> break the released version of the projects. This is the hardest and 
> longest thing to do as well, so it's most likely to be a multi-cycle effort.
> 
> * Treat inter-plugin depends as normal library depends
> 
> If networking-bgpvpn depends on networking-bagpipe and networking-odl, 
> then networking-bagpipe and networking-odl need to be released to PyPI 
> just like any other library in OpenStack. These are real runtime 
> dependencies.
> 
> Yes, this is more coordination work, 

Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-25 Thread Ben Nemec



On 04/25/2018 10:28 AM, James Slagle wrote:

On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur  wrote:

On 04/25/2018 04:26 PM, James Slagle wrote:


On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur 
wrote:


Hi all,

I'd like to restart conversation on enabling node automated cleaning by
default for the undercloud. This process wipes partitioning tables
(optionally, all the data) from overcloud nodes each time they move to
"available" state (i.e. on initial enrolling and after each tear down).

We have had it disabled for a few reasons:
- it was not possible to skip time-consuming wiping of data from disks
- the way our workflows used to work required going between manageable
and
available steps several times

However, having cleaning disabled has several issues:
- a configdrive left from a previous deployment may confuse cloud-init
- a bootable partition left from a previous deployment may take
precedence
in some BIOS
- a UEFI boot partition left from a previous deployment is likely to
confuse UEFI firmware
- apparently ceph does not work correctly without cleaning (I'll defer to
the storage team to comment)

For these reasons we don't recommend having cleaning disabled, and I
propose
to re-enable it.

It has the following drawbacks:
- The default workflow will require another node boot, thus becoming
several
minutes longer (incl. the CI)
- It will no longer be possible to easily restore a deleted overcloud
node.



I'm trending towards -1, for these exact reasons you list as
drawbacks. There has been no shortage of occurrences of users who have
ended up with accidentally deleted overclouds. These are usually
caused by user error or unintended/unpredictable Heat operations.
Until we have a way to guarantee that Heat will never delete a node,
or Heat is entirely out of the picture for Ironic provisioning, then
I'd prefer that we didn't enable automated cleaning by default.

I believe we had done something with policy.json at one time to
prevent node delete, but I don't recall if that protected from both
user initiated actions and Heat actions. And even that was not enabled
by default.

IMO, we need to keep "safe" defaults. Even if it means manually
documenting that you should clean to prevent the issues you point out
above. The alternative is to have no way to recover deleted nodes by
default.



Well, it's not clear what is "safe" here: protect people who explicitly
delete their stacks or protect people who don't realize that a previous
deployment may screw up their new one in a subtle way.


The latter you can recover from, the former you can't if automated
cleaning is true.

It's not just about people who explicitly delete their stacks (whether
intentional or not). There could be user error (non-explicit) or
side-effects triggered by Heat that could cause nodes to get deleted.

You couldn't recover from those scenarios if automated cleaning were
true. Whereas you could always fix a deployment error by opting in to
do an automated clean. Does Ironic keep track of whether a node has been
previously cleaned? Could we add a validation to check whether any
nodes might be used in the deployment that were not previously
cleaned?


Is there a way to only do cleaning right before a node is deployed?  If 
you're about to write a new image to the disk then any data there is 
forfeit anyway.  Since the concern is old data on the disk messing up 
subsequent deploys, it doesn't really matter whether you clean it right 
after it's deleted or right before it's deployed, but the latter leaves 
the data intact for longer in case a mistake was made.


If that's not possible then consider this an RFE. :-)

-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-04-25 Thread Doug Hellmann
Excerpts from Sean McGinnis's message of 2018-04-25 09:59:13 -0500:
> > > > > 
> > > > > I'd be more in favour of changing the zuul job to build with the '-W'
> > > > > flag. To be honest, there is no good reason to not have this flag
> > > > > enabled. I'm not sure that will be a popular opinion though as it may
> > > > > break some projects' builds (correctly, but still).
> > > > > 
> > > > > I'll propose a patch against zuul-jobs and see what happens :)
> > > > > 
> > > > > Stephen
> > > > > 
> > > > 
> > > > I am in favor of this too. We will probably need to give some teams 
> > > > some time
> > > > to get warnings fixed though. I haven't done any kind of extensive 
> > > > audit of
> > > > projects, but from a few I looked through, there are definitely a few 
> > > > that are
> > > > not erroring on warnings and are likely to be blocked if we suddenly 
> > > > flipped
> > > > the switch and errored on those.
> > > >
> > > > This is a legitimate issue though. In Cinder we had -W in the tox docs 
> > > > job, but
> > > > since that is no longer being enforced in the gate, running "tox -e 
> > > > docs" from
> > > > a fresh clone of master was failing. We really do need some way to 
> > > > enforce this
> > > > so things like that do not happen.
> > > 
> > > This. While forcing work on teams to do busywork is undeniably A Very
> > > Bad Thing (TM), I do think the longer we leave this, the worse it'll
> > > get. The zuul-jobs [1] patch will probably introduce some pain for
> > > projects but it seems like inevitable pain and we're in the right part
> > > of the cycle in which to do something like this. I'd be willing to help
> > > projects fix issues they encounter, which I expect will be minimal for
> > > most projects.
> > 
> > I too think enforcing -W is the way to go, so count me in for the
> > broken docs build help.
> > 
> > Thanks for pushing this forward!
> > 
> > Cheers,
> > pk
> > 
> 
> In support of this I have proposed [1]. To make it easier to transition (since
> I'm pretty sure this will involve a lot of work by some projects) and since we
> want to eventually have everything run under Python 3, I have just proposed
> setting this flag as the default for the publish-openstack-sphinx-docs-python3
> job template. Then projects can opt in as they are ready for both the
> warnings-as-errors and Python 3 support.
> 
> I would love to hear if there are any concerns about doing things this way or
> if anyone has any better suggestions.
> 
> Thanks!
> Sean
> 
> [1] https://review.openstack.org/#/c/564232/
> 

The only concern I have is that it may slow the transition to the
python 3 version of the jobs, since someone would have to actually
fix the warnings before they could add the new job. I'm not sure I
want to couple the tasks of fixing doc build warnings with also
making those docs build under python 3 (which is usually quite
simple).

Is there some other way to enable this flag independently of the move to
the python3 job?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Help needed in debugging issue - ClientException: Unable to update the attachment. (HTTP 500)

2018-04-25 Thread Matt Riedemann

On 4/25/2018 10:34 AM, Sreeram Vancheeswaran wrote:
Thank you so much Matt for the detailed steps.  We are doing boot from 
image and are probably running into the issue mentioned in [2] in your 
email.


Hmm, OK, but that doesn't really make sense how you're going down this 
path [1] in the code because the API doesn't create a volume attachment 
record when booting from a volume where the source_type='image', so it 
should be going down the "legacy" attach flow where attachment_update is 
not called.


Do you have some proprietary code in place that might be causing some 
problems? Otherwise we need to figure out how this is failing because it 
could be an issue in Queens.


[1] 
https://github.com/openstack/nova/blob/0a642e2eee8d430ddcccf2947aedcca1a0a0b005/nova/virt/block_device.py#L597


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Help needed in debugging issue - ClientException: Unable to update the attachment. (HTTP 500)

2018-04-25 Thread Sreeram Vancheeswaran



On 25/04/18 7:40 PM, Matt Riedemann wrote:

On 4/25/2018 3:32 AM, Sreeram Vancheeswaran wrote:

Hi team!

We are currently facing  an issue in our out-of-tree driver nova-dpm 
[1] with nova and cinder on master, where instance launch in devstack 
is failing due to communication/time-out issues in nova-cinder.   We 
are unable to get to the root cause of the issue and we need your 
help on getting some hints/directions to debug this issue further.


--> From nova-compute service: BuildAbortException: Build of instance 
aborted: Unable to update the attachment. (HTTP 500) from the 
nova-compute server (detailed logs here [2]).


--> From cinder-volume service: ERROR oslo_messaging.rpc.server 
VolumeAttachmentNotFound: Volume attachment could not be found with 
filter: attachment_id = 266ef7e1-4735-40f1-b704-509472f565cb.
Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR 
oslo_messaging.rpc.server  (detailed logs here [3])


Debugging steps done so far:-

  * Compared the package versions between the current devstack under
test with the **last succeeding job in our CI system** (to be exact,
it was for the patches https://review.openstack.org/#/c/458514/ and
https://review.openstack.org/#/c/458820/); However the package
versions for packages such as sqlalchemy, os-brick, oslo* are
exactly the same in both the systems.
  * We used git bisect to revert nova and cinder projects to versions
equal to or before the date of our last succeeding CI run; but still
we were able to reproduce the same error.
  * Our guess is that the db "Save" operation during the update of
volume attachment is failing.  But we are unable to trace/debug to
that point in the rpc call;  Any suggestions on how to debug this
scenario would be really helpful.
  * We are running devstack master on Ubuntu 16.04.04


References

[1] https://github.com/openstack/nova-dpm


[2] Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR 
nova.volume.cinder [None req-751d4586-cd97-4a8f-9423-f2bc4b1f1269 
service nova] Update attachment failed for attachment 
266ef7e1-4735-40f1-b704-509472f565cb. Error: Unable to update the 
attachment. (HTTP 500) (Request-ID: 
req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce) Code: 500: ClientException: 
Unable to update the attachment. (HTTP 500) (Request-ID: 
req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[None req-751d4586-cd97-4a8f-9423-f2bc4b1f1269 service nova] 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] Instance failed 
block device setup: ClientException: Unable to update the attachment. 
(HTTP 500) (Request-ID: req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] Traceback (most 
recent call last):
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1577, in 
_prep_block_device
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] 
wait_func=self._await_block_device_map_created)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 828, in 
attach_block_devices
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] _log_and_attach(device)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 825, in 
_log_and_attach
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] 
bdm.attach(*attach_args, **attach_kwargs)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 46, in wrapped
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] ret_val = 
method(obj, context, *args, **kwargs)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 618, in attach
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] virt_driver, 
do_driver_attach)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", 
line 274, in inner
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]

Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-25 Thread James Slagle
On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur  wrote:
> On 04/25/2018 04:26 PM, James Slagle wrote:
>>
>> On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur 
>> wrote:
>>>
>>> Hi all,
>>>
>>> I'd like to restart conversation on enabling node automated cleaning by
>>> default for the undercloud. This process wipes partitioning tables
>>> (optionally, all the data) from overcloud nodes each time they move to
>>> "available" state (i.e. on initial enrolling and after each tear down).
>>>
>>> We have had it disabled for a few reasons:
>>> - it was not possible to skip time-consuming wiping of data from disks
>>> - the way our workflows used to work required going between manageable
>>> and
>>> available steps several times
>>>
>>> However, having cleaning disabled has several issues:
>>> - a configdrive left from a previous deployment may confuse cloud-init
>>> - a bootable partition left from a previous deployment may take
>>> precedence
>>> in some BIOS
>>> - a UEFI boot partition left from a previous deployment is likely to
>>> confuse UEFI firmware
>>> - apparently ceph does not work correctly without cleaning (I'll defer to
>>> the storage team to comment)
>>>
>>> For these reasons we don't recommend having cleaning disabled, and I
>>> propose
>>> to re-enable it.
>>>
>>> It has the following drawbacks:
>>> - The default workflow will require another node boot, thus becoming
>>> several
>>> minutes longer (incl. the CI)
>>> - It will no longer be possible to easily restore a deleted overcloud
>>> node.
>>
>>
>> I'm trending towards -1, for these exact reasons you list as
>> drawbacks. There has been no shortage of occurrences of users who have
>> ended up with accidentally deleted overclouds. These are usually
>> caused by user error or unintended/unpredictable Heat operations.
>> Until we have a way to guarantee that Heat will never delete a node,
>> or Heat is entirely out of the picture for Ironic provisioning, then
>> I'd prefer that we didn't enable automated cleaning by default.
>>
>> I believe we had done something with policy.json at one time to
>> prevent node delete, but I don't recall if that protected from both
>> user initiated actions and Heat actions. And even that was not enabled
>> by default.
>>
>> IMO, we need to keep "safe" defaults. Even if it means manually
>> documenting that you should clean to prevent the issues you point out
>> above. The alternative is to have no way to recover deleted nodes by
>> default.
>
>
> Well, it's not clear what is "safe" here: protect people who explicitly
> delete their stacks or protect people who don't realize that a previous
> deployment may screw up their new one in a subtle way.

The latter you can recover from, the former you can't if automated
cleaning is true.

It's not just about people who explicitly delete their stacks (whether
intentional or not). There could be user error (non-explicit) or
side-effects triggered by Heat that could cause nodes to get deleted.

You couldn't recover from those scenarios if automated cleaning were
true. Whereas you could always fix a deployment error by opting in to
do an automated clean. Does Ironic keep track of whether a node has been
previously cleaned? Could we add a validation to check whether any
nodes might be used in the deployment that were not previously
cleaned?



-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [masakari] Masakari Project Meeting time

2018-04-25 Thread Kwan, Louie
Sampath, Dinesh and others,

It was a good meeting last week.

As briefly discussed with Sampath, I would like to check whether we can adjust 
the meeting time. 

We are at EST time zone, the meeting is right on our midnight time, 12:00 am.

It would be nice if the meeting could start ~2 hours earlier, e.g. could it be
started at 02:00 UTC instead?

Thanks.
Louie



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Following the new PTI for document build, broken local builds

2018-04-25 Thread Sean McGinnis
> > > > 
> > > > I'd be more in favour of changing the zuul job to build with the '-W'
> > > > flag. To be honest, there is no good reason to not have this flag
> > > > enabled. I'm not sure that will be a popular opinion though as it may
> > > > break some projects' builds (correctly, but still).
> > > > 
> > > > I'll propose a patch against zuul-jobs and see what happens :)
> > > > 
> > > > Stephen
> > > > 
> > > 
> > > I am in favor of this too. We will probably need to give some teams some 
> > > time
> > > to get warnings fixed though. I haven't done any kind of extensive audit 
> > > of
> > > projects, but from a few I looked through, there are definitely a few 
> > > that are
> > > not erroring on warnings and are likely to be blocked if we suddenly 
> > > flipped
> > > the switch and errored on those.
> > >
> > > This is a legitimate issue though. In Cinder we had -W in the tox docs 
> > > job, but
> > > since that is no longer being enforced in the gate, running "tox -e docs" 
> > > from
> > > a fresh clone of master was failing. We really do need some way to 
> > > enforce this
> > > so things like that do not happen.
> > 
> > This. While forcing work on teams to do busywork is undeniably A Very
> > Bad Thing (TM), I do think the longer we leave this, the worse it'll
> > get. The zuul-jobs [1] patch will probably introduce some pain for
> > projects but it seems like inevitable pain and we're in the right part
> > of the cycle in which to do something like this. I'd be willing to help
> > projects fix issues they encounter, which I expect will be minimal for
> > most projects.
> 
> I too think enforcing -W is the way to go, so count me in for the
> broken docs build help.
> 
> Thanks for pushing this forward!
> 
> Cheers,
> pk
> 

In support of this I have proposed [1]. To make it easier to transition (since
I'm pretty sure this will involve a lot of work by some projects) and since we
want to eventually have everything run under Python 3, I have just proposed
setting this flag as the default for the publish-openstack-sphinx-docs-python3
job template. Then projects can opt in as they are ready for both the
warnings-as-errors and Python 3 support.

I would love to hear if there are any concerns about doing things this way or
if anyone has any better suggestions.
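
For projects that want to enforce this locally in the meantime, it should
just be a matter of passing -W to sphinx-build in the docs tox environment,
e.g. something along these lines (deps and paths vary per project, so treat
this as a sketch):

  [testenv:docs]
  deps = -r{toxinidir}/doc/requirements.txt
  commands = sphinx-build -W -b html doc/source doc/build/html

which should roughly match what the proposed job change enforces in the gate.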

Thanks!
Sean

[1] https://review.openstack.org/#/c/564232/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-25 Thread Dmitry Tantsur

On 04/25/2018 04:26 PM, James Slagle wrote:

On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur  wrote:

Hi all,

I'd like to restart conversation on enabling node automated cleaning by
default for the undercloud. This process wipes partitioning tables
(optionally, all the data) from overcloud nodes each time they move to
"available" state (i.e. on initial enrolling and after each tear down).

We have had it disabled for a few reasons:
- it was not possible to skip time-consuming wiping of data from disks
- the way our workflows used to work required going between manageable and
available steps several times

However, having cleaning disabled has several issues:
- a configdrive left from a previous deployment may confuse cloud-init
- a bootable partition left from a previous deployment may take precedence
in some BIOS
- a UEFI boot partition left from a previous deployment is likely to
confuse UEFI firmware
- apparently ceph does not work correctly without cleaning (I'll defer to
the storage team to comment)

For these reasons we don't recommend having cleaning disabled, and I propose
to re-enable it.

It has the following drawbacks:
- The default workflow will require another node boot, thus becoming several
minutes longer (incl. the CI)
- It will no longer be possible to easily restore a deleted overcloud node.


I'm trending towards -1, for these exact reasons you list as
drawbacks. There has been no shortage of occurrences of users who have
ended up with accidentally deleted overclouds. These are usually
caused by user error or unintended/unpredictable Heat operations.
Until we have a way to guarantee that Heat will never delete a node,
or Heat is entirely out of the picture for Ironic provisioning, then
I'd prefer that we didn't enable automated cleaning by default.

I believe we had done something with policy.json at one time to
prevent node delete, but I don't recall if that protected from both
user initiated actions and Heat actions. And even that was not enabled
by default.

IMO, we need to keep "safe" defaults. Even if it means manually
documenting that you should clean to prevent the issues you point out
above. The alternative is to have no way to recover deleted nodes by
default.


Well, it's not clear what is "safe" here: protect people who explicitly delete 
their stacks or protect people who don't realize that a previous deployment may 
screw up their new one in a subtle way.










__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][horizon][neutron] plugins depending on services

2018-04-25 Thread Jeremy Stanley
On 2018-04-25 16:40:47 +0200 (+0200), Monty Taylor wrote:
[...]
> * Relax our rules about git repos in test-requirements.txt
> 
> Introduce a whitelist of git repo urls, starting with:
> 
>   * https://git.openstack.org/openstack/neutron
>   * https://git.openstack.org/openstack/horizon
> 
> For the service projects that have plugins that need to test against the
> service they're intending to be used with in a real installation. For those
> plugin projects, actually put the git urls into test-requirements.txt. This
> will make the gate work AND local development work for the scenarios where
> the thing that is actually needed is always testing against tip of a
> corresponding service.
[...]

If this is limited to test-requirements.txt and doesn't spill over
into requirements.txt then it _might_ be okay, though it still seems
like we'd need some sort of transition around stable branch creation
at release time to indicate which URLs correspond to those branches.
We've been doing basically that with release-time edits to per-repo
tox-install.sh scripts already, so maybe the same workflow can be
reused on test-requirements.txt as well?
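
For illustration (not something we allow today), such a whitelisted entry
in a plugin's test-requirements.txt might look roughly like:

  -e git+https://git.openstack.org/openstack/neutron#egg=neutron

and the release tooling would presumably need to rewrite it to point at the
matching stable branch when we branch, e.g.:

  -e git+https://git.openstack.org/openstack/neutron@stable/queens#egg=neutron
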
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] campaign question: How should we handle projects with overlapping feature sets?

2018-04-25 Thread Sean McGinnis
> 
> Our current policy regarding Open Development is that a project
> should cooperate with existing projects "rather than gratuitously
> competing or reinventing the wheel." [1] The flexibility provided
> by the use of the term "gratuitously" has allowed us to support
> multiple solutions in the deployment and telemetry problem spaces.
> At the same time it has left us with questions about how (and
> whether) the community would be able to replace the implementation
> of any given component with a new set of technologies by "starting
> from scratch".
> 
> Where do you draw the line at "gratuitous"?

I'm sure I can be swayed in a lot of cases, but I think if a new project can
show that there is a need for the overlap, or at least offer a reasonable
explanation for it, then I would not consider it gratuitous.

For example, if they were addressing a slightly different problem space with
some additional needs, and meeting those needs required a foundation or
component that overlaps with existing functionality, then there may be some
justification for the overlap.

Ideally, I would first like to see if the project can just use the services of
the other project and build on top of their APIs to add the additional
functionality. But I know that is not always as easy as it would first appear,
so I think if they can state why that would be impossible, or at least
prohibitively difficult, then I think an overlap would be OK.

> 
> What benefits and drawbacks do you see in supporting multiple tools
> with similar features?
> 

It definitely can cause confusion for downstream consumers. Either for those
looking at which services to select for new deployments, or for consumers of
those clouds with knowing what functionality is available to them and how they
access it.

Hopefully more clearly defined constellations would help with that.

A blocker for me would be if the newer project attempted to emulate the API of
the older project but was not able to provide 100% parity with the existing
functionality. If there is overlap, it needs to be very clearly separated into
a different (although maybe very similar) API and endpoint so we are not
putting this complexity and need for service awareness on the end consumers of
the services.

> How would our community be different, in positive and negative ways,
> if we were more strict about avoiding such overlap?
> 

I think a positive could be that it stimulates more activity in a given area so
that ultimately better and more feature-rich services are offered as part of
OpenStack clouds. And as long as it is not just gratuitous, it could enable new
use cases that are not currently possible or outside the scope of any existing
projects.

I really liked the point that Chris made about it possibly revitalizing
developers by having something new and exciting to work on. Or for those
existing projects, maybe getting them excited to work on slightly different use
cases or collaborating with this new project to look at ways they can work
together.

As far as negative, I think it is very similar to what I pointed out above for
deployers and users. It has the potential to cause some confusion in the
community as to where certain functionality should live and where they should
go to if they need to interact or use that functionality in their projects.

One of the negatives brought up in the Glare discussion would be other projects
having to add conditional code to determine whether they are interacting with
Glance or with Glare for images. I think that falls under the earlier point that
there needs to be a clear separation and focus on specific use cases, so we do
not end up with two options doing very similar things but with APIs that are
incompatible, or close but subtly different. I would hope that we do not allow
something like that to happen - at least not without a very good reason for
doing so.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-infra] How to take over a project?

2018-04-25 Thread Ihar Hrachyshka
ONOS is not part of Neutron and hence Neutron Release team should not
be involved in its matters. If gerrit ACLs say otherwise, you should
fix the ACLs.

Ihar

On Tue, Apr 24, 2018 at 1:22 AM, Sangho Shin  wrote:
> Dear Neutron-Release team members,
>
> Can any of you handle the issue below?
>
> Thank you so much for your help in advance.
>
> Sangho
>
>
>> On 20 Apr 2018, at 10:01 AM, Sangho Shin  wrote:
>>
>> Dear Neutron-Release team,
>>
>> I wonder if any of you can add me to the networking-onos-release group.
>> It seems that Vikram is busy. :-)
>>
>> Thank you,
>>
>> Sangho
>>
>>
>>
>>> On 19 Apr 2018, at 9:18 AM, Sangho Shin  wrote:
>>>
>>> Ian,
>>>
>>> Thank you so much for your help.
>>> I have requested Vikram to add me to the release team.
>>> He should be able to help me. :-)
>>>
>>> Sangho
>>>
>>>
 On 19 Apr 2018, at 8:36 AM, Ian Wienand  wrote:

 On 04/19/2018 01:19 AM, Ian Y. Choi wrote:
> By the way, since the networking-onos-release group has no neutron
> release team group, I think infra team can help to include neutron
> release team and neutron release team can help to create branches
> for the repo if there is no response from current
> networking-onos-release group member.

 This seems sane and I've added neutron-release to
 networking-onos-release.

 I'm hesitant to give advice on branching within a project like neutron
 as I'm sure there's stuff I'm not aware of; but members of the
 neutron-release team should be able to get you going.

 Thanks,

 -i
>>>
>>
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-25 Thread John Fulton
On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur  wrote:

> Hi all,
>
> I'd like to restart conversation on enabling node automated cleaning by
> default for the undercloud. This process wipes partitioning tables
> (optionally, all the data) from overcloud nodes each time they move to
> "available" state (i.e. on initial enrolling and after each tear down).
>
> We have had it disabled for a few reasons:
> - it was not possible to skip time-consuming wiping of data from disks
> - the way our workflows used to work required going between manageable and
> available steps several times
>
> However, having cleaning disabled has several issues:
> - a configdrive left from a previous deployment may confuse cloud-init
> - a bootable partition left from a previous deployment may take precedence
> in some BIOS
> - a UEFI boot partition left from a previous deployment is likely to
> confuse UEFI firmware
> - apparently ceph does not work correctly without cleaning (I'll defer to
> the storage team to comment)
>

Yes, ceph-disk [1] won't prepare a disk that isn't clean. Deployers new to
Ceph may not realize this and deployment tools which trigger ceph-disk will
fail to prepare the requested OSDs. It may take the deployer time to
realize that is the cause of failure and then they usually enable Ironic's
automated cleaning.
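
(For anyone hitting that today: enabling it by hand is typically just a
matter of flipping the conductor option in ironic.conf on the undercloud
and restarting ironic, roughly:

  [conductor]
  automated_clean = true

and if full disk wipes are too slow, metadata-only cleaning can usually be
selected with something like:

  [deploy]
  erase_devices_priority = 0
  erase_devices_metadata_priority = 10

Option names from memory - double-check them against the ironic docs for
your release.)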


> For these reasons we don't recommend having cleaning disabled, and I
> propose to re-enable it.
>
> It has the following drawbacks:
> - The default workflow will require another node boot, thus becoming
> several minutes longer (incl. the CI)
> - It will no longer be possible to easily restore a deleted overcloud node.
>
> What do you think? If I don't hear principal objections, I'll prepare a
> patch in the coming days.
>

+1

  John

[1] http://docs.ceph.com/docs/hammer/man/8/ceph-disk/



>
> Dmitry
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements][horizon][neutron] plugins depending on services

2018-04-25 Thread Monty Taylor

Hi everybody,

We've been working on navigating through an interesting situation 
over the past few months, but there isn't a top-level overview of what's 
going on with it. That's my bad - I've been telling AJaeger I was going 
to send an email out for a while.


projects with test requirements on git repo urls of other projects
--

There are a bunch of projects that need, for testing purposes, to depend 
on other projects. The majority are either neutron or horizon plugins, 
but conceptually there is nothing neutron or horizon specific about the 
issue. The problem they're trying to deal with is that they are a plugin 
to a service and they need to be able to import code from the service 
they are a plugin to in their unit tests.


To make things even more complicated, some of the plugins actually 
depend on each other for real, not just as a "we need this for testing".


There is trouble in paradise though - which is that we don't allow git 
urls in requirements files. To work around this, the projects in 
question added additional pip install lines to a tox_install.sh script - 
essentially bypassing the global-requirements process and system 
completely.


This went unnoticed in a general sense until we started working through 
removing the use of zuul-cloner which is not needed any longer in Zuul v3.


unwinding things


There are a few different options, but it's important to keep in mind 
that we ultimately want all of the following:


* The code works
* Tests can run properly in CI
* "Depends-On" works in CI so that you can test changes cross-repo
* Tests can run properly locally for developers
* Deployment requirements are accurately communicated to deployers

The approach so far
---

The approach so far has been releasing service projects to PyPI and 
reworking the projects to depend on those releases.


This approach takes advantage of the tox-siblings feature in the gate to 
ensure we're cross-testing master of projects with each other.


tox-siblings
---

There is a feature in the Zuul tox jobs we refer to as "tox-siblings" 
(this is because historically - wow we have historical context for zuul 
v3 now - it was implemented as a separate role). What it does is ensure 
that if you are running a tox job and you add additional projects to 
required-projects in the job config, that the git versions of those 
projects will be installed into the tox virtualenv - but only for 
projects that would have been installed by tox otherwise. This way 
required-projects is both safe to use and has the effect you'd expect.


tox-siblings is intended to enable ADDITIONALLY cross-testing projects 
that otherwise have a normal dependency relationship in the gate. People 
have been adding jobs like cross-something-something or something-tips 
in an ad-hoc manner for a while - and in many cases the git parts of 
that were actually somewhat not correct - so this is an attempt to 
provide the thing people want in those scenarios in a consistent manner. 
But it always should be helper logic for more complex gate jobs, not as 
a de-facto part of a project's basic install.
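
(As a rough illustration, a plugin that wants an additional
master-against-master job on top of its normal release-based one would add
something along these lines to its Zuul config:

  - job:
      name: networking-foo-tox-py27-neutron-master
      parent: openstack-tox-py27
      required-projects:
        - openstack/neutron

where networking-foo is a stand-in for the plugin repo; tox-siblings then
swaps the neutron release pulled in by the requirements for the checked-out
git master.)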


Current Approach is wrong


Unfortunately, as part of trying to unwind the plugins situation, we've 
walked ourselves into a situation where the gate is the only thing that 
has the correct installation information for some projects, and that's 
not good.


From a networking plugin approach the "depend on release and use 
tox-siblings" assumes that 'depend on release of neutron' is or can be 
the common case with the ability to add a second tox job to check master 
against master.


If that's not a real thing, then depending on releases + tox_siblings in 
the gate is solving the wrong problem.


Specific Suggestions


As there are a few different scenarios, I want to suggest we do a few 
different things.


* Prefer interface libraries on PyPI that projects depend on

Like python-openstackclient and osc-lib, this is the *best* approach
for projects with plugins. Such interface libraries need to be able to 
do intermediate releases - and those intermediate releases need to not 
break the released version of the projects. This is the hardest and 
longest thing to do as well, so it's most likely to be a multi-cycle effort.


* Treat inter-plugin depends as normal library depends

If networking-bgpvpn depends on networking-bagpipe and networking-odl, 
then networking-bagpipe and networking-odl need to be released to PyPI 
just like any other library in OpenStack. These are real runtime 
dependencies.


Yes, this is more coordination work, but it's work we do everywhere 
already and it's important.


If we do that for inter-plugin depends, then the normal tox jobs should 
test against the most recent release of the other plugin, and people can 
make a -tips style job like the 

[openstack-dev] [all] Gerrit server replacement scheduled for May 2nd 2018

2018-04-25 Thread Paul Belanger
On Thu, Apr 19, 2018 at 11:49:12AM -0400, Paul Belanger wrote:
Hello from Infra.

This is our weekly reminder of the upcoming gerrit replacement.  We'll continue
to send these announcements out up until the day of the migration. We are now
one week away from the replacement date.

If you have any questions, please contact us in #openstack-infra.

---

It's that time again... on Wednesday, May 02, 2018 20:00 UTC, the OpenStack
Project Infrastructure team is upgrading the server which runs
review.openstack.org to Ubuntu Xenial, and that means a new virtual machine
instance with new IP addresses assigned by our service provider. The new IP
addresses will be as follows:

IPv4 -> 104.130.246.32
IPv6 -> 2001:4800:7819:103:be76:4eff:fe04:9229

They will replace these current production IP addresses:

IPv4 -> 104.130.246.91
IPv6 -> 2001:4800:7819:103:be76:4eff:fe05:8525

We understand that some users may be running from egress-filtered
networks with port 29418/tcp explicitly allowed to the current
review.openstack.org IP addresses, and so are providing this
information as far in advance as we can to allow them time to update
their firewalls accordingly.

Note that some users dealing with egress filtering may find it
easier to switch their local configuration to use Gerrit's REST API
via HTTPS instead, and the current release of git-review has support
for that workflow as well.
http://lists.openstack.org/pipermail/openstack-dev/2014-September/045385.html
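
(If you go that route, the switch is usually just a matter of pointing the
'gerrit' remote that git-review uses at the HTTPS endpoint, for example:

  git remote set-url gerrit https://<username>@review.openstack.org/<project>

using the HTTP password generated in your Gerrit settings; see the thread
linked above for the details.)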

We will follow up with final confirmation in subsequent announcements.

Thanks,
Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-25 Thread James Slagle
On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur  wrote:
> Hi all,
>
> I'd like to restart conversation on enabling node automated cleaning by
> default for the undercloud. This process wipes partitioning tables
> (optionally, all the data) from overcloud nodes each time they move to
> "available" state (i.e. on initial enrolling and after each tear down).
>
> We have had it disabled for a few reasons:
> - it was not possible to skip time-consuming wiping of data from disks
> - the way our workflows used to work required going between manageable and
> available steps several times
>
> However, having cleaning disabled has several issues:
> - a configdrive left from a previous deployment may confuse cloud-init
> - a bootable partition left from a previous deployment may take precedence
> in some BIOS
> - a UEFI boot partition left from a previous deployment is likely to
> confuse UEFI firmware
> - apparently ceph does not work correctly without cleaning (I'll defer to
> the storage team to comment)
>
> For these reasons we don't recommend having cleaning disabled, and I propose
> to re-enable it.
>
> It has the following drawbacks:
> - The default workflow will require another node boot, thus becoming several
> minutes longer (incl. the CI)
> - It will no longer be possible to easily restore a deleted overcloud node.

I'm trending towards -1, for these exact reasons you list as
drawbacks. There has been no shortage of occurrences of users who have
ended up with accidentally deleted overclouds. These are usually
caused by user error or unintended/unpredictable Heat operations.
Until we have a way to guarantee that Heat will never delete a node,
or Heat is entirely out of the picture for Ironic provisioning, then
I'd prefer that we didn't enable automated cleaning by default.

I believe we had done something with policy.json at one time to
prevent node delete, but I don't recall if that protected from both
user initiated actions and Heat actions. And even that was not enabled
by default.

IMO, we need to keep "safe" defaults. Even if it means manually
documenting that you should clean to prevent the issues you point out
above. The alternative is to have no way to recover deleted nodes by
default.




-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [designate] Meeting Times - change to office hours?

2018-04-25 Thread Graham Hayes
On 24/04/18 16:55, Ben Nemec wrote:
> I prefer 14:00 to 22:00 UTC, although depending on the time of year I
> may have some flexibility on that.
> 
> On 04/24/2018 01:37 AM, Erik Olof Gunnar Andersson wrote:
>> I can do anytime ranging from 16:00 UTC to 03:00 UTC, Mon-Fri, maybe
>> up to 07:00 UTC assuming that it's once bi-weekly.
>>
>> 
>> *From:* Jens Harbott 
>> *Sent:* Monday, April 23, 2018 10:49:25 PM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [designate] Meeting Times - change to
>> office hours?
>> 2018-04-23 13:11 GMT+02:00 Graham Hayes :
>>> Hi All,
>>>
>>> We moved our meeting time to 14:00UTC on Wednesdays, but attendance
>>> has been low, and it is also the middle of the night for one of our
>>> cores.
>>>
>>> I would like to suggest we have an office hours style meeting, with
>>> one in the UTC evening and one in the UTC morning.
>>>
>>> If this seems reasonable - when and what frequency should we do
>>> them? What times suit the current set of contributors?
>>
>> My preferred range would be 06:00UTC-14:00UTC, Mon-Thu, though
>> extending a couple of hours in either direction might be possible for
>> me, too.
>>
>> If we do alternating times, with the current amount of work happening
>> we maybe could make each of them monthly, so we end up with a roughly
>> bi-weekly schedule.
>>
>> I also have a slight preference for continuing to use one of the
>> meeting channels as opposed to meeting in the designate channel, if
>> that is what "office hours style meeting" is meant to imply.
>>

I think a bi-weekly meeting, alternating between UTC morning and evening
is a good idea.

I do like the meeting channels, so I think we should keep them.

Thanks,

Graham



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier?

2018-04-25 Thread Jeremy Stanley
On 2018-04-25 14:12:00 +0800 (+0800), Rico Lin wrote:
[...]
> I believe combining the API services into one service will make them
> much easier to scale. Since we already provide multiple services and
> bind them with Apache (also noting Zane's comment), we could start
> this goal by providing a unified API service architecture (or start
> with a new oslo API service). If we first reduce the differences
> between the API service implementations in each OpenStack service,
> it may become easier to manage or upgrade them (since we unify the
> package requirements), and it may even be possible to accelerate the
> APIs.
[...]

How do you see this as being either similar to or different from the
https://git.openstack.org/cgit/openstack/oaktree/tree/README.rst
effort which is currently underway?
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Help needed in debugging issue - ClientException: Unable to update the attachment. (HTTP 500)

2018-04-25 Thread Matt Riedemann

On 4/25/2018 3:32 AM, Sreeram Vancheeswaran wrote:

Hi team!

We are currently facing  an issue in our out-of-tree driver nova-dpm [1] 
with nova and cinder on master, where instance launch in devstack is 
failing due to communication/time-out issues in nova-cinder.   We are 
unable to get to the root cause of the issue and we need your help on 
getting some hints/directions to debug this issue further.


--> From nova-compute service: BuildAbortException: Build of instance 
aborted: Unable to update the attachment. (HTTP 500) from the 
nova-compute server (detailed logs here [2]).


--> From cinder-volume service: ERROR oslo_messaging.rpc.server 
VolumeAttachmentNotFound: Volume attachment could not be found with 
filter: attachment_id = 266ef7e1-4735-40f1-b704-509472f565cb.
Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR 
oslo_messaging.rpc.server  (detailed logs here [3])


Debugging steps done so far:-

  * Compared the package versions between the current devstack under
test with the **last succeeding job in our CI system** (to be exact,
it was for the patches https://review.openstack.org/#/c/458514/ and
https://review.openstack.org/#/c/458820/); However the package
versions for packages such as sqlalchemy, os-brick, oslo* are
exactly the same in both the systems.
  * We used git bisect to revert nova and cinder projects to versions
equal to or before the date of our last succeeding CI run; but still
we were able to reproduce the same error.
  * Our guess is that the db "Save" operation during the update of
volume attachment is failing.  But we are unable to trace/debug to
that point in the rpc call;  Any suggestions on how to debug this
scenario would be really helpful.
  * We are running devstack master on Ubuntu 16.04.04


References

[1] https://github.com/openstack/nova-dpm


[2] Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.volume.cinder 
[None req-751d4586-cd97-4a8f-9423-f2bc4b1f1269 service nova] Update 
attachment failed for attachment 266ef7e1-4735-40f1-b704-509472f565cb. 
Error: Unable to update the attachment. (HTTP 500) (Request-ID: 
req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce) Code: 500: ClientException: 
Unable to update the attachment. (HTTP 500) (Request-ID: 
req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[None req-751d4586-cd97-4a8f-9423-f2bc4b1f1269 service nova] [instance: 
d761da60-7bb1-415e-b5b9-eaaed124d6d2] Instance failed block device 
setup: ClientException: Unable to update the attachment. (HTTP 500) 
(Request-ID: req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] Traceback (most recent 
call last):
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1577, in _prep_block_device
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] 
wait_func=self._await_block_device_map_created)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 828, in 
attach_block_devices
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] _log_and_attach(device)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 825, in _log_and_attach
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] 
bdm.attach(*attach_args, **attach_kwargs)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 46, in wrapped
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] ret_val = 
method(obj, context, *args, **kwargs)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 618, in attach
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] virt_driver, 
do_driver_attach)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", 
line 274, in inner
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] return f(*args, 
**kwargs)
Apr 25 06:41:57 

[openstack-dev] [horizon][nova][cinder] Horizon support for multiattach volumes

2018-04-25 Thread Matt Riedemann
I wanted to advertise the need for some help in adding multiattach 
volume support to Horizon. There is a blueprint tracking the changes 
[1]. I started the ball rolling with [2] but there is more work to do, 
listed in the work items section of the blueprint.


[2] was, I think, my first real code contribution to Horizon and it wasn't 
terrible (thanks to Akihiro for his patience), so I'm sure others could 
easily jump in here and slice this up if we have people looking for 
something to hack on.


[1] https://blueprints.launchpad.net/horizon/+spec/multi-attach-volume
[2] https://review.openstack.org/#/c/547856/
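
As a concrete illustration (a sketch of mine, not taken from the blueprint) of 
the underlying behavior the UI work has to surface: multiattach is driven by a 
volume-type extra spec on the Cinder side, and the volume's multiattach flag is 
what the panels would need to display and act on. The auth URL and credentials 
below are placeholders.

from cinderclient import client as cinder_client
from keystoneauth1 import loading, session

# Build a keystoneauth session - all values here are placeholders.
loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='http://devstack.example.com/identity',
    username='admin', password='secret', project_name='admin',
    user_domain_id='default', project_domain_id='default')
cinder = cinder_client.Client('3', session=session.Session(auth=auth))

# 1. A multiattach-capable volume type (admin-only step).
vtype = cinder.volume_types.create(name='multiattach')
vtype.set_keys({'multiattach': '<is> True'})

# 2. Volumes created from that type carry multiattach=True, which the
#    Horizon volume tables/forms would use to decide whether to offer
#    attaching the volume to more than one instance.
vol = cinder.volumes.create(size=1, name='shared-data', volume_type='multiattach')
print(vol.multiattach)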

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-25 Thread Dmitry Tantsur

Hi all,

I'd like to restart the conversation on enabling node automated cleaning by default 
for the undercloud. This process wipes partition tables (optionally, all the 
data) from overcloud nodes each time they move to the "available" state (i.e. on 
initial enrollment and after each tear down).


We have had it disabled for a few reasons:
- it was not possible to skip the time-consuming wiping of data from disks
- the way our workflows used to work required going between the manageable and 
available states several times


However, having cleaning disabled has several issues:
- a configdrive left over from a previous deployment may confuse cloud-init
- a bootable partition left over from a previous deployment may take precedence in 
some BIOSes
- a UEFI boot partition left over from a previous deployment is likely to confuse 
the UEFI firmware
- apparently ceph does not work correctly without cleaning (I'll defer to the 
storage team to comment)


For these reasons we don't recommend having cleaning disabled, and I propose to 
re-enable it.
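
For concreteness, a sketch of what re-enabling it would amount to in 
configuration terms (option names quoted from memory, so please double-check 
them): the undercloud.conf flag maps to ironic's automated_clean setting, and 
the two priorities keep cleaning down to a quick metadata/partition-table wipe 
instead of a full disk shred.

# undercloud.conf
[DEFAULT]
clean_nodes = true

# ironic.conf on the undercloud (what the flag above effectively enables)
[conductor]
automated_clean = true

[deploy]
# wipe only partition tables / metadata, skip the time-consuming full erase
erase_devices_priority = 0
erase_devices_metadata_priority = 10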


It has the following drawbacks:
- The default workflow will require another node boot, thus becoming several 
minutes longer (incl. the CI)

- It will no longer be possible to easily restore a deleted overcloud node.

What do you think? If I don't hear principal objections, I'll prepare a patch in 
the coming days.


Dmitry

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Monthly bug day?

2018-04-25 Thread Ruby Loo
Hi Mike,

If we hold it, I'll (try to) be there :)

Thanks for spearheading this!

--ruby

On Mon, Apr 23, 2018 at 8:04 AM, Michael Turek 
wrote:

> Hey everyone!
>
> We had a bug day about two weeks ago and it went pretty well! At last
> week's IRC meeting the idea of having one every month was thrown around.
>
> What does everyone think about having Bug Day the first Thursday of every
> month?
>
> Thanks,
> Mike Turek
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [api] [lbaas] Neutron LBaaS V2 docs incompatibility

2018-04-25 Thread Artem Goncharov
Hi all,

after working with OpenStackSDK against my cloud I have found one discrepancy in
Neutron LBaaS (yes, I know it is deprecated, but it is still used). The
fix would be small and fast; unfortunately, I have run into problems with the
API description:
- https://wiki.openstack.org/wiki/Neutron/LBaaS/API_2.0#Pools describes the
LB pool as having a *healthmonitor_id* attribute (which also matches the
reality of my cloud)
- https://developer.openstack.org/api-ref/network/v2/index.html#pools (which
is referred to from the previous link in the deprecation note) describes the
LB pool as having *healthmonitors* (and *healthmonitors_status*) as a list
of IDs. In this regard it is basically the same as the
https://wiki.openstack.org/wiki/Neutron/LBaaS/API_1.0#Pool description
- unfortunately, even
https://github.com/openstack/neutron-lib/blob/master/api-ref/source/v2/lbaas-v2.inc
describes *Pool.healthmonitors* (however, it also contains the
https://github.com/openstack/neutron-lib/blob/master/api-ref/source/v2/samples/lbaas/pools-list-response2.json
sample with *Pool.healthmonitor_id*)
- OpenStackSDK contains *network.pool.health_monitors* (with underscore)

I want to bring all of this into order and enable managing load balancers
through OSC for my OpenStack cloud, but I can't figure out what the correct
behavior is here.
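
In case it helps, a throwaway check of what a given deployment actually returns
(the endpoint and token below are placeholders; only the /v2.0/lbaas/pools path
is taken from the API reference):

import requests

NEUTRON_ENDPOINT = 'https://neutron.example.com:9696'  # placeholder
TOKEN = 'gAAAA...'                                     # placeholder keystone token

resp = requests.get(NEUTRON_ENDPOINT + '/v2.0/lbaas/pools',
                    headers={'X-Auth-Token': TOKEN})
resp.raise_for_status()

for pool in resp.json().get('pools', []):
    present = [key for key in ('healthmonitor_id', 'healthmonitors',
                               'healthmonitors_status') if key in pool]
    # Shows which of the disputed attributes this cloud really exposes.
    print(pool['id'], present)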

Can anybody, please, help in figuring out the truth here?

Thanks,
Artem
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [octavia] Sometimes amphoras are not re-created if they are not reached for more than heartbeat_timeout

2018-04-25 Thread mihaela.balas
Hello,

I am testing Octavia Queens and I see that the failover behavior is very 
different from the one in Ocata (the version we are currently running 
in production).
One example of such behavior is:

I create 4 load balancers and, after the creation succeeds, I shut off all 
8 amphoras. Sometimes, even though the health-manager agent cannot reach the 
amphoras, they are not deleted and re-created; the logs look like the ones 
shown below even when the heartbeat timeout has long passed. Sometimes the 
amphoras are deleted and re-created. Sometimes they are only partially 
re-created - part of them remain shut off.
heartbeat_timeout is set to 60 seconds.
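
For reference, the settings that drive this detection/failover path live in the
[health_manager] section of octavia.conf; the names below are the ones I believe
apply in Queens, please double-check them against your deployment.

[health_manager]
heartbeat_timeout = 60       # seconds without a heartbeat before an amphora counts as failed
health_check_interval = 3    # how often the health manager looks for stale amphora records
heartbeat_interval = 10      # how often each amphora reports in
failover_threads = 10        # how many failovers can run in parallel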



[octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:26.244 11 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver 
[req-339b54a7-ab0c-422a-832f-a444cd710497 - a5f15235c0714365b98a50a11ec956e7 - 
- -] Could not connect to instance. Retrying.: ConnectionError: 
HTTPSConnectionPool(host='192.168.0.15', port=9443): Max retries exceeded with 
url: 
/0.5/listeners/285ad342-5582-423e-b654-1f0b50d91fb2/certificates/octaviasrv2.orange.com.pem
 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No 
route to host',))
[octavia-health-manager-3662231220-3lssd] 2018-04-25 10:57:26.464 13 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver 
[req-a63b795a-4b4f-4b90-a201-a4c9f49ac68b - a5f15235c0714365b98a50a11ec956e7 - 
- -] Could not connect to instance. Retrying.: ConnectionError: 
HTTPSConnectionPool(host='192.168.0.14', port=9443): Max retries exceeded with 
url: 
/0.5/listeners/a45bdef3-e7da-4a18-9f1f-53d5651efe0f/1615c1ec-249e-4fa8-9d73-2397e281712c/haproxy
 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No 
route to host',))
[octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:27.772 11 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver 
[req-10febb10-85ea-4082-9df7-daa48894b004 - a5f15235c0714365b98a50a11ec956e7 - 
- -] Could not connect to instance. Retrying.: ConnectionError: 
HTTPSConnectionPool(host='192.168.0.19', port=9443): Max retries exceeded with 
url: 
/0.5/listeners/96ce5862-d944-46cb-8809-e1e328268a66/fc5b7940-3527-4e9b-b93f-1da3957a5b71/haproxy
 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No 
route to host',))
[octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:34.252 11 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver 
[req-339b54a7-ab0c-422a-832f-a444cd710497 - a5f15235c0714365b98a50a11ec956e7 - 
- -] Could not connect to instance. Retrying.: ConnectionError: 
HTTPSConnectionPool(host='192.168.0.15', port=9443): Max retries exceeded with 
url: 
/0.5/listeners/285ad342-5582-423e-b654-1f0b50d91fb2/certificates/octaviasrv2.orange.com.pem
 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No 
route to host',))
[octavia-health-manager-3662231220-3lssd] 2018-04-25 10:57:34.476 13 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver 
[req-a63b795a-4b4f-4b90-a201-a4c9f49ac68b - a5f15235c0714365b98a50a11ec956e7 - 
- -] Could not connect to instance. Retrying.: ConnectionError: 
HTTPSConnectionPool(host='192.168.0.14', port=9443): Max retries exceeded with 
url: 
/0.5/listeners/a45bdef3-e7da-4a18-9f1f-53d5651efe0f/1615c1ec-249e-4fa8-9d73-2397e281712c/haproxy
 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No 
route to host',))
[octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:35.780 11 WARNING 
octavia.amphorae.drivers.haproxy.rest_api_driver 
[req-10febb10-85ea-4082-9df7-daa48894b004 - a5f15235c0714365b98a50a11ec956e7 - 
- -] Could not connect to instance. Retrying.: ConnectionError: 
HTTPSConnectionPool(host='192.168.0.19', port=9443): Max retries exceeded with 
url: 
/0.5/listeners/96ce5862-d944-46cb-8809-e1e328268a66/fc5b7940-3527-4e9b-b93f-1da3957a5b71/haproxy
 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No 
route to host',))

Thank you,
Mihaela Balas

_

Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages 

[openstack-dev] [Blazar] Next IRC meeting is canceled

2018-04-25 Thread Masahito MUROI

Hi Blazar folks,

As we discussed in the last meeting, the next weekly meeting is canceled 
because most of members are out of town next week.


The next regular meeting is on 8th May.

best regards,
Masahito


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Proposing Mohammed Naser as core reviewer

2018-04-25 Thread Markos Chandras
On 24/04/18 16:05, Jean-Philippe Evrard wrote:
> Hi everyone,
> 
> I’d like to propose Mohammed Naser [1] as a core reviewer for 
> OpenStack-Ansible.
> 

+2

-- 
markos

SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Help needed in debugging issue - ClientException: Unable to update the attachment. (HTTP 500)

2018-04-25 Thread Sreeram Vancheeswaran

Hi team!

We are currently facing an issue in our out-of-tree driver nova-dpm [1] 
with nova and cinder on master, where instance launch in devstack is 
failing due to communication/timeout issues between nova and cinder.  We 
are unable to get to the root cause of the issue and need your help with 
hints/directions to debug it further.


--> From nova-compute service: BuildAbortException: Build of instance 
aborted: Unable to update the attachment. (HTTP 500) from the 
nova-compute server (detailed logs here [2]).


--> From cinder-volume service: ERROR oslo_messaging.rpc.server 
VolumeAttachmentNotFound: Volume attachment could not be found with 
filter: attachment_id = 266ef7e1-4735-40f1-b704-509472f565cb.
Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR 
oslo_messaging.rpc.server  (detailed logs here [3])


Debugging steps done so far:-

 * Compared the package versions between the current devstack under
   test with the **last succeeding job in our CI system** (to be exact,
   it was for the patches https://review.openstack.org/#/c/458514/ and
   https://review.openstack.org/#/c/458820/); However the package
   versions for packages such as sqlalchemy, os-brick, oslo* are
   exactly the same in both the systems.
 * We used git bisect to revert nova and cinder projects to versions
   equal to or before the date of our last succeeding CI run; but still
   we were able to reproduce the same error.
 * Our guess is that the db "Save" operation during the update of
   volume attachment is failing.  But we are unable to trace/debug to
   that point in the rpc call; any suggestions on how to debug this
   scenario would be really helpful (a rough sketch of checking the
   attachment row directly is included below).
 * We are running devstack master on Ubuntu 16.04.04
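
A minimal sketch of that direct check (a throwaway script, assuming a default
devstack MySQL setup and that the cinder table is named volume_attachment;
adjust the connection URL and credentials for your environment):

import sqlalchemy

ATTACHMENT_ID = '266ef7e1-4735-40f1-b704-509472f565cb'
# Placeholder credentials/host - point this at the devstack cinder database.
engine = sqlalchemy.create_engine('mysql+pymysql://root:secret@127.0.0.1/cinder')

with engine.connect() as conn:
    rows = conn.execute(
        sqlalchemy.text(
            'SELECT id, volume_id, instance_uuid, attach_status, deleted, deleted_at '
            'FROM volume_attachment WHERE id = :att_id'),
        att_id=ATTACHMENT_ID).fetchall()

if not rows:
    # No row at all would mean the attachment-create leg never persisted anything.
    print('No attachment row found for %s' % ATTACHMENT_ID)
for row in rows:
    # A row with deleted=1 would mean something removed the attachment before
    # nova tried to update it, which would match the VolumeAttachmentNotFound error.
    print(dict(row))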


References

[1] https://github.com/openstack/nova-dpm


[2] Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.volume.cinder 
[None req-751d4586-cd97-4a8f-9423-f2bc4b1f1269 service nova] Update 
attachment failed for attachment 266ef7e1-4735-40f1-b704-509472f565cb. 
Error: Unable to update the attachment. (HTTP 500) (Request-ID: 
req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce) Code: 500: ClientException: 
Unable to update the attachment. (HTTP 500) (Request-ID: 
req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[None req-751d4586-cd97-4a8f-9423-f2bc4b1f1269 service nova] [instance: 
d761da60-7bb1-415e-b5b9-eaaed124d6d2] Instance failed block device 
setup: ClientException: Unable to update the attachment. (HTTP 500) 
(Request-ID: req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] Traceback (most recent 
call last):
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File 
"/opt/stack/nova/nova/compute/manager.py", line 1577, in _prep_block_device
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] 
wait_func=self._await_block_device_map_created)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 828, in 
attach_block_devices
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] _log_and_attach(device)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 825, in _log_and_attach
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] 
bdm.attach(*attach_args, **attach_kwargs)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 46, in wrapped
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] ret_val = 
method(obj, context, *args, **kwargs)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File 
"/opt/stack/nova/nova/virt/block_device.py", line 618, in attach
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] virt_driver, 
do_driver_attach)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File 
"/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", 
line 274, in inner
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] return f(*args, 
**kwargs)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager 
[instance: 

Re: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier?

2018-04-25 Thread Rico Lin
2018-04-25 0:04 GMT+08:00 Fox, Kevin M :
>
> Yeah, I agree k8s seems to have hit on a good model where interests are
separately grouped from the code bases. This has allowed the architecture
not to be too heavily influenced by the logical groups' interests.
>
> I'll go ahead and propose it again since it's been a little while. In
order to start breaking down the barriers between Projects and start
working more towards integration, should the TC come up with an
architecture group? Get folks from all the major projects involved in it
and sharing common infrastructure.
>
> One possible pie in the sky goal of that group could be the following:
>
> k8s has many controllers. But they compile almost all of them into one
service, the kube-apiserver. Architecturally they could break them out at
any point, but so far they have been able to scale just fine without doing
so. Having them combined has allowed much easier upgrade paths for users
though. This has spurred adoption and contribution. Adding a new controller
isn't a huge lift to an operator; they just upgrade to the newest version
which has the new controller built in.
>

I believe combining the API services into one service would let us scale
much more easily. Since we already provide multiple services and bind them
with Apache (also noting Zane's comment), we could start toward this goal
by providing a unified API service architecture (or by starting with a new
oslo API service). If we first reduce the differences between the API
service implementations across OpenStack services, it may become easier to
manage and upgrade them (since the package requirements would be unified),
and it might even become possible to accelerate the APIs.

> Could the major components, nova-api, neutron-server, glance-apiserver,
etc. be built in a way to have one process for all of them, and combine the
upgrade steps such that there is also one db-sync for the entire
constellation?
>

I like Zane's idea of combining services on the compute node.

> The idea would be to take the Constellations idea one step further: the
Projects would deliver python libraries (and a binary for stand-alone
operation), and Constellations would actually provide a code deliverable, not
just a reference architecture, combining the libraries together into a single
usable entity. Operators most likely would consume the Constellations
version rather than the individual Project versions.
>
> What do you think?

It won't hurt that we provide a unified OpenStack command (it's actually
great stuff), and it should not break anything at the API level. Maybe
there would just be one more API service, an "OpenStack API service", and
it would be up to each team to decide whether to provide a plugin for it.
I think we will eventually reach the goal this way.

>
> Thanks,
> Kevin

--
May The Force of OpenStack Be With You,

*Rico Lin* irc: ricolin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

