Re: [openstack-dev] [nova] Do not recheck changes until 422709 is merged

2017-01-19 Thread Ghanshyam Mann
For the other failure, the "expected_attr" error, we need this patch to merge
on the nova side: https://review.openstack.org/#/c/422323/

-gmann


> -Original Message-
> From: Matt Riedemann [mailto:mriede...@gmail.com]
> Sent: 20 January 2017 12:17
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [nova] Do not recheck changes until 422709 is
> merged
> 
> On 1/19/2017 5:09 PM, Matt Riedemann wrote:
> > On 1/19/2017 10:56 AM, Matt Riedemann wrote:
> >> The py35 unit test job is broken for Nova until this patch is merged:
> >>
> >> https://review.openstack.org/#/c/422709/
> >>
> >> So please hold off on the rechecks until that happens.
> >>
> >
> > We're good to go again for rechecks.
> >
> 
> Just a heads up that if you still see py35 unit test failures with a TypeError
> like this [1] then you need to rebase your patch.
> 
> [1]
> http://logs.openstack.org/37/410737/5/check/gate-nova-python35-
> db/4fec66c/console.html#_2017-01-20_01_56_21_021702
> 
> --
> 
> Thanks,
> 
> Matt Riedemann
> 



[openstack-dev] [python-novaclient][python-glanceclient][python-cinderclient][python-neutronclient] Remove x-openstack-request-id logging code as it is logged twice

2017-01-19 Thread Kekane, Abhishek
Hi Devs,

In the latest keystoneauth1 release, 2.18.0, x-openstack-request-id is logged
for every HTTP response, and this keystoneauth1 version will be used for Ocata.
The same request id is also logged in the 'request' method of the SessionClient
class in python-novaclient, python-glanceclient, python-cinderclient and
python-neutronclient. Once requirements.txt is synced with global-requirements
and pulls in keystoneauth1 2.18.0 or above, x-openstack-request-id will be
logged twice for these clients.

I have submitted patches for python-novaclient [1] and python-glanceclient [2]
and created patches for python-cinderclient and python-neutronclient, but these
will not be reviewed until requirements.txt is synced with global-requirements
and uses keystoneauth1 2.18.0.

As the final releases for the client libraries are scheduled for next week
(Jan 23 - Jan 27), we want to address these issues in the above mentioned clients.

Please let us know your opinion about the same.

[1] https://review.openstack.org/422602
[2] https://review.openstack.org/422591
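For context, below is a rough sketch of the kind of duplicated logging these
patches remove. It is a simplified, hypothetical example; the real
SessionClient/adapter code differs per client.

# keystoneauth1 >= 2.18.0 already logs x-openstack-request-id for every
# HTTP response, so explicit logging like the block below in a client's
# SessionClient.request() becomes redundant and can be dropped.
import logging

LOG = logging.getLogger(__name__)


class SessionClient(object):  # stand-in for a per-client session adapter
    def __init__(self, session):
        self.session = session

    def request(self, url, method, **kwargs):
        resp = self.session.request(url, method, **kwargs)
        # Redundant once keystoneauth1 logs the header itself:
        request_id = resp.headers.get('x-openstack-request-id')
        if request_id:
            LOG.debug('%(method)s call to %(url)s used request id '
                      '%(request_id)s',
                      {'method': method, 'url': url,
                       'request_id': request_id})
        return resp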



[openstack-dev] [cinder]Can we run cinder-volume and cinder-backup on a same host?

2017-01-19 Thread Rikimaru Honjo

Hi Cinder devs,

I have a question about cinder.
Can I run cinder-volume and cinder-backup on the same host when using an iSCSI
backend?

I am afraid that iSCSI operations will conflict between cinder-volume and
cinder-backup.
In my understanding, iSCSI operations are serialized within each individual
process, but they could race between processes.

e.g. (Caution: this is just a forecast.)
If cinder-backup executes "multipath -r" while cinder-volume is terminating a
connection, a multipath garbage device may remain unexpectedly.
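To illustrate the difference between per-process and cross-process
serialization, here is a minimal sketch using an oslo.concurrency external
(file-based) lock. Whether cinder/os-brick actually guards these specific
multipath operations this way is exactly the question above; the lock name,
lock path and root helper below are assumptions.

# Sketch only: an external (file-based) lock serializes callers across
# processes on the same host, unlike an in-process lock which only
# serializes threads within one process.
from oslo_concurrency import lockutils
from oslo_concurrency import processutils


@lockutils.synchronized('multipath-operations', external=True,
                        lock_path='/var/lock/cinder')
def rescan_multipath():
    # e.g. what cinder-backup might run while attaching a volume
    processutils.execute('multipath', '-r',
                         root_helper='sudo', run_as_root=True)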
--
Rikimaru Honjo
E-mail:honjo.rikim...@po.ntts.co.jp





Re: [openstack-dev] [networking-sfc] Does SFC support chaining of Layer 2 devices?

2017-01-19 Thread Vikash Kumar
On Thu, Jan 19, 2017 at 12:18 PM, Vikash Kumar <
vikash.ku...@oneconvergence.com> wrote:

> All,
>
>    I am exploring SFC for chaining an IDS device (strictly in L2 mode). As
> of now, it looks like SFC by default supports only L3 devices. The SFC APIs
> don't have any way to specify the nature of the device, and without that it
> seems there is no way an operator can spin up any device/VNF except L3-mode
> VNFs. Am I missing anything here? Can one still spin up an L2 IDS with SFC?
>
>
> --
> Regards,
> Vikash
>



-- 
Regards,
Vikash


[openstack-dev] [Horizon] Weekly wrap-up

2017-01-19 Thread Richard Jones
Hi folks,

We had a relatively brief Horizon meeting this week[1] in which I
announced the (planned) Feature Freeze that took place this week. This
means that all blueprint-related patches are now on hold until after
the Ocata release of Horizon is done. In the meantime, only bug fix
patches and the few patches granted a Feature Freeze Exemption (FFE)
will be merged.

The patches granted FFE should be reviewed with the highest priority
where possible:

Keystone to Keystone Federation Drop Down
  https://review.openstack.org/#/c/408435/
Simple tenant usage pagination
  https://review.openstack.org/#/c/410337/
Add version 2.40 to the supported compute API versions
  https://review.openstack.org/#/c/422642/
Properly compare versions in APIVersionManager
  https://review.openstack.org/#/c/410688/

I will be cutting the milestone 3 release next week; the impact of
that is described on the Ocata release schedule page[2].


 Richard

[1] 
http://eavesdrop.openstack.org/meetings/horizon/2017/horizon.2017-01-18-20.03.html
[2] https://releases.openstack.org/ocata/schedule.html



Re: [openstack-dev] [nova] Do not recheck changes until 422709 is merged

2017-01-19 Thread Matt Riedemann

On 1/19/2017 5:09 PM, Matt Riedemann wrote:

On 1/19/2017 10:56 AM, Matt Riedemann wrote:

The py35 unit test job is broken for Nova until this patch is merged:

https://review.openstack.org/#/c/422709/

So please hold off on the rechecks until that happens.



We're good to go again for rechecks.



Just a heads up that if you still see py35 unit test failures with a
TypeError like this [1] then you need to rebase your patch.


[1] 
http://logs.openstack.org/37/410737/5/check/gate-nova-python35-db/4fec66c/console.html#_2017-01-20_01_56_21_021702


--

Thanks,

Matt Riedemann



Re: [openstack-dev] [keystone] [nova] [glance] [oslo] webob 1.7

2017-01-19 Thread Corey Bryant
On Thu, Jan 19, 2017 at 8:29 PM, Joshua Harlow 
wrote:

> Corey Bryant wrote:
>
>>
>> Added [nova] and [oslo] to the subject.  This is also affecting nova and
>> oslo.middleware.  I know Sean's initial response on the thread was that
>> this shouldn't be a priority for ocata but we're completely blocked by
>> it.  Would those teams be able to prioritize a fix for this?
>>
>>
> Is this the issue for that https://github.com/Pylons/webob/issues/307 ?
>
>
Yes, at least for glance that is part of the issue: the dropping of the
http_method_probably_has_body check.


> If so, then perhaps we need to comment and work together on that and
> introduce a fix into webob? Would that be the correct path here? What
> otherwise would be needed to 'prioritize a fix' for it?
>
>
That doesn't appear to be a bug in webob from what I can see in the issue
307 discussion, just a change of behavior that various projects need to
adapt to if they're going to support webob 1.7.x.
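As an illustration only (not the actual fix in any of these projects), one way
to adapt is to guard body parsing explicitly instead of relying on webob's
removed method-based heuristic:

import webob.exc


def load_json_body(req):
    """Return the parsed JSON body of a webob Request, or None if no body."""
    if not req.content_length:  # covers both None and 0
        return None
    try:
        return req.json_body
    except ValueError:
        raise webob.exc.HTTPBadRequest(explanation='Malformed JSON body')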

-- 
Regards,
Corey


Re: [openstack-dev] [qa] PTL non-candidacy

2017-01-19 Thread zhu.fanglei
Yes, really nice and hard work :)

Because of the time zone difference I don't often have the chance to talk with
Kenichi, but I am impressed by his working hours, which seem to span almost 24
hours :)

http://stackalytics.com/report/users/oomichi


Original Mail

Sender: <ghanshyamm...@gmail.com>
To: <openstack-dev@lists.openstack.org>
Date: 2017/01/20 08:46
Subject: Re: [openstack-dev] [qa] PTL non-candidacy

Thanks Kenichi for all your hardwork and nice leadership.


Its been very good and lot of things we finished under your leadership. You 
managed all the activities in a very cool and polite  way (Japanese style :)).


-gmann




On Fri, Jan 20, 2017 at 4:16 AM, Ken'ichi Ohmichi <ken1ohmi...@gmail.com> wrote:
Hi,
 
 I will step down as PTL after this Ocata cycle.
 I was happy to see new ideas and folks who try making ideas true in
 this 2 cycles.
 Now QA project has a lot of components with many people's effort and
 we help each other as a community.
 This experience is very exciting for me, I am proud to being a member
 in this community.
 
 Today, I'd like to concentrate on coding and reviewing again as a developer.
 I think we have good candidates for a next PTL, and I will keep active
 under the next PTL's leadership.
 
 Thanks for choosing me anyways, let's make OpenStack quality better together 
:-)
 
 Thanks
 Ken Ohmichi
 


Re: [openstack-dev] [all] Improving Vendor Driver Discoverability

2017-01-19 Thread Mike Perez
On 17:38 Jan 18, Morales, Victor wrote:
> Just an FYI, Ankur has been working on a Feature Classification Matrix
> in Neutron [1] which collects some of this information
> 
> [1] https://review.openstack.org/#/c/318192/

I actually didn't know Nova also generated this with a script and an ini file.
Perhaps that would be a better approach than a giant JSON file like the driver
log is today. I could then have the marketplace parse these ini files using the
common script. What do others think?
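For illustration, a minimal sketch of what an ini-driven matrix plus a small
parser could look like; the section/key layout here is hypothetical and not
Nova's actual support-matrix format.

import configparser

SAMPLE = """
[driver.libvirt-kvm]
title = Libvirt KVM

[operation.live-migration]
title = Live migration
driver.libvirt-kvm = complete
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)

# Collect, per operation, which drivers declare support and at what level.
for section in config.sections():
    if section.startswith('operation.'):
        support = {key: value for key, value in config[section].items()
                   if key.startswith('driver.')}
        print(config[section]['title'], support)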

-- 
Mike Perez




Re: [openstack-dev] [keystone] [nova] [glance] [oslo] webob 1.7

2017-01-19 Thread Joshua Harlow

Corey Bryant wrote:


Added [nova] and [oslo] to the subject.  This is also affecting nova and
oslo.middleware.  I know Sean's initial response on the thread was that
this shouldn't be a priority for ocata but we're completely blocked by
it.  Would those teams be able to prioritize a fix for this?



Is this the issue for that https://github.com/Pylons/webob/issues/307 ?

If so, then perhaps we need to comment and work together on that and 
introduce a fix into webob? Would that be the correct path here? What 
otherwise would be needed to 'prioritize a fix' for it?


-Josh



Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-19 Thread Joshua Harlow

Embrace the larger world instead of trying to recreate parts of it,
create alliances with the CNCF and/or other companies


The CNCF isn't a company...


Yes, fair, good point, thanks for the correction.




that are getting actively involved there and make bets that solutions
there are things that people want to use directly (instead of turning
openstack into some kind of 'integration aka, middleware engine').


The complaint about Barbican that I heard from most folks on this thread
was that it added yet another service to deploy to an OpenStack deployment.

If we use technology from the CNCF or elsewhere, we're still going to
end up deploying yet another service. Just like if we want to use
ZooKeeper for group membership instead of the Nova DB.

So, while I applaud the general idea of looking at the CNCF projects as
solutions to some problems, you wouldn't be solving the actual issue
brought to attention by operators and OpenStack project contributors (to
Magnum/Craton): of needing to install yet another dependency.



How many folks have been watching
https://github.com/cncf/toc/tree/master/proposals or
https://github.com/cncf/toc/pulls?


I don't look at that feed, but I do monitor the pull requests for k8s
and some other projects like rkt and ACI/OCI specs.


Great!

Thanks, we have to avoid becoming siloed off into 
'openstack-world/universe'; I'm pretty sure it's the only way we survive.





Start accepting that what we call OpenStack may be better off as
extracting the *current* good parts of OpenStack and cutting off some of
the parts that aren't really worth it/nobody really uses/deploys anyway


I'm curious what you think would be left in OpenStack?


A good question, and one I've been pondering on for a while.

Honestly I'm not sure I could say what would be 'left', especially as 
there is overlap in functionality. But say we are proactive in shifting 
things over (in places where we are actually more advanced or provide 
unique value that the CNCF and its projects lack); then whatever did not 
shift over is what is left in openstack (as it is defined today). 
And what is wrong with that? If openstack becomes openstack-CNCF, so 
what; it feels somewhat evolutionary and I get the gut feeling it's 
happening whether we want it to or not (though of course I only have a 
small view on the wider world).




BTW, the CNCF is already creating projects that duplicate functionality
that's been available for years in other open source projects -- see
prometheus and fluentd [1] -- in the guise of "unifying" things for a
cloud-native world. I suspect that trend will continue as vendors jump
from OpenStack to CNCF projects because they perceive it as the new
shiny thing and capable of accepting their vendor-specific code quicker
than OpenStack.


Perhaps that's an opportunity, not a drawback? If we play the cards 
right and approach this correctly we can help evolve (our community and 
theirs) into something that transfers what we have learned from our 
current community to whatever it becomes next.




In fact, if you look at the CNCF projects, you see the exact same
disagreement about the exact same two areas that we see so much
duplication in the OpenStack community: deployment/installation and
logging/monitoring/metrics. I mean, how many ways are there to deploy
k8s at this point?


Ya, I'm not really happy either with this, but I've seen it before.



The things that the OpenStack ecosystem has proliferated as services or
plugins are the exact same things that the CNCF projects are building
into their architecture. How many service discovery mechanisms can
Prometheus use? How many source and destination backends can fluentd
support? And now with certain vendors trying to get more
hardware-specific functionality added to k8s [2], the k8s community is
going through the exact same inflection point with regards to project
scope that Nova did 4 years ago.

What's old is new and what's new is old again.

 > (and say starting to modernize the parts that are left by say moving
 > them under the CNCF umbrella and adopting some of the technology there
 > instead).

I find it curious that you equate "modernizing" an OpenStack project to
"moving it to the CNCF umbrella". Is the technology you would want to
adopt just simply Golang vs. Python or are you referring to something
else? Perhaps k8s' choice not to use a relational database for any state
storage?

Look, I'm not saying the CNCF projects are bad in any way. I have
*daily* feelings of jealousy when looking at some of the k8s and fluentd
architecture/code and wonder what would Nova look like if we'd started
coding it now from scratch. Almost weekly I wish that Nova would have
the freedom that k8s currently has to change direction mid-stream. But
k8s is also in a different maturity/lifecycle place than Nova is and has
a very different and smaller mission.



I guess I'd call what you are feeling above "modernizing".


My point is this: I don't want people 

Re: [openstack-dev] [qa] PTL non-candidacy

2017-01-19 Thread Andrea Frittoli
Thank you Ken for the great work and for your leadership!

I'm really glad you'll stay in the team and keep contributing.

On Fri, Jan 20, 2017 at 12:41 AM Ghanshyam Mann 
wrote:

> Thanks Kenichi for all your hardwork and nice leadership.
>
> Its been very good and lot of things we finished under your leadership.
> You managed all the activities in a very cool and polite  way (Japanese
> style :)).
>
> ​-gmann
>
> On Fri, Jan 20, 2017 at 4:16 AM, Ken'ichi Ohmichi 
> wrote:
>
> Hi,
>
> I will step down as PTL after this Ocata cycle.
> I was happy to see new ideas and folks who try making ideas true in
> this 2 cycles.
> Now QA project has a lot of components with many people's effort and
> we help each other as a community.
> This experience is very exciting for me, I am proud to being a member
> in this community.
>
> Today, I'd like to concentrate on coding and reviewing again as a
> developer.
> I think we have good candidates for a next PTL, and I will keep active
> under the next PTL's leadership.
>
> Thanks for choosing me anyways, let's make OpenStack quality better
> together :-)
>
> Thanks
> Ken Ohmichi
>


[openstack-dev] [qa] PTL Candidacy for Pike

2017-01-19 Thread Andrea Frittoli
Dear all,

I’d like to announce my candidacy for PTL of the QA Program for the Pike
cycle.

I started working with OpenStack towards the end of 2011. Since 2014 I’ve
been
a core developer for Tempest.
I’ve always aimed for Tempest to be able to run against any OpenStack cloud;
a lot of my contributions to Tempest have been driven by that.
I’ve worked on QA for the OpenStack community, for an OpenStack based public
cloud as well as for an OpenStack distribution.

I believe that quality engineers should develop innovative, high quality
open-source tools and tests.

The OpenStack community has built an amazing set of tools and services to
handle quality engineering at such a large scale.
The number of tests executed, the test infrastructure and amount of test
data
produced can still be difficult to handle.
Complexity can inhibit new contributors as well as existing ones, not only
for
the QA program but for OpenStack in general as well.

If elected, in the Pike cycle I would like to focus on two areas.

- QA team support to the broader OpenStack community
  - Finish the work on Tempest stable interfaces for plugins and support
    existing plugins in the transition
  - Keep an open channel with the broader community when setting priorities

- Promote contribution to the QA program, by:
  - removing cruft from Tempest code
  - making it easier to know “what's going on” when a test job fails
  - focus on tools that help triage and debug gate failures (OpenStack
    Health, Stackviz)
  - leverage the huge amount of test data we produce every day to automate
    as much as possible the failure triage and issue discovery processes

I hold the QA crew in great esteem, and I would be honoured to serve as the
next PTL.

Thank you

Andrea Frittoli (andreaf)


Re: [openstack-dev] [qa] PTL non-candidacy

2017-01-19 Thread Ghanshyam Mann
Thanks Kenichi for all your hard work and nice leadership.

It's been very good, and we finished a lot of things under your leadership. You
managed all the activities in a very cool and polite way (Japanese style
:)).

​-gmann

On Fri, Jan 20, 2017 at 4:16 AM, Ken'ichi Ohmichi 
wrote:

> Hi,
>
> I will step down as PTL after this Ocata cycle.
> I was happy to see new ideas and folks who try making ideas true in
> this 2 cycles.
> Now QA project has a lot of components with many people's effort and
> we help each other as a community.
> This experience is very exciting for me, I am proud to being a member
> in this community.
>
> Today, I'd like to concentrate on coding and reviewing again as a
> developer.
> I think we have good candidates for a next PTL, and I will keep active
> under the next PTL's leadership.
>
> Thanks for choosing me anyways, let's make OpenStack quality better
> together :-)
>
> Thanks
> Ken Ohmichi
>


Re: [openstack-dev] [puppet] Nominating mkarpin for core for the Puppet OpenStack modules

2017-01-19 Thread Emilien Macchi
On Thu, Jan 19, 2017 at 5:25 PM, Alex Schultz  wrote:
> Hey Puppet Cores,
>
> I would like to nominate Mykyta Karpin as a Core reviewer for the
> Puppet OpenStack modules.  He has been providing quality patches and
> reviews for some time now and I believe he would be a good addition to
> the team.  His stats for the last 90 days can be viewed here[0]
>
> Please response with your +1 or any objections. If there are no
> objections by Jan 26, I will add him to the core list.

+1, that's well deserved.
Thanks Mykyta for your contributions! Keep rocking :-)

> Thanks,
> -Alex
>
> [0] http://stackalytics.com/report/contribution/puppet%20openstack-group/90
>



-- 
Emilien Macchi



Re: [openstack-dev] [horizon] feature freeze exception request -- nova simple tenant usages api pagination

2017-01-19 Thread Richard Jones
FFE granted for the three patches. We need to support that nova API change.

On 20 January 2017 at 01:28, Radomir Dopieralski  wrote:
> I would like to request a feature freeze exception for the following patch:
>
> https://review.openstack.org/#/c/410337
>
> This patch adds support for retrieving the simple tenant usages from Nova in
> chunks, and it is necessary for correct data given that the related patches
> have already been merged in Nova. Without it, the data received will be
> truncated.
>
> In order to actually use that patch, however, it is necessary to set the
> Nova API version to at least version 2.40. For this, it's necessary to also
> add this patch:
>
> https://review.openstack.org/422642
>
> However, that patch will not work because of a bug in the VersionManager,
> which for some reason uses floating point numbers for specifying versions and
> thus understands 2.40 as 2.4. To fix that, it is also necessary to merge this
> patch:
>
> https://review.openstack.org/#/c/410688
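A quick sketch of why a float-based version comparison breaks down here, and
the tuple comparison that avoids it (illustrative only, not the actual patch):

# Floats collapse trailing zeros, so 2.40 and 2.4 become the same number
# and 2.40 sorts *before* 2.9:
assert float("2.40") == float("2.4")
assert float("2.40") < float("2.9")

# Comparing (major, minor) integer tuples keeps microversion ordering:
def parse_version(version):
    major, minor = version.split('.')
    return int(major), int(minor)

assert parse_version("2.40") != parse_version("2.4")
assert parse_version("2.40") > parse_version("2.9")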
>
> I would like to request an exception for all those three patches.
>
> An alternative to this would be to finish and merge the microversion
> support, and modify the first patch to make use of it. Then we would need
> exceptions for those two patches.
>


[openstack-dev] [TripleO] Defining Custom Deployment Networks

2017-01-19 Thread Dan Sneddon
I would like to call attention to two patches which Steven Hardy
proposed for Ocata to add the ability to create custom deployment
networks [1] [2]. This would allow the use of networks other than the
built-in 6 networks. These have gotten a little attention, and a couple
of alternative methods were proposed.

I would like to get this hashed out in time for the custom networks to
land in Ocata. This is going to be a dependency for much of the network
development that is planned for Pike, and I think it would be a huge
benefit to users of TripleO who plan to deploy Ocata.

So far there has been a concern raised about where to store the network
data (Mistral, Heat, Swift, ???), and we need some clarification and
discussion on that point. Another concern was raised about using j2 for
the template format. If people could take a moment to look at these
short reviews and chime in, that will help us move toward a consensus
approach.

[1] - https://review.openstack.org/#/c/409920

[2] - https://review.openstack.org/#/c/409921
-- 
Dan Sneddon |  Senior Principal OpenStack Engineer
dsned...@redhat.com |  redhat.com/openstack
dsneddon:irc|  @dxs:twitter



Re: [openstack-dev] (dis)Continuation of Neutron VPNaaS

2017-01-19 Thread Armando M.
On 19 January 2017 at 13:41, Bruno L  wrote:

> Hi,
>
> November last year the Neutron team has announced that VPN as a Service
> will be no longer part of Neutron[1].
>
> We run a public cloud based in New Zealand called Catalyst Cloud[2]. Our
> customers find the VPN service extremely useful to integrate their cloud
> tenant's with on-premise infrastructure or even other clouds. We have
> almost one hundred VPNs that were established by customers using it.
>
> While customers could run a compute instance with something like VyOS,
> they are used to the convenience of having a service managed by us that is
> easy to consume via the APIs or dashboard. It would be a step back for us
> to discontinue VPNaaS.
>
> As a result, we are interested in picking up the development of VPNaaS and
> keeping it alive. If like us, you are an organisation that sees value in
> VPNaaS, please get in touch with me to discuss how we can collaborate on it.
>
> As a first step, we would like to ensure that it continue to pass CI and
> it is free of major bugs. Then, we would like to address some of the points
> raised in the VPNaaS scorecard[3] to bring it up to standard with other
> Neutron services. We don't envisage introducing new features during this
> period, but rather focus on stability and maturity.
>
> Could someone from the Neutron team please help us with the questions
> below?
> 1) What would be the process to transfer ownership of the project?
>

Hi Bruno,

That's great to hear. If you have dev resources who are ready to jump into
Gerrit, please point me to their IRC nicks and Gerrit accounts and I am happy
to engage with them directly. Yamamoto and I still have core rights on the
repo and push fixes on an occasional basis. Once your devs feel
more confident, we can definitely talk about adding them to the
neutron-vpnaas core team.


> 2) Until we bring it up to standard, would we need to maintain it as a
> separate project, or as part of Neutron?
>

My suggestion is to focus on the technical aspect of things before worrying
about the governance change. Those typically can happen only in certain
time windows of the release and with the Ocata release approaching feature
freeze, we definitely need to postpone the governance discussion until Pike
opens up.

Thanks,
Armando (irc:armax)


> Cheers,
> Bruno
>
> [1] http://lists.openstack.org/pipermail/openstack-dev/2016-
> November/107384.html
> [2] http://catalyst.net.nz/catalyst-cloud
> [3] http://specs.openstack.org/openstack/neutron-specs/specs
> /stadium/ocata/neutron-vpnaas.html
>


Re: [openstack-dev] [nova] Feature-related changes that need a final +2 (1/18)

2017-01-19 Thread Matt Riedemann

On 1/18/2017 4:43 PM, Matt Riedemann wrote:

Just to bring this to the awareness of other reviewers, here is a list
of blueprint-related patches that have a +2 and need a final push:

1.
https://blueprints.launchpad.net/nova/+spec/ironic-plug-unplug-vifs-update

https://review.openstack.org/#/c/364413/ - don't miss the bug fix change
below it.

2. https://blueprints.launchpad.net/nova/+spec/ironic-portgroups-support

https://review.openstack.org/#/c/388756/ - builds on the change above.

3.
https://blueprints.launchpad.net/nova/+spec/libvirt-os-vif-fastpath-vhostuser


https://review.openstack.org/#/c/410737/ - simple change, the one after
it is close too.

4.
https://blueprints.launchpad.net/nova/+spec/resource-providers-scheduler-db-filters


https://review.openstack.org/#/c/418134/ - the bottom change is simple.



We also need another core on this change to move resource classes along:

https://review.openstack.org/#/c/398473/

--

Thanks,

Matt Riedemann




Re: [openstack-dev] [nova] Do not recheck changes until 422709 is merged

2017-01-19 Thread Matt Riedemann

On 1/19/2017 10:56 AM, Matt Riedemann wrote:

The py35 unit test job is broken for Nova until this patch is merged:

https://review.openstack.org/#/c/422709/

So please hold off on the rechecks until that happens.



We're good to go again for rechecks.

--

Thanks,

Matt Riedemann




[openstack-dev] [networking-sfc]

2017-01-19 Thread Michael Gale
Hello,

Are there updated install docs for sfc? The only install steps for a
testbed I can find are here and they seem outdated:
https://wiki.openstack.org/wiki/Neutron/ServiceInsertionAndChaining

Also, from the conference videos it seems there are some Horizon menus/screens
available?

Michael


Re: [openstack-dev] [keystone] [nova] [glance] [oslo] webob 1.7

2017-01-19 Thread Corey Bryant
On Thu, Jan 19, 2017 at 11:34 AM, Corey Bryant 
wrote:

>
>
> On Thu, Jan 19, 2017 at 10:46 AM, Ian Cordasco 
> wrote:
>
>> -Original Message-
>> From: Corey Bryant 
>> Reply: OpenStack Development Mailing List (not for usage questions)
>> 
>> Date: January 19, 2017 at 08:52:25
>> To: OpenStack Development Mailing List (not for usage questions)
>> 
>> Subject:  Re: [openstack-dev] [keystone] webob 1.7
>>
>> > On Wed, Jan 18, 2017 at 9:08 AM, Ian Cordasco
>> > wrote:
>> >
>> > > -Original Message-
>> > > From: Chuck Short
>> > > Reply: OpenStack Development Mailing List (not for usage questions)
>> > >
>> > > Date: January 18, 2017 at 08:01:46
>> > > To: OpenStack Development Mailing List
>> > > Subject: [openstack-dev] [keystone] webob 1.7
>> > >
>> > > > Hi
>> > > >
>> > > > We have been expericing problems with newer versions of webob (webob
>> > > 1.7).
>> > > > Reading the changelog, it seems that the upstream developers have
>> > > > introduced some backwards incompatibility with previous versions of
>> webob
>> > > > that seems to be hitting keystone and possibly other projects as
>> well
>> > > > (nova/glance in particular). For keystone this bug has been
>> reported in
>> > > bug
>> > > > #1657452. I would just like to get more developer's eyes on this
>> > > particular
>> > > > issue and possibly get a fix. I suspect its starting to hit other
>> distros
>> > > > as well or already have hit.
>> > >
>> > > Hey Chuck,
>> > >
>> > > This is also affecting Glance
>> > > (https://bugs.launchpad.net/glance/+bug/1657459). I suspect what
>> we'll
>> > > do for now is blacklist the 1.7.x releases in openstack/requirements.
>> > > It seems a bit late in the cycle to bump the minimum version to 1.7.0
>> > > so we can safely fix this without having to deal with
>> > > incompatibilities between versions.
>> > >
>> > > --
>> > > Ian Cordasco
>> > >
>> > >
>> >
>> > Hi Ian,
>> >
>> > Were you suggesting there's a new version of webob in the works that
>> fixes
>> > this so we could bump upper-constraints and blacklist 1.7.x?
>>
>> No. I was suggesting that OpenStack not try to work with the 1.7
>> series of WebOb.
>>
>>
> Ok
>
>
>> > Unfortunately at this point we're at webob 1.7.0 in Ubuntu and there's
>> no
>> > going backward for us. The corresponding bugs were already mentioned in
>> > this thread but worth noting again, these are the bugs tracking this:
>> >
>> > https://bugs.launchpad.net/nova/+bug/1657452
>> > https://bugs.launchpad.net/glance/+bug/1657459
>> >
>> > So far this affects nova, glance, and keystone (David has a patch in
>> review
>> > - https://review.openstack.org/#/c/422234/).
>>
>> I'll have to see if we can get that prioritized for Glance next week
>> as a bug fix candidate post Ocata-3. We decided our priorities for the
>> next week just a short while ago. I'm going to see if we can move it
>> onto this week's list though.
>>
>>
> Thanks, that would be great.
>
>
>
Added [nova] and [oslo] to the subject.  This is also affecting nova and
oslo.middleware.  I know Sean's initial response on the thread was that
this shouldn't be a priority for ocata but we're completely blocked by it.
Would those teams be able to prioritize a fix for this?

-- 
Regards,
Corey


[openstack-dev] [puppet] Nominating mkarpin for core for the Puppet OpenStack modules

2017-01-19 Thread Alex Schultz
Hey Puppet Cores,

I would like to nominate Mykyta Karpin as a Core reviewer for the
Puppet OpenStack modules.  He has been providing quality patches and
reviews for some time now and I believe he would be a good addition to
the team.  His stats for the last 90 days can be viewed here[0]

Please respond with your +1 or any objections. If there are no
objections by Jan 26, I will add him to the core list.

Thanks,
-Alex

[0] http://stackalytics.com/report/contribution/puppet%20openstack-group/90



[openstack-dev] [trove] self-nomination for Trove PTL

2017-01-19 Thread Amrith Kumar
I am writing to submit my candidacy for re-election as the PTL for the
Trove project (Pike cycle). I have been an active technical
contributor to the Trove project since just before the Icehouse
release when Trove was integrated into OpenStack. I have also
contributed code[1] and reviews[2] to some other OpenStack projects,
and have been an active participant in the Stewardship Working Group
[3] (SWG) and a not-so active participant in the Delimiter project.

I believe that in the Pike release we should continue to move forward
with the Trove project and continue to build on the improvements that
we were able to accomplish (some are still work in progress) in the
Ocata release.

- paying down our technical debt, including specifically some long
  standing items that were listed by the TC at the time Trove was
  integrated, improving our testing, and addressing some long standing
  issues with dependencies between the server and the client, and

- making it easier to use Trove by eliminating the trovestack tool and
  instead offering a set of tools that will serve the purposes of end
  users and deployers alike, and

- adding support for new datastores, capabilities and configurations,
  and

- expanding the community, adding new contributors, contributing
  companies, and end users interested in the project, and

- streamlining the API with the implementation of better versioning
  support.

I would also like to take this opportunity to thank all members of the
development community who helped the Trove project during the Ocata
cycle; those who contributed code and reviews to the project as well
as members of the infra, release, stable, oslo, docs, dib, and other
project teams who helped us on innumerable occasions.

Not to pick on them too much, but I'd especially like to thank Davanum
Srinivas (dims), and Doug Hellmann for all their help on the release
team, and for all the things that they've done to make branching so
much easier. My thanks to Joshua Harlow, the PTL for oslo, for his help
and support in getting a particularly gnarly set of issues relating to
oslo_messaging.rpc put to rest. Last but not least, to everyone on
the infra team who helped us with a bunch of changes that helped
considerably in speeding up the Trove check/gate process. The jury is
still out on those changes, but thanks folks for allowing us the
freedom to try the experiment.

Thank you, and I appreciate your support in the election. I have
submitted this candidacy as review [4].

-amrith

[1] http://stackalytics.com/?user_id=amrith=all=commits
[2] http://stackalytics.com/?user_id=amrith=all=marks
[3] https://review.openstack.org/#/c/337895/
[4] https://review.openstack.org/422891


--
Amrith Kumar
GPG: 0x5e48849a9d21a29b





[openstack-dev] (dis)Continuation of Neutron VPNaaS

2017-01-19 Thread Bruno L
Hi,

In November last year, the Neutron team announced that VPN as a Service
will no longer be part of Neutron [1].

We run a public cloud based in New Zealand called Catalyst Cloud [2]. Our
customers find the VPN service extremely useful for integrating their cloud
tenants with on-premise infrastructure or even other clouds. We have
almost one hundred VPNs that were established by customers using it.

While customers could run a compute instance with something like VyOS, they
are used to the convenience of having a service managed by us that is easy
to consume via the APIs or dashboard. It would be a step back for us to
discontinue VPNaaS.

As a result, we are interested in picking up the development of VPNaaS and
keeping it alive. If like us, you are an organisation that sees value in
VPNaaS, please get in touch with me to discuss how we can collaborate on it.

As a first step, we would like to ensure that it continues to pass CI and is
free of major bugs. Then, we would like to address some of the points
raised in the VPNaaS scorecard [3] to bring it up to standard with other
Neutron services. We don't envisage introducing new features during this
period, but rather focusing on stability and maturity.

Could someone from the Neutron team please help us with the questions below?
1) What would be the process to transfer ownership of the project?
2) Until we bring it up to standard, would we need to maintain it as a
separate project, or as part of Neutron?

Cheers,
Bruno

[1]
http://lists.openstack.org/pipermail/openstack-dev/2016-November/107384.html
[2] http://catalyst.net.nz/catalyst-cloud
[3]
http://specs.openstack.org/openstack/neutron-specs/specs/stadium/ocata/neutron-vpnaas.html


Re: [openstack-dev] [all] Ubuntu 14.04 support in Newton and on

2017-01-19 Thread Eric K
Hi Jeremy, thank you for the pointers and the background on Newton!

On 1/19/17, 7:12 AM, "Jeremy Stanley"  wrote:

>On 2017-01-18 15:19:36 -0800 (-0800), Eric K wrote:
>> Hi all, Is there any community-wide policy on how long we strive
>> to maintain compatibility with Ubuntu 14.04? For example by
>> avoiding relying on MySQL 5.7 features. I've had a hard time
>> finding it on openstack.org and ML discussions. Thanks lots!
>
>Years ago the TC (only a few months after they ceased to be the PPB)
>agreed to the following:
>
>OpenStack will target its development efforts to latest
>Ubuntu/Fedora, but will not introduce any changes that would
>make it impossible to run on the latest Ubuntu LTS or latest
>RHEL.
>
>
>http://lists.openstack.org/pipermail/openstack-dev/2012-December/004052.html
>
>http://eavesdrop.openstack.org/meetings/tc/2013/tc.2013-01-08-20.02.log.html#l-7
>
>You can also find it referenced in our requirements documentation:
>
>
>http://docs.openstack.org/developer/requirements/#finding-distro-status
>
>The upshot has basically been that whatever the "latest Ubuntu LTS"
>was at the time the development cycle began is what we use for the
>purposes of testing development leading up to a given release, and
>is subsequently maintained for testing the resulting stable branches
>from that release until our support end-of-life is reached. However,
>the Newton release ended in an unfortunate situation...
>
>During the Newton development cycle, the Infra team decided to
>provide teams a means of gracefully migrating their testing from
>Ubuntu 14.04 LTS to 16.04 LTS with the expectation that it would be
>completed within one cycle. This did not happen in time for the
>release, and so we wound up with some projects testing stable/newton
>on 16.04 while others were testing on 14.04. Obviously we couldn't
>leave things in that state indefinitely or it would risk breaking
>some project dependencies entirely in that branch, so we pushed to
>get any remaining teams to finish uplifting their stable/newton
>testing to 16.04 soon thereafter.
>
>The result is still that upstream OpenStack, from a QA/testing
>perspective, considers stable/newton to "support" Ubuntu 16.04 LTS
>("latest Ubuntu LTS" at the time its development cycle began), and
>stable/mitaka is the last release we "supported" on Ubuntu 14.04
>LTS. Hopefully that is the answer you're seeking?
>-- 
>Jeremy Stanley
>





Re: [openstack-dev] [nova] Order of n-api (placement) and n-sch upgrades for Ocata

2017-01-19 Thread Matt Riedemann
On Thu, Jan 19, 2017 at 2:29 PM, Alex Schultz  wrote:
>
> What are these issues? My original message was to highlight one
> particular deployment type which is completely independent of how
> things get packaged in the traditional sense of the word
> (rpms/deb/tar.gz).  Perhaps it's getting lost in terminology, but
> packaging the software in one way and how it's run can be two separate
> issues.  So what I'd like to know is how is that impacted by whatever
> ordering is necessary, and if there's anyway way not to explicitly
> have special cases that need to be handled by the end user when
> applying updates.  It seems like we all want similar things. I would
> like not to have to do anything different from the install for
> upgrade. Why can't apply configs, restart all services?  Or can I?  I
> seem to be getting mixed messages...
>
>

Sorry for being unclear on the issue. As Jay pointed out, if
nova-scheduler is upgraded before the placement service, the
nova-scheduler service will continue to start and take requests. The
problem is that if the filter scheduler code requests a microversion
of the placement API which isn't available yet, in particular this 1.4
microversion, then scheduling requests will fail, which to the end user
means NoValidHost (the same as if we don't have any compute nodes yet,
or none are available).

So, as Jay also pointed out, if placement and n-sch are upgraded and
restarted at the same time, the window for hitting this is minimal. If
deployment tooling is written to make sure to restart the placement
service *before* nova-scheduler, then there should be no window for
issues.
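As a sketch of how deployment tooling could check for this before restarting
nova-scheduler; the endpoint URL, token handling and exact shape of the
version document here are assumptions, not something Nova ships:

import requests

PLACEMENT_URL = 'http://controller:8778'   # hypothetical placement endpoint


def _as_tuple(version):
    major, minor = version.split('.')
    return int(major), int(minor)


def placement_supports(wanted, token):
    resp = requests.get(PLACEMENT_URL, headers={'X-Auth-Token': token})
    resp.raise_for_status()
    # Assumed layout, e.g. {"versions": [{"id": "v1.0", "min_version": "1.0",
    #                                     "max_version": "1.4"}]}
    max_version = resp.json()['versions'][0]['max_version']
    return _as_tuple(max_version) >= _as_tuple(wanted)


# Deployment tooling could then gate the nova-scheduler restart on
# placement_supports('1.4', token) returning True.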

--

Thanks,

Matt



Re: [openstack-dev] [qa] PTL non-candidacy

2017-01-19 Thread Masayuki Igawa
Thank you for your great effort! I'm very proud of you as a colleague :)

-- Masayuki Igawa

On Fri, Jan 20, 2017 at 6:16 AM, Ken'ichi Ohmichi  wrote:
> Hi,
>
> I will step down as PTL after this Ocata cycle.
> I was happy to see new ideas and folks who try making ideas true in
> this 2 cycles.
> Now QA project has a lot of components with many people's effort and
> we help each other as a community.
> This experience is very exciting for me, I am proud to being a member
> in this community.
>
> Today, I'd like to concentrate on coding and reviewing again as a developer.
> I think we have good candidates for a next PTL, and I will keep active
> under the next PTL's leadership.
>
> Thanks for choosing me anyways, let's make OpenStack quality better together 
> :-)
>
> Thanks
> Ken Ohmichi
>


Re: [openstack-dev] [nova] Order of n-api (placement) and n-sch upgrades for Ocata

2017-01-19 Thread Alex Schultz
On Thu, Jan 19, 2017 at 11:45 AM, Jay Pipes  wrote:
> On 01/19/2017 01:18 PM, Alex Schultz wrote:
>>
>> On Thu, Jan 19, 2017 at 10:34 AM, Jay Pipes  wrote:
>>>
>>> On 01/19/2017 11:25 AM, Alex Schultz wrote:


 On Thu, Jan 19, 2017 at 8:27 AM, Matt Riedemann
  wrote:
>
>
> Sylvain and I were talking about how he's going to work placement
> microversion requests into his filter scheduler patch [1]. He needs to
> make
> requests to the placement API with microversion 1.4 [2] or later for
> resource provider filtering on specific resource classes like VCPU and
> MEMORY_MB.
>
> The question was what happens if microversion 1.4 isn't available in
> the
> placement API, i.e. the nova-scheduler is running Ocata code now but
> the
> placement service is running Newton still.
>
> Our rolling upgrades doc [3] says:
>
> "It is safest to start nova-conductor first and nova-api last."
>
> But since placement is bundled with n-api that would cause issues since
> n-sch now depends on the n-api code.
>
> If you package the placement service separately from the nova-api
> service
> then this is probably not an issue. You can still roll out n-api last
> and
> restart it last (for control services), and just make sure that
> placement
> is
> upgraded before nova-scheduler (we need to be clear about that in [3]).
>
> But do we have any other issues if they are not packaged separately? Is
> it
> possible to install the new code, but still only restart the placement
> service before nova-api? I believe it is, but want to ask this out
> loud.
>

 Forgive me as I haven't looked really in depth, but if the api and
 placement api are both collocated in the same apache instance this is
 not necessarily the simplest thing to achieve.  While, yes it could be
 achieved it will require more manual intervention of custom upgrade
 scripts. To me this is not a good idea. My personal preference (now
 having dealt with multiple N->O nova related acrobatics) is that these
 types of requirements not be made.  We've already run into these
 assumptions for new installs as well specifically in this newer code.
 Why can't we turn all the services on and they properly enter a wait
 state until such conditions are satisfied?
>>>
>>>
>>>
>>> Simply put, because it adds a bunch of conditional, temporary code to the
>>> Nova codebase as a replacement for well-documented upgrade steps.
>>>
>>> Can we do it? Yes. Is it kind of a pain in the ass? Yeah, mostly because
>>> of
>>> the testing requirements.
>>>
>>
>> 
>> You mean understanding how people actually consume your software and
>> handling those cases?  To me this is the fundamental problem if you
>> want software adoption, understand your user.
>
>
> The fact that we have these conversations should indicate that we are
> concerned about users. Nova developers, more than any other OpenStack
> project, has gone out of its way to put smooth upgrade processes as the
> project's highest priority.
>

I understand it may seem like that's the case, but based on my
interactions this cycle the smooth upgrade process hasn't always been
apparent.

> However, deployment/packaging concerns aren't necessarily cloud *user*
> concerns. And I don't mean to sound like I'm brushing off the concerns of
> deployers, but deployers don't necessarily *use* the software we produce
> either. They install/package it/deploy it. It's application developer teams
> that *use* the software.
>

I disagree. When you develop something you have different types of
users. In the case of OpenStack, you are correct that 'cloud users'
are one type of user; 'deployers' and 'operators' are additional
categories. It seems like many times the priorities are shifted toward
the 'cloud users', but for things like Nova some of the functionality
is also about how an operator/deployer can expose a resource to the end
user and what it means to do that. IMHO these considerations for each
user category need to be weighted differently for Nova than for, say,
Horizon, where the focus is probably more on the cloud user category.
'Cloud users' don't use nova-manage; that's a piece of software written
specifically for deployers/operators. So yes, Nova writes software for
both sets of users, and when you get feedback from one of those sets of
users it needs to be taken into consideration.
What I'm attempting to expose is this thought process, because
sometimes it gets lost as people want to expose new awesome features
to the 'cloud user'. But if no one can deploy the update, how can the
'cloud user' use it?

> What we're really talking about here is catering to a request that simply
> doesn't have much real-world impact -- to cloud users *or* to deployers,
> even those using 

[openstack-dev] [neutron] neutron-lib impact: portbindings extension moved to neutron-lib

2017-01-19 Thread Boden Russell
A new version (1.1.0) of neutron-lib was recently released.
Among other things, this release rehomes the neutron portbindings API
extension [1].

A consumption patch to use the rehomed code has been submitted to
neutron [2] and once merged will impact consumers who use portbindings
constants from neutron.
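For consumers the change is typically just an import move along these lines
(a sketch; double-check the exact module path against the neutron-lib 1.1.0
release notes):

# Before (being removed from neutron):
# from neutron.extensions import portbindings

# After, consuming the rehomed definition from neutron-lib >= 1.1.0:
from neutron_lib.api.definitions import portbindings

# Usage stays the same, for example:
vif_details = {portbindings.CAP_PORT_FILTER: True}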

While a patch for each affected project has been submitted [3], I only plan
to shepherd the patches that target stadium projects. For all others
(non-stadium), please encourage your team to help drive the patch through
review.

For more details on consuming neutron-lib, please see [4].


[1] https://review.openstack.org/411960/
[2] https://review.openstack.org/422210/
[3] https://review.openstack.org/#/q/topic:rehome-portbindings-apidef
[4]
https://github.com/openstack/neutron-lib/blob/master/doc/source/contributing.rst#phase-4-consume



[openstack-dev] [keystone] [ptl] PTL candidacy for Pike

2017-01-19 Thread Lance Bragstad
Greetings,

I want to run for keystone PTL to facilitate an environment for others to
grow and make meaningful changes so that we continue to build keystone into
a more stable, scalable and performant project [0].

January marks my fifth anniversary working with OpenStack. In that time
I've had the opportunity to participate in a variety of different roles
from development to deployment. Being exposed to such a fast-paced
open-source project has made profound impacts on how I approach everyday
challenges.

Joining the OpenStack community was a daunting task; there was a staggering
amount of information to absorb. Fortunately, the community was so
welcoming that learning was a huge reward. I feel the community, and the
keystone team in particular, still maintains this camaraderie. This is
something I'd like to continue when serving as PTL.

Over the last few years I have worked on various keystone initiatives. I
co-implemented support for Fernet tokens, which results in keystone being
more scalable and performant. As of the Ocata release, Fernet tokens are
the default token format providing scalability out-of-the-box. This helped
spur an effort I led to refactor keystone's token API to make it simpler
and easier to maintain. I automated the ability to performance test patches
in review against master and publish the delta as a comment on review,
providing reviewers with a performance-related datapoint. Lately I've been
focused on organizing cross-project efforts to address gaps in policy
across OpenStack. Those are only a couple recent examples I'm proud of. I
actively try to take some experience or lesson from every interaction I
have with the community and add it to my repertoire.

As PTL, I would like to continue building an environment that enables and
inspires people to contribute. We still have many goals to work towards,
and it will never be completed by a single person. Building a community
around trust and transparency will yield consistent, measurable results. I
think the keystone community has done a great job of this so far and I want
to accelerate that trend.

I would like to continue improving the overall usability of policy across
OpenStack, which will benefit users and deployers significantly. I will
continue to push for federated identity to be a first class resource. I
believe it should absolutely be a natural extension of keystone for both
deployers and users. I will continue to keep performance at the forefront
of our goals. I will continue to be an advocate for cross-project
communication. I will lead an effort to dedicate one day per week to office
hours, where we triage and attempt to close bugs. This will serve as a
great way to grow our community and keep tabs on our bug queue.

My long-term vision for keystone allows deployers the flexibility to
address real-world use cases across a variety of deployments while
providing consistent user-experience and stability. To do that we're going
to have to solve some hard problems around policy, federation,
upgradability, etc. But, we've solved hard problems before. The following
are a few things I'd like to focus on in Pike:


   - Introduce better granularity for RBAC support using keystone, and
     leading by example
   - Continue improving functional testing
   - Continue making experiences with federation seamless and intuitive
   - Continue to support rolling upgrades
   - Help guide work to implement rolling upgrade testing to achieve the
     rolling upgrade tag
   - Continuing our work from the last few cycles to promote usage of the V3
     API everywhere


Some personal goals of mine as a PTL would be to:


   - Facilitate collaboration by encouraging break out work and sprints
   - Add more communication tools to our toolbox by actively looking for new
     ways to share ideas
   - Ensure our discussions, decisions, and outcomes are easily discoverable
     and thoroughly communicated
   - Build upon the established pattern of having dedicated roles for design
     discussions (i.e. moderator, champion, scribe) to ensure we have
     meaningful, productive discussions that are accurately captured
   - Actively look for opportunities to mentor or collaborate with new and
     existing team members
   - Promote an environment where we can learn from failed attempts and
     iterate to find more robust solutions


Finally I want to say thanks for taking time out of your day to parse this
note. I'm excited to get started on Pike regardless of the election
results. I look forward to seeing you all in Atlanta!


Best Regards,

Lance


[0] https://review.openstack.org/#/c/422805/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qa] PTL non-candidacy

2017-01-19 Thread Ken'ichi Ohmichi
Hi,

I will step down as PTL after this Ocata cycle.
I was happy to see new ideas, and folks working to make those ideas real,
over these two cycles.
The QA project now has a lot of components thanks to many people's effort,
and we help each other as a community.
This experience has been very exciting for me, and I am proud to be a member
of this community.

Today, I'd like to concentrate on coding and reviewing again as a developer.
I think we have good candidates for the next PTL, and I will stay active
under the next PTL's leadership.

Thanks for choosing me anyway; let's make OpenStack quality better together :-)

Thanks
Ken Ohmichi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [release][ptl] Release countdown for week R-4 (Ocata-3 Milestone), Jan 23-27

2017-01-19 Thread Doug Hellmann
We're rapidly approaching the end of the Ocata cycle. There are several
deadlines coming up within the next few weeks, so please review the
schedule and make sure you understand how they affect your teams.

Focus
-----

This week begins Feature Freeze for all milestone-based projects.
No feature patches should be landed after this point. Exceptions
may be granted by the project PTL.

This week is the final release deadline for client libraries. The
deadline for client library releases is Thursday 26 Jan. We do not
grant Feature Freeze Extensions for any libraries, so that is a
hard freeze date. Any feature work that requires updates to client
libraries should be prioritized so it can be completed by that time.

This week starts the Soft StringFreeze. You are no longer allowed
to accept proposed changes containing modifications in user-facing
strings.  Such changes should be rejected by the review team and
postponed until the next series development opens (which should
happen when RC1 is published).

This week starts the Requirements freeze. After the milestone, only
critical requirements and constraints changes will be allowed.
Freezing our requirements list gives packagers downstream an
opportunity to catch up and prepare packages for everything necessary
for distributions of the upcoming release. The requirements remain
frozen until the stable branches are created, with the release
candidates.

Release Tasks
-------------

Prepare final release and branch requests for all client libraries.

Milestone-based projects should ensure that the membership of your
$project-release gerrit groups is up to date with the team who will
finalize the project release. Please coordinate with the release
management team if you have any questions.

This is a good time to review stable branches for unreleased changes
and prepare those releases, too.

Teams should prepare their documentation for completing the
community-wide goal of removing the use of deprecated Oslo libraries.
See https://governance.openstack.org/tc/goals/index.html for details
about how to communicate your status on the goal.

General Notes
-------------

The RC1 target week in R-3 is only 1 week after freeze. This is
different from our usual 2 week freeze period, so please stay on
top of reviews and minimize FFEs accordingly.

We will start the soft string freeze during R-4 (23-27 Jan). See
https://releases.openstack.org/ocata/schedule.html#o-soft-sf for
details

The release team is now publishing the release calendar using ICS.
Subscribe your favorite calendaring software to
https://releases.openstack.org/schedule.ics for automatic updates.

Important Dates
---

Ocata 3 Milestone, with Feature and Requirements Freezes: 26 Jan

Ocata RC1 target: 2 Feb

Ocata Final Release candidate deadline: 16 Feb

Ocata release schedule: http://releases.openstack.org/ocata/schedule.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Order of n-api (placement) and n-sch upgrades for Ocata

2017-01-19 Thread Jay Pipes

On 01/19/2017 12:59 PM, Eoghan Glynn wrote:

I think Alex is suggesting something different than falling back to the
legacy behaviour. The ocata scheduler would still roll forward to basing
its node selection decisions on data provided by the placement API, but
would be tolerant of the 3 different transient cases that are problematic:

 1. placement API momentarily not running yet

 2. placement API already running, but still on the newton micro-version

 3. placement API already running ocata code, but not yet warmed up

IIUC Alex is suggesting that the nova services themselves are tolerant
of those transient conditions during the upgrade, rather than requiring
multiple upgrade toolings to independently force the new ordering
constraint.

On my superficial understanding, case #3 would require a freshly
deployed ocata placement (i.e. when upgraded from a placement-less
newton deployment) to detect that it's being run for the first time
(i.e. no providers reported yet) and return say 503s to the scheduler
queries until enough time has passed for all computes to have reported
in their inventories & allocations.


As mentioned to Alex, I'm totally cool with the scheduler returning 
failures to the end user for some amount of time while the placement API 
service is upgraded (if the deployment tooling upgraded the schedulers 
before the placement API).


What nobody wants to see is the scheduler *die* due to placement API 
version issues or placement API connectivity. The scheduler should 
remain operational/up, but be logging errors continually in this case.
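
To make that expectation concrete, here's a rough sketch of the "log and
carry on" behaviour I mean. This is illustrative only: the helper name, the
direct use of requests and the hard-coded microversion are invented, and the
real code lives behind the scheduler report client.

    import logging

    import requests

    LOG = logging.getLogger(__name__)

    # Microversion the Ocata scheduler wants for resource-class filtering.
    PLACEMENT_API_VERSION = 'placement 1.4'

    def get_filtered_providers(placement_url, resources, session=requests):
        """Ask placement for providers, but never take the scheduler down."""
        try:
            resp = session.get(
                placement_url + '/resource_providers',
                params={'resources': resources},  # e.g. 'VCPU:1,MEMORY_MB:512'
                headers={'OpenStack-API-Version': PLACEMENT_API_VERSION},
                timeout=5)
        except requests.RequestException as exc:
            LOG.error('Placement API unreachable, will retry later: %s', exc)
            return None
        if resp.status_code != 200:
            # e.g. a 400 from a Newton placement that does not understand the
            # "resources" query parameter yet, or a 503 while it warms up.
            LOG.error('Placement returned %d; has it been upgraded to Ocata?',
                      resp.status_code)
            return None
        return resp.json()['resource_providers']

The point is simply that every failure mode maps to "log it, return nothing,
try again on the next request" rather than an unhandled exception.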


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Order of n-api (placement) and n-sch upgrades for Ocata

2017-01-19 Thread Jay Pipes

On 01/19/2017 01:18 PM, Alex Schultz wrote:

On Thu, Jan 19, 2017 at 10:34 AM, Jay Pipes  wrote:

On 01/19/2017 11:25 AM, Alex Schultz wrote:


On Thu, Jan 19, 2017 at 8:27 AM, Matt Riedemann
 wrote:


Sylvain and I were talking about how he's going to work placement
microversion requests into his filter scheduler patch [1]. He needs to
make
requests to the placement API with microversion 1.4 [2] or later for
resource provider filtering on specific resource classes like VCPU and
MEMORY_MB.

The question was what happens if microversion 1.4 isn't available in the
placement API, i.e. the nova-scheduler is running Ocata code now but the
placement service is running Newton still.

Our rolling upgrades doc [3] says:

"It is safest to start nova-conductor first and nova-api last."

But since placement is bundled with n-api that would cause issues since
n-sch now depends on the n-api code.

If you package the placement service separately from the nova-api service
then this is probably not an issue. You can still roll out n-api last and
restart it last (for control services), and just make sure that placement
is
upgraded before nova-scheduler (we need to be clear about that in [3]).

But do we have any other issues if they are not packaged separately? Is
it
possible to install the new code, but still only restart the placement
service before nova-api? I believe it is, but want to ask this out loud.



Forgive me as I haven't looked really in depth, but if the api and
placement api are both collocated in the same apache instance this is
not necessarily the simplest thing to achieve.  While, yes it could be
achieved it will require more manual intervention of custom upgrade
scripts. To me this is not a good idea. My personal preference (now
having dealt with multiple N->O nova related acrobatics) is that these
types of requirements not be made.  We've already run into these
assumptions for new installs as well specifically in this newer code.
Why can't we turn all the services on and they properly enter a wait
state until such conditions are satisfied?



Simply put, because it adds a bunch of conditional, temporary code to the
Nova codebase as a replacement for well-documented upgrade steps.

Can we do it? Yes. Is it kind of a pain in the ass? Yeah, mostly because of
the testing requirements.




You mean understanding how people actually consume your software and
handling those cases?  To me this is the fundamental problem if you
want software adoption, understand your user.


The fact that we have these conversations should indicate that we are 
concerned about users. Nova developers, more than any other OpenStack 
project, has gone out of its way to put smooth upgrade processes as the 
project's highest priority.


However, deployment/packaging concerns aren't necessarily cloud *user* 
concerns. And I don't mean to sound like I'm brushing off the concerns 
of deployers, but deployers don't necessarily *use* the software we 
produce either. They install/package it/deploy it. It's application 
developer teams that *use* the software.


What we're really talking about here is catering to a request that 
simply doesn't have much real-world impact -- to cloud users *or* to 
deployers, even those using continuous delivery mechanisms.


If there is a few seconds of log lines outputting error messages and 
some 400 requests returned from the scheduler while a placement API 
service is upgraded and restarted (again, ONLY if the placement API 
service is upgraded after the scheduler) I'm cool with that. It's really 
not a huge deal to me.


What *would* be a big deal is if any of the following occur:

a) The scheduler dies a horrible death and goes offline
b) Any of the compute nodes failed and went offline
c) Anything regarding the tenant data plane was disrupted

Those are the real concerns for us, and if we have introduced code that 
results in any of the above, we absolutely will prioritize bug fixes ASAP.


But, as far as I know, we have *not* introduce code that would result in 
any of the above.


> Know what you're doing

and the impact on them.


Yeah, sorry, but we absolutely *are* concerned about users. What we're 
not as concerned about is a few seconds of temporary disruption to the 
control plane.


>  I was just raising awareness around how some

people are deploying this stuff because it feels that sometimes folks
just don't know or don't care.


We *do* care, thus this email and the ongoing conversations on IRC.

>  So IMHO adding service startup/restart

ordering requirements is not ideal for the person who has to run your
software because it makes the entire process hard and more complex.


Unless I'm mistaken, this is not *required ordering*. It's recommended 
ordering of service upgrade/restarts in order to minimize/eliminate 
downtime of the control plane, but the scheduler service shouldn't die 
due to these issues. The scheduler should just keep 

Re: [openstack-dev] [infra][mitmstack] initial member of mitmstack groups

2017-01-19 Thread Clark Boylan
On Thu, Jan 19, 2017, at 08:19 AM, Yujun Zhang wrote:
> Hi, Infra team,
> 
> Could you please help add me  as initial member
> in
> mitmstack-core 
> and mitmstack-release
> ? Thank you.

All done.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Order of n-api (placement) and n-sch upgrades for Ocata

2017-01-19 Thread Alex Schultz
On Thu, Jan 19, 2017 at 10:34 AM, Jay Pipes  wrote:
> On 01/19/2017 11:25 AM, Alex Schultz wrote:
>>
>> On Thu, Jan 19, 2017 at 8:27 AM, Matt Riedemann
>>  wrote:
>>>
>>> Sylvain and I were talking about how he's going to work placement
>>> microversion requests into his filter scheduler patch [1]. He needs to
>>> make
>>> requests to the placement API with microversion 1.4 [2] or later for
>>> resource provider filtering on specific resource classes like VCPU and
>>> MEMORY_MB.
>>>
>>> The question was what happens if microversion 1.4 isn't available in the
>>> placement API, i.e. the nova-scheduler is running Ocata code now but the
>>> placement service is running Newton still.
>>>
>>> Our rolling upgrades doc [3] says:
>>>
>>> "It is safest to start nova-conductor first and nova-api last."
>>>
>>> But since placement is bundled with n-api that would cause issues since
>>> n-sch now depends on the n-api code.
>>>
>>> If you package the placement service separately from the nova-api service
>>> then this is probably not an issue. You can still roll out n-api last and
>>> restart it last (for control services), and just make sure that placement
>>> is
>>> upgraded before nova-scheduler (we need to be clear about that in [3]).
>>>
>>> But do we have any other issues if they are not packaged separately? Is
>>> it
>>> possible to install the new code, but still only restart the placement
>>> service before nova-api? I believe it is, but want to ask this out loud.
>>>
>>
>> Forgive me as I haven't looked really in depth, but if the api and
>> placement api are both collocated in the same apache instance this is
>> not necessarily the simplest thing to achieve.  While, yes it could be
>> achieved it will require more manual intervention of custom upgrade
>> scripts. To me this is not a good idea. My personal preference (now
>> having dealt with multiple N->O nova related acrobatics) is that these
>> types of requirements not be made.  We've already run into these
>> assumptions for new installs as well specifically in this newer code.
>> Why can't we turn all the services on and they properly enter a wait
>> state until such conditions are satisfied?
>
>
> Simply put, because it adds a bunch of conditional, temporary code to the
> Nova codebase as a replacement for well-documented upgrade steps.
>
> Can we do it? Yes. Is it kind of a pain in the ass? Yeah, mostly because of
> the testing requirements.
>


You mean understanding how people actually consume your software and
handling those cases?  To me this is the fundamental problem if you
want software adoption, understand your user. Know what you're doing
and the impact on them.  I was just raising awareness around how some
people are deploying this stuff because it feels that sometimes folks
just don't know or don't care.  So IMHO adding service startup/restart
ordering requirements is not ideal for the person who has to run your
software because it makes the entire process hard and more complex.
Why use this when I can just buy a product that does this for me and
handles these types of cases?  We're not all containers yet which
might alleviate some of this but as there was a push for the placement
service specifically to be in a shared vhost, this recommended
deployment method introduces these kind of complexities. It's not
something that just affects me.  Squeaky wheel gets the hose, I mean
grease.


> But meh, I can whip up an amendment to Sylvain's patch that would add the
> self-healing/fallback to legacy behaviour if this is what the operator
> community insists on.
>
> I think Matt generally has been in the "push forward" camp because we're
> tired of delaying improvements to Nova because of some terror that we may
> cause some deployer somewhere to restart their controller services in a
> particular order in order to minimize any downtime of the control plane.
>
> For the distributed compute nodes, I totally understand the need to tolerate
> long rolling upgrade windows. For controller nodes/services, what we're
> talking about here is adding code into Nova scheduler to deal with what in
> 99% of cases will be something that isn't even noticed because the upgrade
> tooling will be restarting all these nodes at almost the same time and the
> momentary failures that might be logged on the scheduler (400s returned from
> the placement API due to using an unknown parameter in a GET request) will
> only exist for a second or two as the upgrade completes.

So in our case they will get (re)started at the same time. If that's
not a problem, great.  I've seen services in the past where it's been
a problem when a service actually won't start because the dependent
service is not up yet. That's what I wanted to make sure is not the
case here.  So if we have documented assurance that restarting both at
the same time won't cause any problems or the interaction is that the
api service won't be 'up' until the 

Re: [openstack-dev] [nova] Order of n-api (placement) and n-sch upgrades for Ocata

2017-01-19 Thread Eoghan Glynn

> >> Sylvain and I were talking about how he's going to work placement
> >> microversion requests into his filter scheduler patch [1]. He needs to
> >> make
> >> requests to the placement API with microversion 1.4 [2] or later for
> >> resource provider filtering on specific resource classes like VCPU and
> >> MEMORY_MB.
> >>
> >> The question was what happens if microversion 1.4 isn't available in the
> >> placement API, i.e. the nova-scheduler is running Ocata code now but the
> >> placement service is running Newton still.
> >>
> >> Our rolling upgrades doc [3] says:
> >>
> >> "It is safest to start nova-conductor first and nova-api last."
> >>
> >> But since placement is bundled with n-api that would cause issues since
> >> n-sch now depends on the n-api code.
> >>
> >> If you package the placement service separately from the nova-api service
> >> then this is probably not an issue. You can still roll out n-api last and
> >> restart it last (for control services), and just make sure that placement
> >> is
> >> upgraded before nova-scheduler (we need to be clear about that in [3]).
> >>
> >> But do we have any other issues if they are not packaged separately? Is it
> >> possible to install the new code, but still only restart the placement
> >> service before nova-api? I believe it is, but want to ask this out loud.
> >>
> >
> > Forgive me as I haven't looked really in depth, but if the api and
> > placement api are both collocated in the same apache instance this is
> > not necessarily the simplest thing to achieve.  While, yes it could be
> > achieved it will require more manual intervention of custom upgrade
> > scripts. To me this is not a good idea. My personal preference (now
> > having dealt with multiple N->O nova related acrobatics) is that these
> > types of requirements not be made.  We've already run into these
> > assumptions for new installs as well specifically in this newer code.
> > Why can't we turn all the services on and they properly enter a wait
> > state until such conditions are satisfied?
> 
> Simply put, because it adds a bunch of conditional, temporary code to
> the Nova codebase as a replacement for well-documented upgrade steps.
> 
> Can we do it? Yes. Is it kind of a pain in the ass? Yeah, mostly because
> of the testing requirements.
> 
> But meh, I can whip up an amendment to Sylvain's patch that would add
> the self-healing/fallback to legacy behaviour if this is what the
> operator community insists on.

I think Alex is suggesting something different than falling back to the
legacy behaviour. The ocata scheduler would still roll forward to basing
its node selection decisions on data provided by the placement API, but
would be tolerant of the 3 different transient cases that are problematic:

 1. placement API momentarily not running yet

 2. placement API already running, but still on the newton micro-version

 3. placement API already running ocata code, but not yet warmed up

IIUC Alex is suggesting that the nova services themselves are tolerant
of those transient conditions during the upgrade, rather than requiring
multiple upgrade toolings to independently force the new ordering
constraint.

On my superficial understanding, case #3 would require a freshly
deployed ocata placement (i.e. when upgraded from a placement-less
newton deployment) to detect that it's being run for the first time
(i.e. no providers reported yet) and return say 503s to the scheduler
queries until enough time has passed for all computes to have reported
in their inventories & allocations.
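
Purely as a sketch of that idea (the WSGI wrapping, the path check and the
threshold below are all hypothetical, not anything in the placement tree):

    def warmup_guard(app, count_providers, retry_after=30):
        """Wrap a WSGI app and answer 503 while no providers exist yet."""
        def middleware(environ, start_response):
            if (environ.get('PATH_INFO', '').startswith('/resource_providers')
                    and count_providers() == 0):
                # No compute has reported in since the upgrade: tell the
                # scheduler to back off and retry rather than erroring out.
                start_response('503 Service Unavailable',
                               [('Retry-After', str(retry_after)),
                                ('Content-Type', 'text/plain')])
                return [b'placement has no resource providers yet, retry later']
            return app(environ, start_response)
        return middleware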

Cheers,
Eoghan 

 
> I think Matt generally has been in the "push forward" camp because we're
> tired of delaying improvements to Nova because of some terror that we
> may cause some deployer somewhere to restart their controller services
> in a particular order in order to minimize any downtime of the control
> plane.
> 
> For the distributed compute nodes, I totally understand the need to
> tolerate long rolling upgrade windows. For controller nodes/services,
> what we're talking about here is adding code into Nova scheduler to deal
> with what in 99% of cases will be something that isn't even noticed
> because the upgrade tooling will be restarting all these nodes at almost
> the same time and the momentary failures that might be logged on the
> scheduler (400s returned from the placement API due to using an unknown
> parameter in a GET request) will only exist for a second or two as the
> upgrade completes.
> 
> So, yeah, a lot of work and testing for very little real-world benefit,
> which is why a number of us just want to move forward...
> 
> Best,
> -jay
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 


Re: [openstack-dev] [nova] Order of n-api (placement) and n-sch upgrades for Ocata

2017-01-19 Thread Jay Pipes

On 01/19/2017 11:25 AM, Alex Schultz wrote:

On Thu, Jan 19, 2017 at 8:27 AM, Matt Riedemann
 wrote:

Sylvain and I were talking about how he's going to work placement
microversion requests into his filter scheduler patch [1]. He needs to make
requests to the placement API with microversion 1.4 [2] or later for
resource provider filtering on specific resource classes like VCPU and
MEMORY_MB.

The question was what happens if microversion 1.4 isn't available in the
placement API, i.e. the nova-scheduler is running Ocata code now but the
placement service is running Newton still.

Our rolling upgrades doc [3] says:

"It is safest to start nova-conductor first and nova-api last."

But since placement is bundled with n-api that would cause issues since
n-sch now depends on the n-api code.

If you package the placement service separately from the nova-api service
then this is probably not an issue. You can still roll out n-api last and
restart it last (for control services), and just make sure that placement is
upgraded before nova-scheduler (we need to be clear about that in [3]).

But do we have any other issues if they are not packaged separately? Is it
possible to install the new code, but still only restart the placement
service before nova-api? I believe it is, but want to ask this out loud.



Forgive me as I haven't looked really in depth, but if the api and
placement api are both collocated in the same apache instance this is
not necessarily the simplest thing to achieve.  While, yes it could be
achieved it will require more manual intervention of custom upgrade
scripts. To me this is not a good idea. My personal preference (now
having dealt with multiple N->O nova related acrobatics) is that these
types of requirements not be made.  We've already run into these
assumptions for new installs as well specifically in this newer code.
Why can't we turn all the services on and they properly enter a wait
state until such conditions are satisfied?


Simply put, because it adds a bunch of conditional, temporary code to 
the Nova codebase as a replacement for well-documented upgrade steps.


Can we do it? Yes. Is it kind of a pain in the ass? Yeah, mostly because 
of the testing requirements.


But meh, I can whip up an amendment to Sylvain's patch that would add 
the self-healing/fallback to legacy behaviour if this is what the 
operator community insists on.
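
For the record, the fallback being asked for would look roughly like the
sketch below; the helper names are invented and the real change would sit in
the scheduler host manager, but it captures the shape of it:

    import logging

    from nova import objects  # assumes this runs inside nova-scheduler

    LOG = logging.getLogger(__name__)

    def get_candidate_compute_nodes(ctxt, placement_client, resources):
        """Prefer placement-filtered providers, else the legacy DB scan."""
        providers = placement_client.get_filtered_providers(resources)
        if providers is not None:
            return providers
        # Placement is unreachable or still Newton (no 1.4 microversion):
        # fall back to the pre-Ocata behaviour of weighing every compute node.
        LOG.warning('Falling back to ComputeNodeList.get_all(); placement '
                    'is not serving microversion 1.4 yet')
        return objects.ComputeNodeList.get_all(ctxt)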


I think Matt generally has been in the "push forward" camp because we're 
tired of delaying improvements to Nova because of some terror that we 
may cause some deployer somewhere to restart their controller services 
in a particular order in order to minimize any downtime of the control 
plane.


For the distributed compute nodes, I totally understand the need to 
tolerate long rolling upgrade windows. For controller nodes/services, 
what we're talking about here is adding code into Nova scheduler to deal 
with what in 99% of cases will be something that isn't even noticed 
because the upgrade tooling will be restarting all these nodes at almost 
the same time and the momentary failures that might be logged on the 
scheduler (400s returned from the placement API due to using an unknown 
parameter in a GET request) will only exist for a second or two as the 
upgrade completes.


So, yeah, a lot of work and testing for very little real-world benefit, 
which is why a number of us just want to move forward...


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Subject: [all][api] POST /api-wg/news

2017-01-19 Thread Ed Leafe
Greetings OpenStack community,

Today's meeting [0] was a relatively quiet one; I attribute it to cdent's 
absence. :)

Most of the meeting was concerned with reviews of existing issues, and those 
didn't generate much discussion. There was some discussion of cdent's response 
to the OpenStack Technical Committee's discussion on "Updating 
stability/compatibility guidelines". Chris created a patch "[WIP] Refactor and 
re-validate api change guidelines" at https://review.openstack.org/#/c/421846, 
and posted an email describing his take on the issue: 
http://lists.openstack.org/pipermail/openstack-dev/2017-January/110384.html. 
All are encouraged to comment on the review. In Chris's words: "I wanted to be 
sure that the guideline got a rewrite to express that goal zero is to make 
users happy and that trumps everything"

Lots of other guidelines in progress and awaiting your feedback if you have 
time to give it (see below).

# Newly Published Guidelines

* Accurate status code vs. Backwards compatibility
  https://review.openstack.org/#/c/422264/

* fix no sample file in browser
  https://review.openstack.org/#/c/421084/

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

* Add guidelines on usage of state vs. status
  https://review.openstack.org/#/c/411528/

* Clarify the status values in versions
  https://review.openstack.org/#/c/411849/

* Add guideline for invalid query parameters
  https://review.openstack.org/417441

# Guidelines Currently Under Review [3]

* Add guidelines for boolean names
  https://review.openstack.org/#/c/411529/

* Define pagination guidelines
  https://review.openstack.org/#/c/390973/

* Add API capabilities discovery guideline
  https://review.openstack.org/#/c/386555/

# Highlighting your API impacting issues

If you seek further review and insight from the API WG, please address your 
concerns in an email to the OpenStack developer mailing list[1] with the tag 
"[api]" in the subject. In your email, you should include any relevant reviews, 
links, and comments to help guide the discussion of the specific challenge you 
are facing.

To learn more about the API WG mission and the work we do, see OpenStack API 
Working Group [2].

Thanks for reading and see you next week!

# References

[0] 
http://eavesdrop.openstack.org/meetings/api_wg/2017/api_wg.2017-01-19-16.00.log.html
[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z

Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_wg/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg


-- Ed Leafe






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] CI Squad Meeting Summary (week 4)

2017-01-19 Thread Attila Darazs
Everybody interested in the TripleO CI and Quickstart is welcome to join 
the weekly meeting:


Time: Thursdays, 15:30-16:30 UTC
Place: https://bluejeans.com/4113567798/

Here's this week's summary:

* There aren't any blockers or bottlenecks slowing down the transition 
to the Quickstart based CI. We're right on track.


* The Quickstart OVB jobs are running stably. Yesterday they broke due 
to a tripleo-ci change, but Sagi fixed them today.


* The Quickstart multinode nodepool job is also working well. It's a 
good basis for extending our feature coverage.


* The ovb-ha-oooq-nv and nonha-multinode-oooq-nv jobs are moving into the 
check-tripleo queue to make sure we catch any change that breaks these 
new jobs[1].


* A few Quickstart log collection usability improvements are on the way: 
soon all the text-based logs are going to be renamed to end in txt.gz, 
making them browsable from the log servers[2]. Also, the log collection 
output will go to a log file instead of being dumped on the console.


* We are trying to reduce the number of unnecessary OVB jobs by limiting 
the files we trigger on, but openstack-infra doesn't like our current 
approach[3]. We brainstormed about alternative solutions (see the 
meeting minutes for details).


* Ben Kero proposed a PTG CI session about the CI moving to use 
Quickstart. Emilien is suggesting to create a second one regarding 
reusing the scenario jobs for container tests.


* There's a draft of the "pre-flight check list"[4] for the CI transition, 
made by Gabrielle, to make sure the Quickstart-based jobs will have the 
same or better coverage than the current CI system.


* We are going to have a design session about the handling of the config 
files for these new jobs on Wednesday the 25th, 15:00 UTC.


The full meeting minutes are here: 
https://etherpad.openstack.org/p/tripleo-ci-squad-meeting


Best regards,
Attila

[1] https://review.openstack.org/422646
[2] https://review.openstack.org/422638
[3] https://review.openstack.org/421525
[4] https://etherpad.openstack.org/p/oooq-tripleo-ci-check-list

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Do not recheck changes until 422709 is merged

2017-01-19 Thread Matt Riedemann

The py35 unit test job is broken for Nova until this patch is merged:

https://review.openstack.org/#/c/422709/

So please hold off on the rechecks until that happens.

--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Order of n-api (placement) and n-sch upgrades for Ocata

2017-01-19 Thread Sylvain Bauza


Le 19/01/2017 17:00, Matt Riedemann a écrit :
> On 1/19/2017 9:43 AM, Sylvain Bauza wrote:
>>
>>
>> Le 19/01/2017 16:27, Matt Riedemann a écrit :
>>> Sylvain and I were talking about how he's going to work placement
>>> microversion requests into his filter scheduler patch [1]. He needs to
>>> make requests to the placement API with microversion 1.4 [2] or later
>>> for resource provider filtering on specific resource classes like VCPU
>>> and MEMORY_MB.
>>>
>>> The question was what happens if microversion 1.4 isn't available in the
>>> placement API, i.e. the nova-scheduler is running Ocata code now but the
>>> placement service is running Newton still.
>>>
>>> Our rolling upgrades doc [3] says:
>>>
>>> "It is safest to start nova-conductor first and nova-api last."
>>>
>>> But since placement is bundled with n-api that would cause issues since
>>> n-sch now depends on the n-api code.
>>>
>>> If you package the placement service separately from the nova-api
>>> service then this is probably not an issue. You can still roll out n-api
>>> last and restart it last (for control services), and just make sure that
>>> placement is upgraded before nova-scheduler (we need to be clear about
>>> that in [3]).
>>>
>>> But do we have any other issues if they are not packaged separately? Is
>>> it possible to install the new code, but still only restart the
>>> placement service before nova-api? I believe it is, but want to ask this
>>> out loud.
>>>
>>> I think we're probably OK here but I wanted to ask this out loud and
>>> make sure everyone is aware and can think about this as we're a week
>>> from feature freeze. We also need to look into devstack/grenade because
>>> I'm fairly certain that we upgrade n-sch *before* placement in a grenade
>>> run which will make any issues here very obvious in [1].
>>>
>>> [1] https://review.openstack.org/#/c/417961/
>>> [2]
>>> http://docs.openstack.org/developer/nova/placement.html#filter-resource-providers-having-requested-resource-capacity
>>>
>>>
>>> [3]
>>> http://docs.openstack.org/developer/nova/upgrade.html#rolling-upgrade-process
>>>
>>>
>>>
>>
>> I thought out loud in the nova channel at the following possibility :
>> since we always ask to upgrade n-cpus *AFTER* upgrading our other
>> services, we could imagine to allow the nova-scheduler gently accept to
>> have a placement service be Newton *UNLESS* you have Ocata computes.
>>
>> In other technical words, the scheduler getting a response from the
>> placement service is an hard requirement for Ocata. That said, if the
>> response code is a 400 with a message saying that the schema is
>> incorrect, it would be checking the max version of all the computes and
>> then :
>>  - either the max version is Newton and then call back the
>> ComputeNodeList.get_all() for getting the list of nodes
>>  - or, the max version is Ocata (at least one node is upgraded), and
>> then we would throw a NoValidHosts
>>
>> That way, the upgrade path would be :
>>  1/ upgrade your conductor
>>  2/ upgrade all your other services but n-cpus (we could upgrade and
>> restart n-sch before n-api, that would still work, or the contrary would
>> be fine too)
>>  3/ rolling upgrade your n-cpus
>>
>> I think we would keep then the existing upgrade path and we would still
>> have the placement service be mandatory for Ocata.
>>
>> Thoughts ?
>> -Sylvain
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> 
> I don't like basing the n-sch decision on the service version of the
> computes, because the computes will keep trying to connect to the
> placement service until it's available, but not fail. That doesn't
> really mean that placement is new enough for the scheduler to use the
> 1.4 microversion.
> 
> So IMO we either charge forward as planned and make it clear in the docs
> that for Ocata, the placement service must be upgraded *before*
> nova-scheduler, or we punt and provide a fallback to just pulling all
> compute nodes from the database if we can't make the 1.4 request to
> placement. Given my original post here, I'd prefer to charge forward
> unless it becomes clear that is not going to work, or is at least going
> to be very painful.
> 

Given the very short term for the cycle-trailing projects [1] deadline,
which is R+2 [2], charging forward and asking them to modify their
deployments would have to happen within the next 3 weeks (even less given
that we haven't yet agreed and haven't yet provided the documentation).
That looks like a very short time for them, and a fire drill.

I'd prefer that we instead accept the placement service still being Newton.
If you don't agree with verifying the compute node versions, why not maybe
just accept falling back to calling the database

Re: [openstack-dev] [openstack-ansible] Ocata deployed on CentOS 7!

2017-01-19 Thread Hao Ran HR Hu
Thanks for sharing this. Great news!
Hao Ran Hu (Vern)
Software Developer
Phone: 86-21-60928179
E-mail: huhao...@cn.ibm.com
 
 
- Original message -
From: Adam Heczko 
To: "OpenStack Development Mailing List (not for usage questions)" 
Cc:
Subject: Re: [openstack-dev] [openstack-ansible] Ocata deployed on CentOS 7!
Date: Fri, Jan 20, 2017 12:44 AM

Major, thanks for sharing this!

On Thu, Jan 19, 2017 at 5:24 PM, Major Hayden  wrote:

> On 01/19/2017 10:04 AM, Adam Heczko wrote:
> > BTW are you implying that Ubuntu LTS is unstable or not stable enough to
> > run OpenStack?
> > I think that it would be valuable if you could share more details in this
> > regard, point to Ubuntu specific bugs etc.
>
> Hey Adam,
>
> One of the bigger issues (as Ian noted) is a performance regression[0]
> that seems to impact Ansible[1] heavily. That one is being worked now.
>
> I have a scratch sheet of some things that are broken in 16.04.1 that I
> still need to open bugs for:
>
>   * Xenial installer fails if server is UEFI capable, but
>     the installer is run in legacy mode
>
>   * 14.04 to 16.04 upgrades on UEFI capable servers fail if
>     14.04 was installed in legacy/BIOS mode
>
>   * systemd-networkd 229 has a bug where bridges can't have a
>     VLAN interface attached
>
>   * Kernel panics on Dell PowerEdge R710 when the server is fairly
>     loaded with LXC containers
>
> I'm still working on reducing some of these bugs down into something
> tangible but I hope to do that soon.
>
> [0] https://bugs.launchpad.net/ubuntu/+source/python2.7/+bug/1638695
> [1] https://bugs.launchpad.net/openstack-ansible/+bug/1637494
>
> --
> Major Hayden

--
Adam Heczko
Security Engineer @ Mirantis Inc.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Ocata deployed on CentOS 7!

2017-01-19 Thread Adam Heczko
Major, thanks for sharing this!


On Thu, Jan 19, 2017 at 5:24 PM, Major Hayden  wrote:

> On 01/19/2017 10:04 AM, Adam Heczko wrote:
> > BTW are you implying that Ubuntu LTS is unstable or not stable enough to
> run OpenStack?
> > I think that it would be valuable if you could share more details in
> this regard, point to Ubuntu specific bugs etc.
>
> Hey Adam,
>
> One of the bigger issues (as Ian noted) is a performance regression[0]
> that seems to impact Ansible[1] heavily. That one is being worked now.
>
> I have a scratch sheet of some things that are broken in 16.04.1 that I
> still need to open bugs for:
>
>   * Xenial installer fails if server is UEFI capable, but
> the installer is run in legacy mode
>
>   * 14.04 to 16.04 upgrades on UEFI capable servers fail if
> 14.04 was installed in legacy/BIOS mode
>
>   * systemd-networkd 229 has a bug where bridges can't have a
> VLAN interface attached
>
>   * Kernel panics on Dell PowerEdge R710 when the server is fairly
> loaded with LXC containers
>
> I'm still working on reducing some of these bugs down into something
> tangible but I hope to do that soon.
>
> [0] https://bugs.launchpad.net/ubuntu/+source/python2.7/+bug/1638695
> [1] https://bugs.launchpad.net/openstack-ansible/+bug/1637494
>
> --
> Major Hayden
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Adam Heczko
Security Engineer @ Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] webob 1.7

2017-01-19 Thread Corey Bryant
On Thu, Jan 19, 2017 at 10:46 AM, Ian Cordasco 
wrote:

> -Original Message-
> From: Corey Bryant 
> Reply: OpenStack Development Mailing List (not for usage questions)
> 
> Date: January 19, 2017 at 08:52:25
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject:  Re: [openstack-dev] [keystone] webob 1.7
>
> > On Wed, Jan 18, 2017 at 9:08 AM, Ian Cordasco
> > wrote:
> >
> > > -Original Message-
> > > From: Chuck Short
> > > Reply: OpenStack Development Mailing List (not for usage questions)
> > >
> > > Date: January 18, 2017 at 08:01:46
> > > To: OpenStack Development Mailing List
> > > Subject: [openstack-dev] [keystone] webob 1.7
> > >
> > > > Hi
> > > >
> > > > We have been expericing problems with newer versions of webob (webob
> > > 1.7).
> > > > Reading the changelog, it seems that the upstream developers have
> > > > introduced some backwards incompatibility with previous versions of
> webob
> > > > that seems to be hitting keystone and possibly other projects as well
> > > > (nova/glance in particular). For keystone this bug has been reported
> in
> > > bug
> > > > #1657452. I would just like to get more developer's eyes on this
> > > particular
> > > > issue and possibly get a fix. I suspect its starting to hit other
> distros
> > > > as well or already have hit.
> > >
> > > Hey Chuck,
> > >
> > > This is also affecting Glance
> > > (https://bugs.launchpad.net/glance/+bug/1657459). I suspect what we'll
> > > do for now is blacklist the 1.7.x releases in openstack/requirements.
> > > It seems a bit late in the cycle to bump the minimum version to 1.7.0
> > > so we can safely fix this without having to deal with
> > > incompatibilities between versions.
> > >
> > > --
> > > Ian Cordasco
> > >
> > > 
> __
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >
> > Hi Ian,
> >
> > Were you suggesting there's a new version of webob in the works that
> fixes
> > this so we could bump upper-constraints and blacklist 1.7.x?
>
> No. I was suggesting that OpenStack not try to work with the 1.7
> series of WebOb.
>
>
Ok


> > Unfortunately at this point we're at webob 1.7.0 in Ubuntu and there's no
> > going backward for us. The corresponding bugs were already mentioned in
> > this thread but worth noting again, these are the bugs tracking this:
> >
> > https://bugs.launchpad.net/nova/+bug/1657452
> > https://bugs.launchpad.net/glance/+bug/1657459
> >
> > So far this affects nova, glance, and keystone (David has a patch in
> review
> > - https://review.openstack.org/#/c/422234/).
>
> I'll have to see if we can get that prioritized for Glance next week
> as a bug fix candidate post Ocata-3. We decided our priorities for the
> next week just a short while ago. I'm going to see if we can move it
> onto this week's list though.
>
>
Thanks, that would be great.
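
For reference, if the 1.7 series does get blacklisted in
openstack/requirements, I'd expect it to look roughly like the lines below.
The floor and the pin are illustrative only, not what is in the repo today:

    # global-requirements.txt (sketch)
    WebOb>=1.2.3,!=1.7.0  # MIT

    # upper-constraints.txt (sketch): stay on the last known-good 1.6.x
    WebOb===1.6.2

That obviously doesn't help distros like Ubuntu that have already moved to
1.7.0, which is why fixing the incompatibility in the projects is still the
better outcome.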

-- 
Regards,
Corey
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Ocata deployed on CentOS 7!

2017-01-19 Thread Major Hayden
On 01/19/2017 10:19 AM, Ian Cordasco wrote:
>> I believe this is more about supporting folks who want to run on
>> Centos/RHEL, rather than a step to removing Ubuntu support.
> That's also correct. OpenStack-Ansible is attempting to support
> multiple distros at the same time. =)

Correct!  The Ocata release of OpenStack-Ansible will certainly support Ubuntu 
16.04 as the primary OS, but there is a subset of us who are trying to get it 
working well on CentOS 7 as well. ;)

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Order of n-api (placement) and n-sch upgrades for Ocata

2017-01-19 Thread Alex Schultz
On Thu, Jan 19, 2017 at 8:27 AM, Matt Riedemann
 wrote:
> Sylvain and I were talking about how he's going to work placement
> microversion requests into his filter scheduler patch [1]. He needs to make
> requests to the placement API with microversion 1.4 [2] or later for
> resource provider filtering on specific resource classes like VCPU and
> MEMORY_MB.
>
> The question was what happens if microversion 1.4 isn't available in the
> placement API, i.e. the nova-scheduler is running Ocata code now but the
> placement service is running Newton still.
>
> Our rolling upgrades doc [3] says:
>
> "It is safest to start nova-conductor first and nova-api last."
>
> But since placement is bundled with n-api that would cause issues since
> n-sch now depends on the n-api code.
>
> If you package the placement service separately from the nova-api service
> then this is probably not an issue. You can still roll out n-api last and
> restart it last (for control services), and just make sure that placement is
> upgraded before nova-scheduler (we need to be clear about that in [3]).
>
> But do we have any other issues if they are not packaged separately? Is it
> possible to install the new code, but still only restart the placement
> service before nova-api? I believe it is, but want to ask this out loud.
>

Forgive me as I haven't looked really in depth, but if the api and
placement api are both collocated in the same apache instance this is
not necessarily the simplest thing to achieve.  While, yes it could be
achieved it will require more manual intervention of custom upgrade
scripts. To me this is not a good idea. My personal preference (now
having dealt with multiple N->O nova related acrobatics) is that these
types of requirements not be made.  We've already run into these
assumptions for new installs as well specifically in this newer code.
Why can't we turn all the services on and they properly enter a wait
state until such conditions are satisfied?

Thanks,
-Alex

> I think we're probably OK here but I wanted to ask this out loud and make
> sure everyone is aware and can think about this as we're a week from feature
> freeze. We also need to look into devstack/grenade because I'm fairly
> certain that we upgrade n-sch *before* placement in a grenade run which will
> make any issues here very obvious in [1].
>
> [1] https://review.openstack.org/#/c/417961/
> [2]
> http://docs.openstack.org/developer/nova/placement.html#filter-resource-providers-having-requested-resource-capacity
> [3]
> http://docs.openstack.org/developer/nova/upgrade.html#rolling-upgrade-process
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Ocata deployed on CentOS 7!

2017-01-19 Thread Major Hayden
On 01/19/2017 10:04 AM, Adam Heczko wrote:
> BTW are you implying that Ubuntu LTS is unstable or not stable enough to run 
> OpenStack?
> I think that it would be valuable if you could share more details in this 
> regard, point to Ubuntu specific bugs etc.

Hey Adam,

One of the bigger issues (as Ian noted) is a performance regression[0] that 
seems to impact Ansible[1] heavily. That one is being worked now.

I have a scratch sheet of some things that are broken in 16.04.1 that I still 
need to open bugs for:

  * Xenial installer fails if server is UEFI capable, but
the installer is run in legacy mode

  * 14.04 to 16.04 upgrades on UEFI capable servers fail if
14.04 was installed in legacy/BIOS mode

  * systemd-networkd 229 has a bug where bridges can't have a
VLAN interface attached

  * Kernel panics on Dell PowerEdge R710 when the server is fairly
loaded with LXC containers

I'm still working on reducing some of these bugs down into something tangible 
but I hope to do that soon.

[0] https://bugs.launchpad.net/ubuntu/+source/python2.7/+bug/1638695
[1] https://bugs.launchpad.net/openstack-ansible/+bug/1637494
 
--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Ocata deployed on CentOS 7!

2017-01-19 Thread Ian Cordasco
-Original Message-
From: Monty Taylor 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: January 19, 2017 at 10:22:23
To: openstack-dev@lists.openstack.org 
Subject:  Re: [openstack-dev] [openstack-ansible] Ocata deployed on CentOS 7!

> On 01/19/2017 10:14 AM, Ian Cordasco wrote:
> > -Original Message-
> > From: Adam Heczko
> > Reply: OpenStack Development Mailing List (not for usage questions)
> >
> > Date: January 19, 2017 at 10:06:14
> > To: OpenStack Development Mailing List (not for usage questions)
> >
> > Subject: Re: [openstack-dev] [openstack-ansible] Ocata deployed on CentOS 7!
> >
> >> Hi Major, great news indeed.
> >> BTW are you implying that Ubuntu LTS is unstable or not stable enough to
> >> run OpenStack?
> >> I think that it would be valuable if you could share more details in this
> >> regard, point to Ubuntu specific bugs etc.
> >> Thanks.
> >
> > I think Major may be referring to some serious performance regressions
> > found in Ubuntu 16.04. The default system Python is significantly
> > slower by default. I know he's working with upstream developers to fix
> > it, but it was problematic for OSA.
>
> /me learns things and also will keep his mouth shut next time :)

You were right that this was part of an effort towards multi-distro
support though!

--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Ocata deployed on CentOS 7!

2017-01-19 Thread Monty Taylor
On 01/19/2017 10:14 AM, Ian Cordasco wrote:
> -Original Message-
> From: Adam Heczko 
> Reply: OpenStack Development Mailing List (not for usage questions)
> 
> Date: January 19, 2017 at 10:06:14
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject:  Re: [openstack-dev] [openstack-ansible] Ocata deployed on CentOS 7!
> 
>> Hi Major, great news indeed.
>> BTW are you implying that Ubuntu LTS is unstable or not stable enough to
>> run OpenStack?
>> I think that it would be valuable if you could share more details in this
>> regard, point to Ubuntu specific bugs etc.
>> Thanks.
> 
> I think Major may be referring to some serious performance regressions
> found in Ubuntu 16.04. The default system Python is significantly
> slower by default. I know he's working with upstream developers to fix
> it, but it was problematic for OSA.

/me learns things and also will keep his mouth shut next time :)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][mitmstack] initial member of mitmstack groups

2017-01-19 Thread Yujun Zhang
Hi, Infra team,

Could you please help add me  as initial member in
mitmstack-core 
and mitmstack-release
? Thank you.

--
Yujun
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Ocata deployed on CentOS 7!

2017-01-19 Thread Ian Cordasco
-Original Message-
From: Monty Taylor 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: January 19, 2017 at 10:18:20
To: openstack-dev@lists.openstack.org 
Subject:  Re: [openstack-dev] [openstack-ansible] Ocata deployed on CentOS 7!

> On 01/19/2017 10:04 AM, Adam Heczko wrote:
> > Hi Major, great news indeed.
> > BTW are you implying that Ubuntu LTS is unstable or not stable enough to
> > run OpenStack?
> > I think that it would be valuable if you could share more details in
> > this regard, point to Ubuntu specific bugs etc.
>
> I believe this is more about supporting folks who want to run on
> Centos/RHEL, rather than a step to removing Ubuntu support.

That's also correct. OpenStack-Ansible is attempting to support
multiple distros at the same time. =)

--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Ocata deployed on CentOS 7!

2017-01-19 Thread Monty Taylor
On 01/19/2017 10:04 AM, Adam Heczko wrote:
> Hi Major, great news indeed.
> BTW are you implying that Ubuntu LTS is unstable or not stable enough to
> run OpenStack?
> I think that it would be valuable if you could share more details in
> this regard, point to Ubuntu specific bugs etc.

I believe this is more about supporting folks who want to run on
Centos/RHEL, rather than a step to removing Ubuntu support.

> Thanks.
> 
> On Thu, Jan 19, 2017 at 2:58 PM, Sean M. Collins  > wrote:
> 
> That is great news! Congrats!
> 
> --
> Sean M. Collins
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> 
> 
> -- 
> Adam Heczko
> Security Engineer @ Mirantis Inc.
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Ocata deployed on CentOS 7!

2017-01-19 Thread Ian Cordasco
-Original Message-
From: Adam Heczko 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: January 19, 2017 at 10:06:14
To: OpenStack Development Mailing List (not for usage questions)

Subject:  Re: [openstack-dev] [openstack-ansible] Ocata deployed on CentOS 7!

> Hi Major, great news indeed.
> BTW are you implying that Ubuntu LTS is unstable or not stable enough to
> run OpenStack?
> I think that it would be valuable if you could share more details in this
> regard, point to Ubuntu specific bugs etc.
> Thanks.

I think Major may be referring to some serious performance regressions
found in Ubuntu 16.04. The default system Python is significantly
slower by default. I know he's working with upstream developers to fix
it, but it was problematic for OSA.

--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Ocata deployed on CentOS 7!

2017-01-19 Thread Adam Heczko
Hi Major, great news indeed.
BTW are you implying that Ubuntu LTS is unstable or not stable enough to
run OpenStack?
I think that it would be valuable if you could share more details in this
regard, point to Ubuntu specific bugs etc.
Thanks.

On Thu, Jan 19, 2017 at 2:58 PM, Sean M. Collins  wrote:

> That is great news! Congrats!
>
> --
> Sean M. Collins
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Adam Heczko
Security Engineer @ Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [fuel] Weekly meeting Jan 19 is canceled

2017-01-19 Thread Alexey Shtokolov
Nothing is on the agenda [0] this week, so I'm calling to cancel the meeting.
If you have anything to discuss please come chat in #fuel or add it to the
agenda to discuss next week.

[0] https://etherpad.openstack.org/p/fuel-weekly-meeting-agenda

---
WBR, Alexey Shtokolov
OpenStack Fuel PTL
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Order of n-api (placement) and n-sch upgrades for Ocata

2017-01-19 Thread Matt Riedemann

On 1/19/2017 9:43 AM, Sylvain Bauza wrote:



Le 19/01/2017 16:27, Matt Riedemann a écrit :

Sylvain and I were talking about how he's going to work placement
microversion requests into his filter scheduler patch [1]. He needs to
make requests to the placement API with microversion 1.4 [2] or later
for resource provider filtering on specific resource classes like VCPU
and MEMORY_MB.

The question was what happens if microversion 1.4 isn't available in the
placement API, i.e. the nova-scheduler is running Ocata code now but the
placement service is running Newton still.

Our rolling upgrades doc [3] says:

"It is safest to start nova-conductor first and nova-api last."

But since placement is bundled with n-api that would cause issues since
n-sch now depends on the n-api code.

If you package the placement service separately from the nova-api
service then this is probably not an issue. You can still roll out n-api
last and restart it last (for control services), and just make sure that
placement is upgraded before nova-scheduler (we need to be clear about
that in [3]).

But do we have any other issues if they are not packaged separately? Is
it possible to install the new code, but still only restart the
placement service before nova-api? I believe it is, but want to ask this
out loud.

I think we're probably OK here but I wanted to ask this out loud and
make sure everyone is aware and can think about this as we're a week
from feature freeze. We also need to look into devstack/grenade because
I'm fairly certain that we upgrade n-sch *before* placement in a grenade
run which will make any issues here very obvious in [1].

[1] https://review.openstack.org/#/c/417961/
[2]
http://docs.openstack.org/developer/nova/placement.html#filter-resource-providers-having-requested-resource-capacity

[3]
http://docs.openstack.org/developer/nova/upgrade.html#rolling-upgrade-process




I thought out loud in the nova channel about the following possibility:
since we always ask to upgrade n-cpus *AFTER* upgrading our other
services, we could imagine allowing the nova-scheduler to gently accept
a Newton placement service *UNLESS* you have Ocata computes.

In other technical words, the scheduler getting a response from the
placement service is a hard requirement for Ocata. That said, if the
response code is a 400 with a message saying that the schema is
incorrect, it would check the max version of all the computes and
then:
 - either the max version is Newton, and then call back
ComputeNodeList.get_all() for getting the list of nodes
 - or the max version is Ocata (at least one node is upgraded), and
then we would throw a NoValidHosts

That way, the upgrade path would be:
 1/ upgrade your conductor
 2/ upgrade all your other services but n-cpus (we could upgrade and
restart n-sch before n-api, that would still work, or the reverse would
be fine too)
 3/ rolling upgrade your n-cpus

I think we would then keep the existing upgrade path and we would still
have the placement service be mandatory for Ocata.

Thoughts ?
-Sylvain

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



I don't like basing the n-sch decision on the service version of the 
computes, because the computes will keep trying to connect to the 
placement service until it's available, but not fail. That doesn't 
really mean that placement is new enough for the scheduler to use the 
1.4 microversion.


So IMO we either charge forward as planned and make it clear in the docs 
that for Ocata, the placement service must be upgraded *before* 
nova-scheduler, or we punt and provide a fallback to just pulling all 
compute nodes from the database if we can't make the 1.4 request to 
placement. Given my original post here, I'd prefer to charge forward 
unless it becomes clear that is not going to work, or is at least going 
to be very painful.


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] webob 1.7

2017-01-19 Thread Ian Cordasco
-Original Message-
From: Corey Bryant 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: January 19, 2017 at 08:52:25
To: OpenStack Development Mailing List (not for usage questions)

Subject:  Re: [openstack-dev] [keystone] webob 1.7

> On Wed, Jan 18, 2017 at 9:08 AM, Ian Cordasco
> wrote:
>
> > -Original Message-
> > From: Chuck Short
> > Reply: OpenStack Development Mailing List (not for usage questions)
> >
> > Date: January 18, 2017 at 08:01:46
> > To: OpenStack Development Mailing List
> > Subject: [openstack-dev] [keystone] webob 1.7
> >
> > > Hi
> > >
> > > We have been experiencing problems with newer versions of webob (webob
> > 1.7).
> > > Reading the changelog, it seems that the upstream developers have
> > > introduced some backwards incompatibility with previous versions of webob
> > > that seems to be hitting keystone and possibly other projects as well
> > > (nova/glance in particular). For keystone this bug has been reported in
> > bug
> > > #1657452. I would just like to get more developer's eyes on this
> > particular
> > > issue and possibly get a fix. I suspect its starting to hit other distros
> > > as well or already have hit.
> >
> > Hey Chuck,
> >
> > This is also affecting Glance
> > (https://bugs.launchpad.net/glance/+bug/1657459). I suspect what we'll
> > do for now is blacklist the 1.7.x releases in openstack/requirements.
> > It seems a bit late in the cycle to bump the minimum version to 1.7.0
> > so we can safely fix this without having to deal with
> > incompatibilities between versions.
> >
> > --
> > Ian Cordasco
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> Hi Ian,
>
> Were you suggesting there's a new version of webob in the works that fixes
> this so we could bump upper-constraints and blacklist 1.7.x?

No. I was suggesting that OpenStack not try to work with the 1.7
series of WebOb.

> Unfortunately at this point we're at webob 1.7.0 in Ubuntu and there's no
> going backward for us. The corresponding bugs were already mentioned in
> this thread but worth noting again, these are the bugs tracking this:
>
> https://bugs.launchpad.net/nova/+bug/1657452
> https://bugs.launchpad.net/glance/+bug/1657459
>
> So far this affects nova, glance, and keystone (David has a patch in review
> - https://review.openstack.org/#/c/422234/).

I'll have to see if we can get that prioritized for Glance next week
as a bug fix candidate post Ocata-3. We decided our priorities for the
next week just a short while ago. I'm going to see if we can move it
onto this week's list though.

Cheers,
--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Order of n-api (placement) and n-sch upgrades for Ocata

2017-01-19 Thread Sylvain Bauza


Le 19/01/2017 16:27, Matt Riedemann a écrit :
> Sylvain and I were talking about how he's going to work placement
> microversion requests into his filter scheduler patch [1]. He needs to
> make requests to the placement API with microversion 1.4 [2] or later
> for resource provider filtering on specific resource classes like VCPU
> and MEMORY_MB.
> 
> The question was what happens if microversion 1.4 isn't available in the
> placement API, i.e. the nova-scheduler is running Ocata code now but the
> placement service is running Newton still.
> 
> Our rolling upgrades doc [3] says:
> 
> "It is safest to start nova-conductor first and nova-api last."
> 
> But since placement is bundled with n-api that would cause issues since
> n-sch now depends on the n-api code.
> 
> If you package the placement service separately from the nova-api
> service then this is probably not an issue. You can still roll out n-api
> last and restart it last (for control services), and just make sure that
> placement is upgraded before nova-scheduler (we need to be clear about
> that in [3]).
> 
> But do we have any other issues if they are not packaged separately? Is
> it possible to install the new code, but still only restart the
> placement service before nova-api? I believe it is, but want to ask this
> out loud.
> 
> I think we're probably OK here but I wanted to ask this out loud and
> make sure everyone is aware and can think about this as we're a week
> from feature freeze. We also need to look into devstack/grenade because
> I'm fairly certain that we upgrade n-sch *before* placement in a grenade
> run which will make any issues here very obvious in [1].
> 
> [1] https://review.openstack.org/#/c/417961/
> [2]
> http://docs.openstack.org/developer/nova/placement.html#filter-resource-providers-having-requested-resource-capacity
> 
> [3]
> http://docs.openstack.org/developer/nova/upgrade.html#rolling-upgrade-process
> 
> 

I thought out loud in the nova channel about the following possibility:
since we always ask to upgrade n-cpus *AFTER* upgrading our other
services, we could imagine allowing the nova-scheduler to gently accept
a Newton placement service *UNLESS* you have Ocata computes.

In other technical words, the scheduler getting a response from the
placement service is a hard requirement for Ocata. That said, if the
response code is a 400 with a message saying that the schema is
incorrect, it would check the max version of all the computes and
then:
 - either the max version is Newton, and then call back
ComputeNodeList.get_all() for getting the list of nodes
 - or the max version is Ocata (at least one node is upgraded), and
then we would throw a NoValidHosts

That way, the upgrade path would be:
 1/ upgrade your conductor
 2/ upgrade all your other services but n-cpus (we could upgrade and
restart n-sch before n-api, that would still work, or the reverse would
be fine too)
 3/ rolling upgrade your n-cpus

I think we would then keep the existing upgrade path and we would still
have the placement service be mandatory for Ocata.

Thoughts ?
-Sylvain

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ui] FYI, the tripleo-ui package is currently broken

2017-01-19 Thread Honza Pokorny
Thanks for being on top of this, Julie.

Honza Pokorny

On 2017-01-19 15:17, Julie Pichon wrote:
> On 18 January 2017 at 11:35, Julie Pichon  wrote:
> > I'm sorry to report we're finding ourselves in the same situation
> > again - CI will fail on all the UI patches, please don't recheck until
> > we have a new dependencies package available.
> >
> > On the plus side, with the help of amoralej on #rdo we figured out why
> > this is happening: the tripleo-ui rpm used in CI is being built from
> > the master branch, instead of using the patch under review. So,
> > instead of happening on the patch itself the CI failures only happen
> > after it merges. I filed [1] to track this. Any pointer from folks
> > familiar with TripleO CI as to where we might want to poke to resolve
> > this is appreciated :)
> 
> We're back in business! Recheck away, merge all the patches :)
> 
> Also thanks to panda we have a fix for the "CI not testing the patch
> currently under review" issue merged so hopefully this time, we can
> avoid the same kind of problems with new dependencies.
> 
> Thanks,
> 
> Julie
> 
> > [1] https://bugs.launchpad.net/tripleo/+bug/1657416
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [OSSN-0074] Nova metadata service should not be used for sensitive information

2017-01-19 Thread Jeremy Stanley
On 2017-01-19 09:34:21 -0500 (-0500), Steve Gordon wrote:
[...]
> Does this configuration directive provide any mitigation for this
> issue?:
> 
> "use_forwarded_for = False (BoolOpt) Treat X-Forwarded-For
> as the canonical remote address. Only enable this if you have a
> sanitizing proxy."
> 
> Just given its name and stated purpose it seems conspicuous by its
> absence in this OSSN (that is, even if it provides no mitigation
> at all I would have expected to see that noted)?
[...]

I agree it's unfortunate this was omitted in the discussion. If you
follow the original bug report[*], it's only applicable to
environments which set use_forwarded_for = True. The report can be
reduced to the following summary: If you configure nova's metadata
service to rely on X-Forwarded-For (by setting use_forwarded_for =
True) so that you can put a proxy in front of it, then you need to
make sure your network is correctly designed such that untrusted
systems are not allowed to connect directly to the service without
going through your proxy (and also make sure your proxy correctly
rewrites any existing X-Forwarded-For headers it may receive rather
than passing them through untouched).

[*] https://launchpad.net/bugs/1563954
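
To make that concrete, here is a rough sketch (ours, not nova's actual
code) of the decision use_forwarded_for controls:

    def effective_remote_address(tcp_source_ip, headers,
                                 use_forwarded_for=False):
        """Return the address used to look up instance metadata."""
        forwarded = headers.get('X-Forwarded-For')
        if use_forwarded_for and forwarded:
            # Only safe when a sanitizing proxy fully controls this header;
            # otherwise a tenant can spoof another instance's address.
            return forwarded.split(',')[0].strip()
        return tcp_source_ip

With the default (use_forwarded_for = False) the header is ignored, which
matches the point above that the report only applies to deployments that
turned it on.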
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [nova] Accessing instance.flavor.projects fails due to orphaned Flavor

2017-01-19 Thread Balazs Gibizer
On Fri, Jan 13, 2017 at 9:51 AM, Balazs Gibizer 
 wrote:



On Thu, Jan 12, 2017 at 4:56 PM, Jay Pipes  wrote:

On 01/12/2017 05:31 AM, Balazs Gibizer wrote:

Hi,

The flavor field of the Instance object is a lazy-loaded field and 
the
projects field of the Flavor object is also lazy-loaded. Now it 
seems to

me that when the Instance object lazy loads instance.flavor then the
created Flavor object is orphaned [1] therefore 
instance.flavor.projects
will never work and result in an exception: OrphanedObjectError: 
Cannot

call _load_projects on orphaned Flavor object.

Is the Flavor left orphaned by intention or it is a bug?


Depends :) I would say it is intentional for the most part. Is there 
a reason why the Flavor *notification* payload needs to contain a 
list of projects associated with the flavor? My gut says that 
information isn't particularly germane to the relationship of the 
Instance to the Flavor?


The whole thing came up as part of the 
https://blueprints.launchpad.net/nova/+spec/flavor-notifications 
where the FlavorPayload was extended with flavor.projects. As the 
same FlavorPayload is used in the instance. notifications the 
instance notification code path also needs the flavor.projects field.





The payload of instance. notifications contains the flavor
related data of the instance in question and to have the 
flavor.projects
in the payload as well the code would need to access the projects 
field

via instance.flavor.projects.


Sure, I understand it would ease the access to the projects field in 
the notification payload packing, but is there really a reason to 
bother retrieving and sending that data each time an Instance 
notification event is made (which is quite often)?


So it is mainly there to have a single, consistent FlavorPayload used 
across notifications. Sure, we could include just the flavor_id 
in the instance. notifications. However, there was a similar 
discussion about how to handle delete notifications [1]. There we decided 
to include the whole entity in the delete notification, not just the uuid 
of the deleted entity. The main reasoning there (besides consistency) was 
that a notification consumer might want to listen only to certain 
notifications and still get enough information to avoid 
the need for a subsequent REST query. I think the same reasoning can 
be applied here.


Cheers,
gibi


Posting to openstack-dev as it wrongly went to the openstack list.

Cheers,
gibi




[1] 
http://lists.openstack.org/pipermail/openstack-dev/2017-January/109508.html





Best,
-jay

___
Mailing list: 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Post to : openst...@lists.openstack.org
Unsubscribe : 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



___
Mailing list: 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

Post to : openst...@lists.openstack.org
Unsubscribe : 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Order of n-api (placement) and n-sch upgrades for Ocata

2017-01-19 Thread Matt Riedemann
Sylvain and I were talking about how he's going to work placement 
microversion requests into his filter scheduler patch [1]. He needs to 
make requests to the placement API with microversion 1.4 [2] or later 
for resource provider filtering on specific resource classes like VCPU 
and MEMORY_MB.


The question was what happens if microversion 1.4 isn't available in the 
placement API, i.e. the nova-scheduler is running Ocata code now but the 
placement service is running Newton still.


Our rolling upgrades doc [3] says:

"It is safest to start nova-conductor first and nova-api last."

But since placement is bundled with n-api that would cause issues since 
n-sch now depends on the n-api code.


If you package the placement service separately from the nova-api 
service then this is probably not an issue. You can still roll out n-api 
last and restart it last (for control services), and just make sure that 
placement is upgraded before nova-scheduler (we need to be clear about 
that in [3]).


But do we have any other issues if they are not packaged separately? Is 
it possible to install the new code, but still only restart the 
placement service before nova-api? I believe it is, but want to ask this 
out loud.


I think we're probably OK here but I wanted to ask this out loud and 
make sure everyone is aware and can think about this as we're a week 
from feature freeze. We also need to look into devstack/grenade because 
I'm fairly certain that we upgrade n-sch *before* placement in a grenade 
run which will make any issues here very obvious in [1].


[1] https://review.openstack.org/#/c/417961/
[2] 
http://docs.openstack.org/developer/nova/placement.html#filter-resource-providers-having-requested-resource-capacity
[3] 
http://docs.openstack.org/developer/nova/upgrade.html#rolling-upgrade-process
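
For context, a 1.4 request from the scheduler side looks roughly like the
sketch below. It assumes a pre-built keystoneauth1 Session and the
'placement' service type from the catalog; it is not the code in [1], just
the shape of the call:

    PLACEMENT_API_VERSION = 'placement 1.4'

    def get_providers_with_capacity(sess, resources):
        # resources is e.g. {'VCPU': 2, 'MEMORY_MB': 2048}; microversion 1.4
        # added the ?resources= query parameter used here.
        query = ','.join('%s:%s' % (rc, amount)
                         for rc, amount in sorted(resources.items()))
        resp = sess.get('/resource_providers',
                        params={'resources': query},
                        endpoint_filter={'service_type': 'placement'},
                        headers={'OpenStack-API-Version':
                                 PLACEMENT_API_VERSION},
                        raise_exc=False)
        if resp.status_code == 406:
            # A placement service that is too old (e.g. Newton) rejects the
            # requested microversion; this is the upgrade-ordering problem
            # described above.
            return None
        resp.raise_for_status()
        return resp.json()['resource_providers']

The 406 branch is exactly the "what happens if 1.4 isn't available" case:
the scheduler either fails or needs a fallback.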


--

Thanks,

Matt Riedemann


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] "Snapshot-manage imports snapshots with wrong size"

2017-01-19 Thread Mykhailo Dovgal
Hi, folks.

While working on this bug [0] I was able to reproduce it with LVM as a
backend. During further investigation, I found the real problem that
triggers it. Here [1], after getting the real snapshot size, we try to
update the 'size' column. But the snapshot's size is actually stored in the
cinder.snapshots table in a column called 'volume_size'.

So after the operations described in the bug description of [0], we will
always get a snapshot with the wrong size.
This problem has side effects. For example, with LVM as a backend and
without 'volume_clear = none', we can't delete a snapshot whose recorded
size is bigger than its real size, because it will try to clear more space
than is really there.

There are at least two ways to solve this problem:

1. Just change ['size'] -> ['volume_size'] here [1] to ensure that the
correct database column is updated and the problem is not triggered in the
future. And add some tests for it (a rough sketch of this is included
below).

2. Since the column name 'volume_size' in the snapshots table is misleading
(it holds the snapshot size, not the volume size), change the DB schema: add
a column called 'size' to replace 'volume_size' (removing 'volume_size' in
the future) and provide a data migration. And of course a lot of tests for
it.

The first variant is much easier, but I think the second variant is much
better, because we will get rid of ambiguous code and make our database
schema clear.

I've started working on the second variant, but I want to hear your opinions
about it. Maybe somebody knows a solution that is better than mine.
Looking forward to your answers.
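
To illustrate the first variant, a rough sketch (assuming the snapshot
object exposes update()/save() the way the flow code in [1] uses it; the
function name is ours):

    def _save_real_snapshot_size(snapshot, real_size_gb):
        # Behaviour described above: updating a 'size' key, while the model
        # only stores the snapshot size in the 'volume_size' column, so the
        # wrong size is kept.
        # snapshot.update({'size': real_size_gb})

        # Variant 1: update the column that actually stores the size.
        snapshot.update({'volume_size': real_size_gb})
        snapshot.save()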

[0] - https://bugs.launchpad.net/cinder/+bug/1623596
[1] - https://github.com/openstack/cinder/blob/02389a1d2ac4822d37b1f7fbd29391097bfcb56f/cinder/volume/flows/manager/manage_existing_snapshot.py#L241-L246
[2] - https://github.com/openstack/cinder/blob/02389a1d2ac4822d37b1f7fbd29391097bfcb56f/cinder/volume/flows/manager/manage_existing_snapshot.py#L241


Best regards.
Michael Dovgal,
mdovgal on chat.freenode.net
Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] 2017-1-11 policy meeting

2017-01-19 Thread Lance Bragstad
Ruan,

Good question! I should clarify that there would be no *default* policy
file to maintain in the project source code, like in keystone currently
[0]. All policy defaults would be coded into the project. Nova has already
taken this approach with their policy file [1], which leaves them with
nothing to maintain in tree (notice the absence of a sample policy file)
[2]. A deployer can still customize policy by using a policy.json file, and
those rules are treated as overrides for the defaults in code.

[0] https://github.com/openstack/keystone/blob/master/etc/policy.json
[1] https://github.com/openstack/nova/blob/master/nova/policies/servers.py
[2] https://github.com/openstack/nova/tree/master/etc/nova
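
To make option 3 a bit more concrete, here is a minimal sketch of "defaults
in code" with oslo.policy, in the style nova uses [1]. The rule names and
check strings are illustrative, not keystone's final choices:

    from oslo_policy import policy

    rules = [
        policy.RuleDefault('admin_required', 'role:admin or is_admin:1',
                           description='Legacy admin check.'),
        policy.RuleDefault('identity:list_projects', 'rule:admin_required',
                           description='List projects.'),
    ]

    def list_rules():
        # Registered with the service's Enforcer at startup, e.g. via
        # enforcer.register_defaults(list_rules()); a deployer's policy.json
        # then only overrides the entries it names.
        return rules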

On Thu, Jan 19, 2017 at 8:35 AM,  wrote:

> Hi Lance,
>
> Your option 3 is not clear to me.
>
> You say that ‘The result would be 0 policy files to maintain in tree and
> everything would be in code.’ Without this file, how can we define
> policies? Can users configure policies?
>
> Ruan
>
>
>
> *From:* Lance Bragstad [mailto:lbrags...@gmail.com]
> *Sent:* mercredi 18 janvier 2017 23:16
> *To:* OpenStack Development Mailing List (not for usage questions);
> openstack-operat...@lists.openstack.org
> *Subject:* Re: [openstack-dev] [keystone] 2017-1-11 policy meeting
>
>
>
> Looping this into the operator's list, too!
>
>
>
> On Wed, Jan 18, 2017 at 2:13 PM, Lance Bragstad 
> wrote:
>
> Thanks to Morgan in today's policy meeting [0], we were able to shed some
> light on the reasons for keystone having two policy files. The main reason
> a second policy file was introduced was to recenter RBAC around concepts
> introduced in the V3 API. The problem was that the policy file that came
> later [1] wasn't a drop in replacement for the initial one because it
> required new roles in order to work properly. Switching to the newer policy
> file by default would break deployers who did nothing but implement the
> basic RBAC roles required by the initial version [2]. At the time there was
> no real way to "migrate" from one policy file to another, so two were
> maintained in tree.
>
>
>
> Consolidating to a single file, or set of defaults, has benefits for
> maintainers and deployers, so we covered paths to accomplish that. We were
> able to come up with three paths forward.
>
>1. Drop support for the original/initial policy file and only maintain
>policy.v3cloudsample.json
>2. Leverage `keystone-manage bootstrap` to create the new roles
>required by policy.v3cloudsample.json
>3. Codify the existing policy file using oslo.policy as a vehicle to
>introduce new defaults from policy.v3cloudsample.json
>
> Everyone seemed to agree the 1st option was the most painful for everyone.
> Option 2 (and maybe 3) would more than likely require some sort of upgrade
> documentation that describes the process.
>
>
>
> Without swaying anyone's opinion, I think I tend to lean towards option 3
> because it sounds similar to what nova has done, or is going to do. After
> talking to John Garbutt about some of their nova work, it sounded like one
> of their next steps was to re-evaluate all RBAC roles/rules now that they
> have them in code. If they come across an operation that would benefit from
> a different default value, they can use oslo.policy to deprecate or propose
> a new default (much like how we use oslo.config for changing or deprecating
> configuration values today). From a keystone perspective, this would
> effectively mean we would move what we have in policy.json into code, then
> do the same exercise with policy.v3cloudsample.json. The result would be 0
> policy files to maintain in tree and everything would be in code. From
> there - we can work with other projects to standardize on what various
> roles mean across OpenStack (hopefully following some sort of guide or
> document).
>
>
>
> I'm excited to hear what others think of the current options, or if there
> is another path forward we missed.
>
>
>
>
>
> [0] http://eavesdrop.openstack.org/meetings/policy/
> 2017/policy.2017-01-18-16.00.log.html
>
> [1] https://github.com/openstack/keystone/blob/
> 7f2b7e58e74c79e5a09bd5c20e0de9c15d9eabd0/etc/policy.v3cloudsample.json
>
> [2] https://github.com/openstack/keystone/blob/
> 7f2b7e58e74c79e5a09bd5c20e0de9c15d9eabd0/etc/policy.json
>
>
>
> On Wed, Jan 11, 2017 at 11:28 AM, Lance Bragstad 
> wrote:
>
> Hey folks,
>
>
>
> In case you missed the policy meeting today, we had a good discussion [0]
> around incorporating keystone's policy into code using the Nova approach.
>
>
>
> Keystone is in a little bit of a unique position since we maintain two
> different policy files [1] [2], and there were a lot of questions around
> why we have two. This same topic came up in a recent keystone meeting, and
> we wanted to loop Henry Nash into the conversation, since I believe he
> spearheaded a lot of the original policy.v3cloudsample work.
>
>
>
> Let's see 

Re: [openstack-dev] [tripleo][ui] FYI, the tripleo-ui package is currently broken

2017-01-19 Thread Julie Pichon
On 18 January 2017 at 11:35, Julie Pichon  wrote:
> I'm sorry to report we're finding ourselves in the same situation
> again - CI will fail on all the UI patches, please don't recheck until
> we have a new dependencies package available.
>
> On the plus side, with the help of amoralej on #rdo we figured out why
> this is happening: the tripleo-ui rpm used in CI is being built from
> the master branch, instead of using the patch under review. So,
> instead of happening on the patch itself the CI failures only happen
> after it merges. I filed [1] to track this. Any pointer from folks
> familiar with TripleO CI as to where we might want to poke to resolve
> this is appreciated :)

We're back in business! Recheck away, merge all the patches :)

Also thanks to panda we have a fix for the "CI not testing the patch
currently under review" issue merged so hopefully this time, we can
avoid the same kind of problems with new dependencies.

Thanks,

Julie

> [1] https://bugs.launchpad.net/tripleo/+bug/1657416

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Ubuntu 14.04 support in Newton and on

2017-01-19 Thread Jeremy Stanley
On 2017-01-18 15:19:36 -0800 (-0800), Eric K wrote:
> Hi all, Is there any community-wide policy on how long we strive
> to maintain compatibility with Ubuntu 14.04? For example by
> avoiding relying on MySQL 5.7 features. I've had a hard time
> finding it on openstack.org and ML discussions. Thanks lots!

Years ago the TC (only a few months after they ceased to be the PPB)
agreed to the following:

OpenStack will target its development efforts to latest
Ubuntu/Fedora, but will not introduce any changes that would
make it impossible to run on the latest Ubuntu LTS or latest
RHEL.

http://lists.openstack.org/pipermail/openstack-dev/2012-December/004052.html

http://eavesdrop.openstack.org/meetings/tc/2013/tc.2013-01-08-20.02.log.html#l-7

You can also find it referenced in our requirements documentation:

http://docs.openstack.org/developer/requirements/#finding-distro-status

The upshot has basically been that whatever the "latest Ubuntu LTS"
was at the time the development cycle began is what we use for the
purposes of testing development leading up to a given release, and
is subsequently maintained for testing the resulting stable branches
from that release until our support end-of-life is reached. However,
the Newton release ended in an unfortunate situation...

During the Newton development cycle, the Infra team decided to
provide teams a means of gracefully migrating their testing from
Ubuntu 14.04 LTS to 16.04 LTS with the expectation that it would be
completed within one cycle. This did not happen in time for the
release, and so we wound up with some projects testing stable/newton
on 16.04 while others were testing on 14.04. Obviously we couldn't
leave things in that state indefinitely or it would risk breaking
some project dependencies entirely in that branch, so we pushed to
get any remaining teams to finish uplifting their stable/newton
testing to 16.04 soon thereafter.

The result is still that upstream OpenStack, from a QA/testing
perspective, considers stable/newton to "support" Ubuntu 16.04 LTS
("latest Ubuntu LTS" at the time its development cycle began), and
stable/mitaka is the last release we "supported" on Ubuntu 14.04
LTS. Hopefully that is the answer you're seeking?
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic][ptl] Announcing Ironic PTL candidacy for Pike

2017-01-19 Thread Julia Kreger
Greetings everyone!

I would like to nominate myself as a candidate for ironic PTL for the
Pike cycle. For those that don't know me, I'm TheJulia. I have been
working in the OpenStack community since mid-2014, and ironic since
early 2015. I'm likely best known for bifrost and advocating support
for stand-alone mode. I'm a little less well known for helping provide
guidance for ironic-webclient and ironic-ui, plus the occasional bout
of insomnia.

You may have seen me as the quiet one in the corner, only to suddenly
speak up, and then to caveat my statement with "But I might be crazy".
Well, I think this is sufficient evidence!

To get back to the point, sometimes I do see things differently than other
people when presented with a problem. A good portion of that, at least as
far as I can tell, comes from my operations background. But I've learned
my lessons over the years and try to understand others' point of view and
keep an open mind.

I am a strong believer in ironic and the work that the community undertakes,
due to the countless hours I've spent inside data centers installing hardware.

I am confident that my leadership skills, ability to negotiate various
opinions, and passion for Ironic and the community that drives it will be my
strongest contributions as PTL. My goals for the Pike cycle would include:

* Greater power and management interface options such as Redfish and OpenBMC.
  I think this is necessary for growing ironic's user base, and while we have
  been talking about such interfaces for a while, I believe the core community
  needs to help make this happen.

* Greater support for partitioning and alternative disk layouts. This feels like
  one of those asks that more advanced deployments keep trying to solve on
  their own inside whole disk images, and while they find bugs, it would be
  awesome to offer some greater flexibility.

* Location awareness is super important to providing a single view of the
  world to an operator, yet right now we have no support for it as we
  are limited to a single endpoint representing a distinct cluster of
  conductors. We need to fix this!

I look forward to working with the Ironic community to continue improvements
and representing our community within OpenStack.

Thank you for your consideration,

Julia Kreger
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Fuel][Ci] New keyword for triggering deployment tests

2017-01-19 Thread Dmitry Kaiharodsev
Hi all,

please be informed that to re-trigger fuel deployment tests [1] you should
use a Gerrit reply with the keyword 'fuel: redeploy' [2].

By implementing this change, we're separating heavy and time-consuming
deployment tests from other tests, which gives us a much more flexible
re-triggering process.

[1] http://paste.openstack.org/show/595631/
[2] without quotes
-- 
Kind Regards,
Dmitry Kaigarodtsev
IRC: dkaiharodsev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] webob 1.7

2017-01-19 Thread Corey Bryant
On Wed, Jan 18, 2017 at 9:08 AM, Ian Cordasco 
wrote:

> -Original Message-
> From: Chuck Short 
> Reply: OpenStack Development Mailing List (not for usage questions)
> 
> Date: January 18, 2017 at 08:01:46
> To: OpenStack Development Mailing List 
> Subject:  [openstack-dev] [keystone] webob 1.7
>
> > Hi
> >
> > We have been experiencing problems with newer versions of webob (webob
> 1.7).
> > Reading the changelog, it seems that the upstream developers have
> > introduced some backwards incompatibility with previous versions of webob
> > that seems to be hitting keystone and possibly other projects as well
> > (nova/glance in particular). For keystone this bug has been reported in
> bug
> > #1657452. I would just like to get more developer's eyes on this
> particular
> > issue and possibly get a fix. I suspect its starting to hit other distros
> > as well or already have hit.
>
> Hey Chuck,
>
> This is also affecting Glance
> (https://bugs.launchpad.net/glance/+bug/1657459). I suspect what we'll
> do for now is blacklist the 1.7.x releases in openstack/requirements.
> It seems a bit late in the cycle to bump the minimum version to 1.7.0
> so we can safely fix this without having to deal with
> incompatibilities between versions.
>
> --
> Ian Cordasco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

Hi Ian,

Were you suggesting there's a new version of webob in the works that fixes
this so we could bump upper-constraints and blacklist 1.7.x?

Unfortunately at this point we're at webob 1.7.0 in Ubuntu and there's no
going backward for us.  The corresponding bugs were already mentioned in
this thread but worth noting again, these are the bugs tracking this:

https://bugs.launchpad.net/nova/+bug/1657452
https://bugs.launchpad.net/glance/+bug/1657459

So far this affects nova, glance, and keystone (David has a patch in review
- https://review.openstack.org/#/c/422234/).

-- 
Regards,
Corey
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] 2017-1-11 policy meeting

2017-01-19 Thread ruan.he
Hi Lance,
Your option 3 is not clear to me.
You say that ‘The result would be 0 policy files to maintain in tree and 
everything would be in code.’ Without this file, how can we define policies? 
Can users configure policies?
Ruan

From: Lance Bragstad [mailto:lbrags...@gmail.com]
Sent: mercredi 18 janvier 2017 23:16
To: OpenStack Development Mailing List (not for usage questions); 
openstack-operat...@lists.openstack.org
Subject: Re: [openstack-dev] [keystone] 2017-1-11 policy meeting

Looping this into the operator's list, too!

On Wed, Jan 18, 2017 at 2:13 PM, Lance Bragstad 
> wrote:
Thanks to Morgan in today's policy meeting [0], we were able to shed some light 
on the reasons for keystone having two policy files. The main reason a second 
policy file was introduced was to recenter RBAC around concepts introduced in 
the V3 API. The problem was that the policy file that came later [1] wasn't a 
drop in replacement for the initial one because it required new roles in order 
to work properly. Switching to the newer policy file by default would break 
deployers who did nothing but implement the basic RBAC roles required by the 
initial version [2]. At the time there was no real way to "migrate" from one 
policy file to another, so two were maintained in tree.

Consolidating to a single file, or set of defaults, has benefits for 
maintainers and deployers, so we covered paths to accomplish that. We were able 
to come up with three paths forward.

  1.  Drop support for the original/initial policy file and only maintain 
policy.v3cloudsample.json
  2.  Leverage `keystone-manage bootstrap` to create the new roles required by 
policy.v3cloudsample.json
  3.  Codify the existing policy file using oslo.policy as a vehicle to 
introduce new defaults from policy.v3cloudsample.json
Everyone seemed to agree the 1st option was the most painful for everyone. 
Option 2 (and maybe 3) would more than likely require some sort of upgrade 
documentation that describes the process.

Without swaying anyone's opinion, I think I tend to lean towards option 3 
because it sounds similar to what nova has done, or is going to do. After 
talking to John Garbutt about some of their nova work, it sounded like one of 
their next steps was to re-evaluate all RBAC roles/rules now that they have 
them in code. If they come across an operation that would benefit from a 
different default value, they can use oslo.policy to deprecate or propose a new 
default (much like how we use oslo.config for changing or deprecating 
configuration values today). From a keystone perspective, this would 
effectively mean we would move what we have in policy.json into code, then do 
the same exercise with policy.v3cloudsample.json. The result would be 0 policy 
files to maintain in tree and everything would be in code. From there - we can 
work with other projects to standardize on what various roles mean across 
OpenStack (hopefully following some sort of guide or document).

I'm excited to hear what others think of the current options, or if there is 
another path forward we missed.


[0] 
http://eavesdrop.openstack.org/meetings/policy/2017/policy.2017-01-18-16.00.log.html
[1] 
https://github.com/openstack/keystone/blob/7f2b7e58e74c79e5a09bd5c20e0de9c15d9eabd0/etc/policy.v3cloudsample.json
[2] 
https://github.com/openstack/keystone/blob/7f2b7e58e74c79e5a09bd5c20e0de9c15d9eabd0/etc/policy.json

On Wed, Jan 11, 2017 at 11:28 AM, Lance Bragstad 
> wrote:
Hey folks,

In case you missed the policy meeting today, we had a good discussion [0] 
around incorporating keystone's policy into code using the Nova approach.

Keystone is in a little bit of a unique position since we maintain two 
different policy files [1] [2], and there were a lot of questions around why we 
have two. This same topic came up in a recent keystone meeting, and we wanted 
to loop Henry Nash into the conversation, since I believe he spearheaded a lot 
of the original policy.v3cloudsample work.

Let's see if we can air out some of that tribal knowledge and answer a couple 
questions.

What was the main initiative for introducing policy.v3cloudsample.json?

Is it possible to consolidate the two?


[0] 
http://eavesdrop.openstack.org/meetings/policy/2017/policy.2017-01-11-16.00.log.html
[1] 
https://github.com/openstack/keystone/blob/master/etc/policy.v3cloudsample.json
[2] https://github.com/openstack/keystone/blob/master/etc/policy.json



_

This message and its attachments may contain confidential or privileged 
information and must therefore not be distributed, used or copied without 
authorization. If you have received this message in error, please notify 
the sender and destroy this message and its attachments. As electronic 
messages are liable to 

Re: [openstack-dev] [Openstack] [OSSN-0074] Nova metadata service should not be used for sensitive information

2017-01-19 Thread Steve Gordon
- Original Message -
> From: "Luke Hinds" 
> To: openst...@lists.openstack.org, openstack-dev@lists.openstack.org
> Sent: Monday, December 19, 2016 4:26:24 AM
> Subject: [Openstack] [OSSN-0074] Nova metadata service should not be used for 
> sensitive information
> 
> OpenStack Security Note: 0074
> 
> Nova metadata service should not be used for sensitive information
> 
> ---
> 
> ### Summary ###
> A recent security report has highlighted how users may be using the
> metadata service to store security sensitive information.
> 
> The Nova metadata service should not be considered a secure repository
> of confidential information required by compute instances.
> 
> ### Affected Services / Software ###
> Nova, All Versions
> 
> ### Discussion ###
> A recent vulnerability report for Nova stated that the metadata service
> will obey the `X-Forwarded-For` HTTP header. This header is often
> supplied by proxies so that the end service can identify which IP the
> request originated from.
> 
> The Nova metadata service typically uses the source IP address of the
> incoming request to respond with the appropriate data for the compute
> instance making the request. This is a sort of weak authentication,
> designed to ensure that metadata for one tenant isn't accidentally
> provided to another.
> 
> If the request contains a `X-Forwarded-For` HTTP header then the
> metadata service will use that for the source authentication rather than
> the actual TCP/IP source.

Hi Luke,

Does this configuration directive provide any mitigation for this issue?:

"use_forwarded_for = False (BoolOpt) Treat X-Forwarded-For as the 
canonical remote address. Only enable this if you have a sanitizing proxy."

Just given its name and stated purpose it seems conspicuous by its absence in 
this OSSN (that is, even if it provides no mitigation at all I would have 
expected to see that noted)?

Thanks,

Steve

> An attacker with access to a compute instance in the cloud could send a
> request to the metadata service and include the `X-Forwarded-For` header
> in order to effectively spoof their source and cause the metadata
> service to provide information that should not have been provided to
> that instance.
> 
> Consider the following:
> Alice creates a compute instance. She places the root password for that
> instance in the metadata service. The instance is assigned a 10.1.2.2
> IP address. Alice believes that the root password for her instance is
> safe within the metadata service.
> 
> Alice retrieves metadata by running a command similar to:
> `curl http://169.254.169.254/latest/meta-data`
> this will retrieve any metadata stored for Alice's compute instance,
> which has an IP address of 10.1.2.2
> 
> Bob has a compute instance with IP address 10.1.9.9 however Bob wants
> access to the metadata for Alice's compute instance. If Bob runs a
> similar command to Alice, but includes a customer header as below, he
> will get access to all of Alice's metadata, including the root password
> she chose to store there:
> `curl -H "X-Forwarded-For: 10.1.2.2" http://169.254.169.254/latest/meta-data`
> 
> The Nova metadata service is a useful utility within OpenStack but
> clearly not intended as a strongly authenticated system for storing
> sensitive data such as private keys or passwords.
> 
> ### Recommended Actions ###
> The metadata service should not be used to store sensitive information.
> 
> The IP forwarding issue is not a defect of itself, it exists to allow
> the metadata service to provide IP addresses for instances that are
> behind a proxy as may be the case in more complex deployments.
> 
> Cloud users who have a requirement to store sensitive information that
> compute instances require for operation should instead look to the
> Config drive to provide this service. It's operation is much more
> tightly bound to individual compute instances.
> 
> Where use of config drive is not an option, operators should consider
> other mitigations such as placing a proxy in front of the metadata service
> which can filter out these sorts of malicious activities.
> 
> ### Contacts / References ###
> Author: Robert Clark, IBM
> This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0074
> Original LaunchPad Bug : https://bugs.launchpad.net/nova/+bug/1563954
> 
> Mailing List : [Security] tag on openstack-dev@lists.openstack.org
> OpenStack Security Group : https://launchpad.net/~openstack-ossg
> 
> Config Drive
> : http://docs.openstack.org/user-guide/cli-config-drive.html
> 
> 
> 
> ___
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openst...@lists.openstack.org
> Unsubscribe : 

[openstack-dev] [horizon] feature freeze exception request -- nova simple tenant usages api pagination

2017-01-19 Thread Radomir Dopieralski
I would like to request a feature freeze exception for the following patch:

https://review.openstack.org/#/c/410337

This patch adds support for retrieving the simple tenant usages from Nova
in chunks, and it is necessary for correct data given that the related
patches have already been merged in Nova. Without it, the data received
will be truncated.

In order to actually use that patch, however, it is necessary to set the
Nova API version to at least version 2.40. For this, it's necessary to also
add this patch:

https://review.openstack.org/422642

However, that patch will not work, because of a bug in the VersionManager,
which for some reason
uses floating point numbers for specifying versions, and thus understands
2.40 as 2.4. To fix that, it
is also necessary to merge this patch:

https://review.openstack.org/#/c/410688
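
For anyone curious why that matters, the float behaviour is easy to show
(plain Python; the tuple comparison is only a sketch of the idea, not the
actual Horizon fix):

    # Both strings parse to the same float, so 2.40 and 2.4 collapse into one.
    assert float("2.40") == float("2.4")

    def parse_version(version_string):
        # Compare versions as (major, minor) integer pairs instead.
        major, minor = version_string.split('.')
        return int(major), int(minor)

    assert parse_version("2.40") > parse_version("2.4")   # 40 > 4, as intended
    assert parse_version("2.40") != parse_version("2.4")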

I would like to request an exception for all those three patches.

An alternative to this would be to finish and merge the microversion
support, and modify the first patch to make use of it. Then we would need
exceptions for those two patches.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC][Glance][Nova][TripleO][Heat][Mistral][Ironic][Murano] Glare

2017-01-19 Thread Doug Hellmann
Excerpts from Mikhail Fedosin's message of 2017-01-19 12:48:13 +0300:
> Hi Matt!
> 
> This should be discussed, for sure, but there is a lot of potential. In
> general, it depends on how far we are willing to go. In the minimum
> approximation we can seamlessly replace Glance with Glare and operators
> simply get additional features for versioning, validation (and conversion,
> if necessary) of their uploaded images on the fly, as well as support for
> storing files in different stores.
> 
> If we dig a little deeper, then Glare allows you to store multiple files in
> a single artifact, so we can create a new type (ec2_image) and define three
> blobs inside: ami, ari, aki, and upload all three as a single object. This
> will get rid of a large amount of legacy code and simplify the architecture
> of Nova. Plus Glare will control the integrity of such artifact.
> 
> The next step could be full support for OVF and other formats that require
> a large number of files. Here we can use artifact folders and put all the
> files there.
> "OpenStack Compute does not currently have support for OVF packages, so you
> will need to extract the image file(s) from an OVF package if you wish to
> use it with OpenStack."
> http://docs.openstack.org/image-guide/introduction.html
> 
> Finally, I notice that there are a few nasty bugs in Glance (you know what
> I mean), which make it extremely inconvenient for a number of deployments.

I don't actually know what you mean. Can you give more details? Are
you talking about the image upload API work?

> 
> On Wed, Jan 18, 2017 at 8:26 PM, Matt Riedemann 
> wrote:
> 
> > On 1/18/2017 10:54 AM, Mikhail Fedosin wrote:
> >
> >> Hello!
> >>
> >> In this letter I want to tell you the current status of Glare project
> >> and discuss its future development within the entire OpenStack community.
> >>
> >> In the beginning I have to say a few words about myself - my name is
> >> Mike and I am the PTL of Glare. Currently I work as a consultant at
> >> Nokia, where we're developing the service as a universal catalog of
> >> binary data. As I understand it right, Nokia has big plans for this
> >> service, Moshe Elisha can tell you more about them.
> >>
> >> And here I want to ask the community - how exactly Glare may be useful
> >> in OpenStack? Glare was developed as a repository for all possible data
> >> types, and it has many possible applications. For example, it's a
> >> storage of vm images for Nova. Currently Glance is used for this, but
> >> Glare has much more features and this transition is easy to implement.
> >> Then it's a storage of Tosca templates. We were discussing integration
> >> with Heat and storing templates and environments in Glare, also it may
> >> be interesting for TripleO project. Mistral will store its workflows in
> >> Glare, it has already been decided. I'm not sure if Murano project is
> >> still alive, but they already use Glare 0.1 from Glance repo and it will
> >> be removed soon (in Pike afaik), so they have no other options except to
> >> start using Glare v1. Finally there were rumors about storing torrent
> >> files from Ironic.
> >>
> >> Now let me briefly describe Glare features:
> >>
> >>  * Versioning of artifacts - each artifact has a version in SemVer
> >> format and you can sort and filter by this field.
> >>  * Multiblob support - there can be several files and folders per one
> >> artifact.
> >>  * The ease of creating new artifact types with oslo_versionedobjects
> >> framework.
> >>  * Fair immutability - no one can change artifact when it's active.
> >>  * Multistore support - each artifact type data may be stored in
> >> different storages: images may go to Swift; heat templates may be stored
> >> directly in an SQL database; for Docker containers you can use Ceph, if
> >> you want.
> >>  * Advanced sorting and filtering with various operators.
> >>  * Uploaded data validation and conversion with hooks - for example,
> >> Glare may check if uploaded file was a valid Tosca template and return
> >> Bad Request if it's not.
> >>
> >> If you're interested, I recorded several demos in asciinema, that
> >> describe how Glare works and present the most useful features. Another
> >> demo about uploading hooks will be recorded and published this week.
> >>
> >> So, please tell me what you think and recommend in what direction we
> >> should develop the project. Thanks in advance!
> >>
> >> Best,
> >> Mike
> >>
> >> Useful links:
> >> [1] Api documentation in rst format:
> >> https://etherpad.openstack.org/p/glare-api
> >> [2] Basic artifact workflow on devstack: https://asciinema.org/a/97985
> >> [3] Listing of artifacts: https://asciinema.org/a/97986
> >> [4] Creating your own artifact type with oslo_vo:
> >> https://asciinema.org/a/97987
> >> [5] Locations, Tags, Links and Folders in Glare:
> >> https://asciinema.org/a/99771
> >>
> >>
> >> 
> >> __
> 

Re: [openstack-dev] [neutron] "Setup firewall filters only for required ports" bug

2017-01-19 Thread Daniel Alvarez Sanchez
On Wed, Jan 18, 2017 at 10:45 PM, Bernard Cafarelli 
wrote:

> Hi neutrinos,
>
> I would like your feedback on the mentioned changeset in title[1]
> (yes, added since Liberty).
>
> With this patch, we (should) skip ports with
> port_security_enabled=False or with an empty list of security groups
> when processing added ports [2]. But we found multiple problems here
>
> * Ports created with port_security_enabled=False
>
> This is the original bug that started this mail: if the FORWARD
> iptables chain has a REJECT default policy/last rule, the traffic is
> still blocked[3]. There is also a launchpad bug with similar details
> [4]
> The problem here: these ports must not be skipped, as we add specific
> firewall rules to allow all traffic. These iptables rules have the
> following comment:
> "/* Accept all packets when port security is disabled. */"
>
> With the current code, any port created with port security disabled will
> not have these rules (and updates do not work).
> I initially sent a patch to process these ports again [5], but there
> is more (as detailed by some in the launchpad bug)
>
> * Ports with no security groups, current code
>
> There is a bug in the current agent code [6]: even with no security
> groups, the check will return true, as the security_groups key exists
> in the port details (with value "[]").
> So the port will not be skipped.
>
> * Ports with no security groups, updated code
>
> The next step was to update the checks (security groups list not empty,
> port security True or None) and test again. This time the port was
> skipped, but this showed up in openvswitch-agent.log:
> 2017-01-18 16:19:56.780 7458 INFO
> neutron.agent.linux.iptables_firewall
> [req-c49ca24f-1df8-40d7-8c48-6aab842ba34a - - - - -] Attempted to
> update port filter which is not filtered
> c2c58f8f-3b76-4c00-b792-f1726b28d2fc
> 2017-01-18 16:19:56.853 7458 INFO
> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
> [req-c49ca24f-1df8-40d7-8c48-6aab842ba34a - - - - -] Configuration for
> devices up [u'c2c58f8f-3b76-4c00-b792-f1726b28d2fc'] and devices down
> [] completed.
>
> Which is the kind of logs we saw in the first bug report. So as an
> additional test, I tried to update this port, adding a security group.
> New log entries:
> 2017-01-18 17:36:53.164 7458 INFO neutron.agent.securitygroups_rpc
> [req-c49ca24f-1df8-40d7-8c48-6aab842ba34a - - - - -] Refresh firewall
> rules
> 2017-01-18 17:36:55.873 7458 INFO
> neutron.agent.linux.iptables_firewall
> [req-c49ca24f-1df8-40d7-8c48-6aab842ba34a - - - - -] Attempted to
> update port filter which is not filtered
> 0f2eea88-0e6a-4ea9-819c-e26eb692cb25
> 2017-01-18 17:36:58.587 7458 INFO
> neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent
> [req-c49ca24f-1df8-40d7-8c48-6aab842ba34a - - - - -] Configuration for
> devices up [u'0f2eea88-0e6a-4ea9-819c-e26eb692cb25'] and devices down
> [] completed.
>
> And the iptables configuration did not change to show the newly allowed
> ports.
>
> So with a fixed check, we end up back in the same buggy situation as the
> first one.
>
> * Feedback
>
> So which course of action should we take? After checking these 3 cases
> out, I am in favour of reverting this commit entirely, as in its
> current state it does not help for ports without security groups, and
> breaks ports with port security disabled.
>
>
After having gone through the code and debugged the situation, I'm also in
favor of reverting the patch. We should explicitly set up a rule which allows
traffic for that tap device, exactly as we do when port_security_enabled
is switched from True to False. We can't rely on traffic being implicitly
allowed.
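
To make the skip-check pitfall described above concrete, here is a minimal,
purely illustrative sketch (function and dict names are placeholders, not the
actual OVS agent code): the port details dict always carries a
security_groups key, so a presence-style check never lets a port be skipped.

  # Illustrative only -- not the real neutron agent code.
  def port_needs_filtering_buggy(port_detail):
      # True even for {'security_groups': []}: the key is always present.
      return 'security_groups' in port_detail

  def port_needs_filtering_by_contents(port_detail):
      # Looks at the contents instead of the key's mere presence.
      return bool(port_detail.get('security_groups'))

  port = {'port_security_enabled': False, 'security_groups': []}
  print(port_needs_filtering_buggy(port))        # True  -> never skipped
  print(port_needs_filtering_by_contents(port))  # False -> would be skipped

Note that, as discussed above, merely fixing that check is not the answer
either, since ports with port security disabled still need their explicit
allow rules; that is part of why the revert looks like the safest option.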

Also, on the tests side, should we add more tests only using create
> calls (port_security tests mostly update an existing port)? How to
> make sure these iptables rules are correctly applied (the ping tests
> are not enough, especially if the host system does not reject packets
> by default)?


Tests are incomplete so we should add either functional or fullstack/tempest
tests that validate these cases (ports created with port_security_enabled
set
to False, ports created with no security groups, etc.). I can try to do
that.




> [1] https://review.openstack.org/#/c/210321/
> [2] https://github.com/openstack/neutron/blob/
> a66c27193573ce015c6c1234b0f2a1d86fb85a22/neutron/plugins/
> ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1640
> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1406263
> [4] https://bugs.launchpad.net/neutron/+bug/1549443
> [5] https://review.openstack.org/#/c/421832/
> [6] https://github.com/openstack/neutron/blob/
> a66c27193573ce015c6c1234b0f2a1d86fb85a22/neutron/plugins/
> ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1521
>
> Thanks!
>
> --
> Bernard Cafarelli
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 

Re: [openstack-dev] [TC][Glance][Nova][TripleO][Heat][Mistral][Ironic][Murano] Glare

2017-01-19 Thread Renat Akhmerov

> On 19 Jan 2017, at 19:13, Mikhail Fedosin  wrote:
> 
> Plus, Swift does not provide data immutability. Where is the guarantee that 
> a user won't change their files in Swift or completely remove them? Glare manages 
> this behavior and provides full immutability for stored data, regardless of 
> the backend. In fact, Glance was originally invented to address these 
> immutability issues, but now we see that its functionality is not enough and 
> it's really hard to extend.


+1. Glare’s versioning capability seems very important to me in this regard.

Renat Akhmerov
@Nokia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-ansible] Ocata deployed on CentOS 7!

2017-01-19 Thread Sean M. Collins
That is great news! Congrats!

-- 
Sean M. Collins

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Proposing Jeremy Liu as core for cloudkitty

2017-01-19 Thread Maxime Cottret
Hi,

OK to add Jeremy as core dev for Cloudkitty.

Regards,

--
Maxime Cottret - Consultant Cloud/DataOps @ OBJECTIF-LIBRE

Mail : maxime.cott...@objectif-libre.com
Tel : 05 82 95 65 36 (standard)
Web : www.objectif-libre.com Twitter: @objectiflibre

On 2017-01-19 14:18, Christophe Sauthier wrote:

> Hello developers mailing list folks,
>
> I'd like to propose that we add Jeremy Liu (liujiong) as an OpenStack
> cloudkitty core reviewer.
>
> He has been a member of our community for many months, contributing
> very seriously in cloudkitty and cloudkitty-dashboard. He also provided
> many reviews on both projects, as you can see in his activity logs:
>
> http://stackalytics.com/report/contribution/cloudkitty/60
> http://stackalytics.com/report/contribution/cloudkitty-dashboard/60
>
> His willingness to help whenever it is needed has been really appreciated!
>
> Current Cloudkitty cores, please respond with +1 or explain your
> opinion if voting against... If there are no objections in the next 5
> days I'll add him.
>
> All the best,
>
> Christophe
>
> Christophe Sauthier       Mail : christophe.sauth...@objectif-libre.com
> CEO                       Mob : +33 (0) 6 16 98 63 96
> Objectif Libre            URL : www.objectif-libre.com
> Au service de votre Cloud Twitter : @objectiflibre
>
> Suivez les actualités OpenStack en français en vous abonnant à la Pause
> OpenStack
> http://olib.re/pause-openstack
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [qa] [infra] Proposed new Cinder gate jobs

2017-01-19 Thread Erlon Cruz
Thanks for giving this review, Michal.

On Thu, Jan 19, 2017 at 8:29 AM, Andreas Jaeger  wrote:

> On 2017-01-19 11:10, Michal Dulko wrote:
> > Hi all,
> >
> > I've seen some confusion around new Cinder CI jobs being proposed to
> > project-config in yesterday's IRC scrollback. This email aims to sum
> > this up and explain purposes of what's being proposed.
>
>
> Thanks a lot, Michal! That helps me seeing the big picture with all
> these changes,
>
> Andreas
>
> > [...]
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [openstack-ansible] Ocata deployed on CentOS 7!

2017-01-19 Thread Major Hayden
Hey folks,

Our multi-os work has paid off and I was able to wrap up a CentOS 7 deployment 
of OpenStack-Ansible's master branch yesterday. My environment only has four 
physical servers, so I deployed the basics:

  - keystone
  - nova
  - glance
  - neutron
  - heat
  - horizon
  - galera/rabbitmq/memcached/rsyslog

I did run into a few bugs and I'm working through those.  SELinux is currently 
in permissive mode[1], which isn't ideal.

There's more to come, but this is looking great so far.  The stability of 
CentOS 7 over Ubuntu 16.04 is certainly welcomed. ;)

[1] I'VE BEEN TROLLED THOROUGHLY ABOUT THIS ALREADY. SERIOUSLY. I'M WORKING ON 
IT! SHEESH!

--
Major Hayden

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] neutron-lib impact: providernet extension moved to neutron-lib

2017-01-19 Thread Boden Russell
A new version (1.1.0) of neutron-lib was recently released.
Among other things, this release rehomes the neutron providernet API
extension [1].

A consumption patch to use the rehomed code has been submitted to
neutron [2] and once merged will impact consumers who use providernet
constants from neutron.

While a patch for each affected project has been submitted [3], I only
plan to shepherd those patches in [3] that target stadium projects. For
all others (non-stadium) please encourage your team to help drive the
patch through review.

For more details on consuming neutron-lib, please see [4].


[1] https://review.openstack.org/418560/
[2] https://review.openstack.org/421562/
[3] https://review.openstack.org/#/q/topic:lib-providernet-apidef
[4]
https://github.com/openstack/neutron-lib/blob/master/doc/source/contributing.rst#phase-4-consume
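
For consumers the change is essentially an import swap. A rough sketch of what
a consuming patch may look like (the module path used below is my assumption
based on the rehoming review [1]; check [2] and [3] for the authoritative
names):

  # Before (pulling the constants from neutron itself):
  #   from neutron.extensions import providernet as provider
  # After (pulling them from the rehomed neutron-lib API definition):
  from neutron_lib.api.definitions import provider_net

  def is_vlan_network(network):
      # The constant values ('provider:network_type', etc.) are unchanged;
      # only the import location moves.
      return network.get(provider_net.NETWORK_TYPE) == 'vlan'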

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [watcher] PTG preparation

2017-01-19 Thread Antoine Cabot
Hello folks,

We are already only one month away from the PTG event (mid-cycle) for Watcher.
I started a new etherpad [1] to prepare the agenda. Please complete
the topics list with all the items you want to discuss. Even if you
don't plan to be in Atlanta, I'd like to have all Watcher community
plans discussed.
We are on track to release version 1.0 by the end of the Ocata
cycle thanks to all your contributions.

Hope to see you all in Atlanta.

Antoine (acabot)

[1] https://etherpad.openstack.org/p/pike-watcher-ptg

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Proposing Jeremy Liu as core for cloudkitty

2017-01-19 Thread Christophe Sauthier

Hello developers mailing list folks,

I'd like to propose that we add Jeremy Liu (liujiong) as an OpenStack 
cloudkitty core reviewer.


He has been a member of our community for many months, contributing 
very seriously in cloudkitty and cloudkitty-dashboard. He also provided 
many reviews on both projects, as you can see in his activity logs:


http://stackalytics.com/report/contribution/cloudkitty/60
http://stackalytics.com/report/contribution/cloudkitty-dashboard/60

His willingness to help whenever it is needed has been really appreciated!

Current Cloudkitty cores, please respond with +1 or explain your 
opinion if voting against... If there are no objections in the next 5 
days I'll add him.


All the best,

Christophe


Christophe Sauthier   Mail : 
christophe.sauth...@objectif-libre.com

CEO   Mob : +33 (0) 6 16 98 63 96
Objectif LibreURL : www.objectif-libre.com
Au service de votre Cloud Twitter : @objectiflibre

Suivez les actualités OpenStack en français en vous abonnant à la Pause 
OpenStack

http://olib.re/pause-openstack


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [charms] monitoring interface

2017-01-19 Thread Liam Young
Hi Brad,

Thanks for looking into it. I think things should actually work out of the
box as they are now. So,

juju deploy nrpe nrpe-glance
juju deploy nrpe nrpe-cinder
juju deploy nagios
juju deploy glance
juju deploy cinder
juju add-relation nrpe-glance glance
juju add-relation nrpe-glance nagios
juju add-relation nrpe-cinder cinder
juju add-relation nrpe-cinder nagios

Should add nagios checks for glance and cinder to the juju deployed nagios.
(Taken from
https://wiki.ubuntu.com/OpenStack/OpenStackCharms/ReleaseNotes1504#Monitoring
).

Ideally we would rename the nrpe-external-master interface to local-monitor
(or add it as an additional interface) but that is not needed to get it up
and running.

Thanks
Liam

On 18 January 2017 at 16:07, Brad Marshall 
wrote:

> Hi all,
>
> We're looking at adding the monitor interface to the openstack charms to
> enable us to use the nagios charm, rather than via an external nagios
> using nrpe-external-master.
>
> I believe this will just be a matter of adding in the interface, adding
> an appropriate monitor.yaml that defines the checks, and updating
> charmhelpers.contrib.charmsupport.nrpe so that when it adds checks, it
> passes the appropriate information onto the relationship.
>
> Are there any concerns with this approach? Any suggestions on things to
> watch out for?  It does mean touching every charm, but I can't see any
> other way around it.
>
> Thanks,
> Brad
> --
> Brad Marshall
> Cloud Reliability Engineer
> Bootstack Squad, Canonical
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] [ptl] PTL candidacy for Pike

2017-01-19 Thread Emilien Macchi
This is my candidacy for PTL role in the TripleO team for the Pike cycle.

https://review.openstack.org/422587

Leading the TripleO team during Ocata was tremendous.  Together, we have moved
forward in how we design, develop, deploy and use TripleO.
Though the Ocata cycle was short, I have the motivation to continue in the role
of PTL for one more cycle.

There are some areas where I would like to keep improving:
- Continue to make progress on release management.
  During Ocata, we had a stronger policy than before where we did our best to
  set and respect milestones, spec and feature freeze. I would like to
  continue progress on this during Pike so it becomes something normal
  for the team.
- Keep pushing for more CI coverage and scenarios.
  We have been doing an incredible job improving stability and coverage
  in our CI.  I would like to keep adding more services and more scenarios so we
  keep increasing the quality of TripleO.
- Facilitate the transition to tripleo-quickstart.
  We decided to use this tool during Ocata. In Pike, we'll switch our CI to use
  it.  We'll have to make it run smoothly without CI downtime.  It will require
  good communication and teamwork.
- Scaling-up the team.
  We introduced the Squads during Ocata.  I would like to continue and see the
  Squads with more responsibility (Release management, bug triage etc).
  Also, I'll support the Squads in managing meetings on their topics as needed, as
  long as it happens in the open.
  Finally, I think the Deep-Dive sessions are great and we need to keep
  leveraging them.  They have helped newcomers and increase knowledge sharing
  within the team.
- Being a catalyst in OpenStack.
  TripleO might just be an installer, but it actually contributes to making
  OpenStack better as it helps other projects such as the Puppet OpenStack
  modules, Heat, Mistral, Zaqar and more.  Also, because we have a strong CI,
  we can report quick and valuable feedback to projects (e.g. Nova, Neutron,
  etc.) when something doesn't work well outside devstack.
  I would like to make sure TripleO keeps this position in OpenStack,
  so TripleO is a reference when deploying OpenStack in production.


Thank you for your consideration,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TC][Glance][Nova][TripleO][Heat][Mistral][Ironic][Murano] Glare

2017-01-19 Thread Mikhail Fedosin
Glare does not compete with Swift; it uses this service as one of the
possible backends. On the whole, I should note that in some cases the use of
Swift is excessive: for example, for small files (a few kilobytes), it is
easier to store them directly in the database. And Glare lets you do just
that - keep large files in stores like Swift or Ceph, and use a more
appropriate location for small ones.

Plus, Swift does not provide data immutability. Where is the guarantee that
a user won't change their files in Swift or completely remove them? Glare
manages this behavior and provides full immutability for stored data,
regardless of the backend. In fact, Glance was originally invented to address
these immutability issues, but now we see that its functionality is not
enough and it's really hard to extend.

On Thu, Jan 19, 2017 at 6:25 AM, Lingxian Kong  wrote:

>
> On Thu, Jan 19, 2017 at 5:54 AM, Mikhail Fedosin 
> wrote:
>
>> And here I want to ask the community - how exactly Glare may be useful in
>> OpenStack? Glare was developed as a repository for all possible data types,
>> and it has many possible applications. For example, it's a storage of vm
>> images for Nova. Currently Glance is used for this, but Glare has much more
>> features and this transition is easy to implement. Then it's a storage of
>> Tosca templates. We were discussing integration with Heat and storing
>> templates and environments in Glare, also it may be interesting for TripleO
>> project. Mistral will store its workflows in Glare, it has already been
>> decided. I'm not sure if Murano project is still alive, but they already
>> use Glare 0.1 from Glance repo and it will be removed soon (in Pike afaik),
>> so they have no other options except to start using Glare v1. Finally there
>> were rumors about storing torrent files from Ironic.
>
>
> ​Seems Swift already could do such things.​
>
>
> Cheers,
> Lingxian Kong (Larry)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vote][kolla] deprecation for Debian distro support

2017-01-19 Thread Mauricio Lima
2

2017-01-19 8:50 GMT-03:00 Steven Dake (stdake) :

> My vote is for option 2 to deprecate Debian as there has been very little
> activity and operators seem uninterested in Debian as a platform.
>
> We could always add it back in at a later date if operators were to
> request it and the Debian team were interested in maintaining it.
>
> Regards
> -steve
>
>
> -Original Message-
> From: Christian Berendt 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Thursday, January 19, 2017 at 3:53 AM
> To: "openstack-dev@lists.openstack.org"  >
> Subject: [openstack-dev] [vote][kolla] deprecation for Debian distro
> support
>
> As discussed in one of the last team meetings I want to propose the
> deprecation (this cycle) and removal (next cycle) of the Debian support in
> Kolla.
>
> More than 1 week ago I sent a pre warning mail to the
> openstack-operators mailing list, without any reply [0].
>
> Kolla core reviewers, please vote now. The vote will be open for 7
> days (26.01.2017).
>
> 1. Kolla needs support for Debian, it should not be deprecated
>
> 2. Kolla should deprecate support for Debian
>
> [0] http://lists.openstack.org/pipermail/openstack-operators/
> 2017-January/012427.html
>
> --
> Christian Berendt
> Chief Executive Officer (CEO)
>
> Mail: bere...@betacloud-solutions.de
> Web: https://www.betacloud-solutions.de
>
> Betacloud Solutions GmbH
> Teckstrasse 62 / 70190 Stuttgart / Deutschland
>
> Geschäftsführer: Christian Berendt
> Unternehmenssitz: Stuttgart
> Amtsgericht: Stuttgart, HRB 756139
>
>
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vote][kolla] deprecation for Debian distro support

2017-01-19 Thread Steven Dake (stdake)
My vote is for option 2 to deprecate Debian as there has been very little 
activity and operators seem uninterested in Debian as a platform.

We could always add it back in at a later date if operators were to request it 
and the Debian team were interested in maintaining it.

Regards
-steve


-Original Message-
From: Christian Berendt 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, January 19, 2017 at 3:53 AM
To: "openstack-dev@lists.openstack.org" 
Subject: [openstack-dev] [vote][kolla] deprecation for Debian distro support

As discussed in one of the last team meetings I want to propose the 
deprecation (this cycle) and removal (next cycle) of the Debian support in 
Kolla.

More than 1 week ago I sent a pre warning mail to the openstack-operators 
mailing list, without any reply [0].

Kolla core reviewers, please vote now. The vote will be open for 7 days 
(26.01.2017).

1. Kolla needs support for Debian, it should not be deprecated

2. Kolla should deprecate support for Debian

[0] 
http://lists.openstack.org/pipermail/openstack-operators/2017-January/012427.html

-- 
Christian Berendt
Chief Executive Officer (CEO)

Mail: bere...@betacloud-solutions.de
Web: https://www.betacloud-solutions.de

Betacloud Solutions GmbH
Teckstrasse 62 / 70190 Stuttgart / Deutschland

Geschäftsführer: Christian Berendt
Unternehmenssitz: Stuttgart
Amtsgericht: Stuttgart, HRB 756139


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [freezer] Freezer PTL Non-candidacy

2017-01-19 Thread Mathieu, Pierre-Arthur
Hello everyone,

As I already announced during our IRC meeting, I'm not planning to run for the
Pike PTL position.

Having held this role for two cycles now, I think it is time for someone else
to step in. I'll continue to contribute as a core-team member and will be
available to help the new PTL when needed.

I would like to say a BIG thanks to all the contributors as well as the core
team.
This year, we have seen the completion of big features as well as the addition
of many new contributors and a few new companies to the project. This makes me
very optimistic about Freezer's future.

This past year was a very rewarding experience for me and this is mainly thanks
to all of you.

Thanks,
Pierre
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [daisycloud-core] Agenda for IRC meeting 0800UTC Jan. 20 2017

2017-01-19 Thread hu.zhijiang
1) Roll Call

2) OPNFV: Daisy CI Progress

3) OPNFV: Daisy Support Escalator

4) OpenStack: Core Code Definition Wind Up

5) OpenStack: Choose Kolla Image Version To Deploy/Upgrade


B. R.,

Zhijiang

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vote][kolla] deprecation for Debian distro support

2017-01-19 Thread Eduardo Gonzalez
My vote is for option 2, to deprecate Debian images.
Since the last deprecation vote nobody tried to maintain Debian images.
No gates, not tested and not maintained sound like good reasons to
deprecate Debian.

Regards

2017-01-19 10:53 GMT+00:00 Christian Berendt :

> As discussed in one of the last team meetings I want to propose the
> deprecation (this cycle) and removal (next cycle) of the Debian support in
> Kolla.
>
> More than 1 week ago I sent a pre warning mail to the openstack-operators
> mailing list, without any reply [0].
>
> Kolla core reviewers, please vote now. The vote will be open for 7 days
> (26.01.2017).
>
> 1. Kolla needs support for Debian, it should not be deprecated
>
> 2. Kolla should deprecate support for Debian
>
> [0] http://lists.openstack.org/pipermail/openstack-operators/
> 2017-January/012427.html
>
> --
> Christian Berendt
> Chief Executive Officer (CEO)
>
> Mail: bere...@betacloud-solutions.de
> Web: https://www.betacloud-solutions.de
>
> Betacloud Solutions GmbH
> Teckstrasse 62 / 70190 Stuttgart / Deutschland
>
> Geschäftsführer: Christian Berendt
> Unternehmenssitz: Stuttgart
> Amtsgericht: Stuttgart, HRB 756139
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [containers][magnum] Magnum team at Summit?

2017-01-19 Thread Thierry Carrez
Adrian Otto wrote:
>> On Jan 18, 2017, at 10:18 AM, Josh Berkus  wrote:
>> [...]
>> Is there going to be a magnum team meeting around OpenStack Summit in
>> Boston?
>>
>> I'm the community manager for Atomic Host, so if you're going to have
>> Magnum meetings, I'd like to send you some Atomic engineers to field any
>> questions/issues at the Summit.
> 
> Thanks for your question. We are planning to have our team design meetings at 
> the upcoming PTG event in Atlanta. We are not currently planning to have any 
> such meetings in Boston.

Quick remark: while the Magnum team won't have a dedicated "team
meeting" in Boston, we'll have a general community "Forum" in Boston in
which a Magnum packaging/requirements discussion would totally be
on-topic, especially if there are open issues.

The goal of the "Forum" is for the community to get together beyond
specific teams, discuss and share needs/requirements early enough to
influence what will get worked on in the next development cycle. Your
discussion topic certainly sounds like it would belong there, in case
you won't get it solved before.

More info on the Forum:
http://superuser.openstack.org/articles/openstack-forum/

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [QA] Announcing my candidacy for PTL of the Pike cycle

2017-01-19 Thread Ghanshyam Mann
Hi All,

First and foremost, I would like to wish you all a successful 2017 ahead, and with 
this I'm announcing my PTL candidacy for the Quality Assurance team for the Pike 
release cycle.

I am glad to work in the OpenStack community and would like to thank all the 
contributors who supported me in exploring new things, which brings out my best 
for the community.

Let me introduce myself briefly. I joined OpenStack community development in 2014, 
during the middle of the Icehouse release. Currently, I'm contributing to the QA 
projects and Nova, and I am a core member of Tempest. Since the Barcelona Summit, I 
have volunteered as a mentor in the upstream training. It's always a great 
experience to introduce the OpenStack upstream workflow to new contributors and 
encourage them.

Following are my contribution activities:
* Review:  
http://stackalytics.com/?release=all=marks_id=ghanshyammann
* Commit:  
http://stackalytics.com/?release=all=commits_id=ghanshyammann

I have worked on some key areas in QA, like interfaces migration to lib, JSON 
schema response validation (for compute), the API microversion testing framework in 
Tempest, improving test coverage, bug triage, etc.

The QA program has improved immensely since it was introduced, which has increased 
upstream development quality as well as helped production clouds with their testing 
and stability. We have a lot of ideas from many different contributors to keep 
improving QA, which is phenomenal and which I truly appreciate.

Moving forwards, following are my focus areas for Pike Cycle:

* Help the other projects' development and plugin improvement:
OpenStack projects consider quality important, and the QA team needs to provide a 
useful testing framework for them. For projects that need to implement their 
Tempest tests in a plugin, the focus will be on helping to improve plugin tests and 
so project quality. A lot of Tempest interfaces are moving towards stable 
interfaces, and existing plugin tests have needed to be fixed multiple times. We 
are taking care of those and helping them to migrate smoothly. But there are still 
many interfaces that will migrate to lib and then need to be adopted on the plugin 
side. I'd like to have some mechanism/automation to let plugins know about changed 
interfaces before they break them, and also to help them use the framework 
correctly. This helps the other non-core projects' tests.

* Improve QA projects for Production Cloud:
This will be the main focus area. Making QA projects more useful for production 
cloud testing is/will be a great achievement for the QA team. This area has 
improved a lot over the last couple of cycles, and there is still a lot to do. We 
have to improve production scenario testing coverage and make all QA projects easy 
to configure and use. During the Barcelona summit, 2 new projects were initiated 
which will definitely help to achieve this goal:
  *RBAC Policy -  https://github.com/openstack/patrole
  *HA testing  -  https://review.openstack.org/#/c/374667/
  https://review.openstack.org/#/c/399618/
  *Hoping for more in future
There will be more focus on those projects and on new ideas which will help 
production cloud testing in a more powerful way.

* JSON Schema *response* validation for projects:
JSON schema response validation for compute APIs has been very helpful in keeping 
up API quality and compatibility. Currently many projects support microversions, 
which provide a way to introduce API changes in a backward compatible way. I'd 
like to concentrate on response schema validation for those projects as well. This 
helps OpenStack interoperability and API compatibility.
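
For those less familiar with it, a Tempest response schema is a plain dict
keyed by status code and response body, validated with JSON schema; a
simplified, purely illustrative example (not an actual schema from Tempest,
and the resource names here are made up) looks like this:

  # Simplified illustration of a Tempest-style response schema; the real
  # schemas are more detailed and get a new version per microversion.
  show_resource = {
      'status_code': [200],
      'response_body': {
          'type': 'object',
          'properties': {
              'resource': {
                  'type': 'object',
                  'properties': {
                      'id': {'type': 'string'},
                      'status': {'type': 'string'},
                  },
                  'required': ['id', 'status'],
              },
          },
          'required': ['resource'],
      },
  }

When a microversion changes a response, a new schema version is added rather
than the old one being edited, which is what keeps compatibility checkable.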

* Improve Documentation and UX:
Documentation and UX are key parts of any software. There have been huge 
improvements on the UX and documentation side, and a new Tempest workflow is 
available. Still, configuration and usage are pain points for users. During the 
summit/PTG and on other platforms I'd like us to gather more feedback from users 
and improve accordingly. Making configuration easy for people is one of the areas 
we will be focusing on.

* Bring on more contributor and core reviewers:
The QA projects have been among the most active projects during the last couple of 
years, and I'd like the team to mentor new contributors to help the QA projects 
towards their planned goals and get them to a place where they will be ready to 
become core reviewers.

* Migrate required Tempest Interfaces as stable to lib:
Together we have done a great job in this area, which has helped plugin tests. In 
the service clients migration, the Object Storage service clients are left and all 
others have been moved as stable interfaces. Lots of other frameworks/interfaces 
are also available in lib. But still a lot of unstable interfaces are being used in 
plugins, and these should be migrated to lib soon. In the Pike cycle, we will wind 
up all the remaining service client migration and other required interfaces.

* Last but not least, openness is the great power of Open Source, and so it is for 
OpenStack. All new ideas from anyone 

[openstack-dev] [all][tc][swift][designate] New language addition process has been approved

2017-01-19 Thread Flavio Percoco

Greetings,

The Technical Committee recently approved a new reference document which
describes how new programming languages can be proposed for inclusion as
supported languages by OpenStack. The new reference document can be found
here[0].

I'd like to take this chance not only to share this new process - which in my
opinion is a good step forward for the entire community - but to invite other
teams to take a stab at it and move forward the inclusion of Go, which is the
last language that was discussed for inclusion in the community.

I hope folks will find this process useful and that it'll help innovate
OpenStack. I'm sure the process is not perfect and that we'll have to refine it
as we go, so let's get going.

Thanks everyone,
Flavio

[0] https://governance.openstack.org/tc/reference/new-language-requirements.html

--
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vote][kolla] deprecation for Debian distro support

2017-01-19 Thread Christian Berendt
As discussed in one of the last team meetings I want to propose the deprecation 
(this cycle) and removal (next cycle) of the Debian support in Kolla.

More than 1 week ago I sent a pre warning mail to the openstack-operators 
mailing list, without any reply [0].

Kolla core reviewers, please vote now. The vote will be open for 7 days 
(26.01.2017).

1. Kolla needs support for Debian, it should not be deprecated

2. Kolla should deprecate support for Debian

[0] 
http://lists.openstack.org/pipermail/openstack-operators/2017-January/012427.html

-- 
Christian Berendt
Chief Executive Officer (CEO)

Mail: bere...@betacloud-solutions.de
Web: https://www.betacloud-solutions.de

Betacloud Solutions GmbH
Teckstrasse 62 / 70190 Stuttgart / Deutschland

Geschäftsführer: Christian Berendt
Unternehmenssitz: Stuttgart
Amtsgericht: Stuttgart, HRB 756139


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [containers][magnum] Magnum team at Summit?

2017-01-19 Thread Ricardo Rocha
Hi.

It would be great to meet in any case.

We've been exploring Atomic system containers (as in 'atomic install
--system ...') for our internal plugins at CERN, and having some
issues with runc and selinux definitions plus some atomic command
bugs. It's mostly due to the config.json being a hard one to build
manually (or the config.json.template passed to Atomic), especially
after we've got used to the nice docker usability by now :) In any
case the atomic blog posts are incredibly useful, thanks for that!

To explain why we're trying this: we're running all our internal
plugins inside containers (this is for support of internal systems we
add to upstream Magnum). Running them in docker is problematic for two
reasons:
* they are visible to the users of the cluster (which is confusing,
and allows them to easily shoot themselves in the foot by killing
them)
* they cause a race condition when restarting docker if volumes were
previously created, as docker tries to make the volumes available
before launching any container

Having them managed by systemd and run directly in runc solves both of
the issues above. I understand docker 1.13 has a new plugin API which
might (or maybe not) help with this, but I haven't had time to try it
(all of the above is with docker 1.12).

Cheers,
  Ricardo

On Wed, Jan 18, 2017 at 7:18 PM, Josh Berkus  wrote:
> Magnum Devs:
>
> Is there going to be a magnum team meeting around OpenStack Summit in
> Boston?
>
> I'm the community manager for Atomic Host, so if you're going to have
> Magnum meetings, I'd like to send you some Atomic engineers to field any
> questions/issues at the Summit.
>
> --
> --
> Josh Berkus
> Project Atomic
> Red Hat OSAS
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [qa] [infra] Proposed new Cinder gate jobs

2017-01-19 Thread Andreas Jaeger
On 2017-01-19 11:10, Michal Dulko wrote:
> Hi all,
> 
> I've seen some confusion around new Cinder CI jobs being proposed to
> project-config in yesterday's IRC scrollback. This email aims to sum
> this up and explain purposes of what's being proposed.


Thanks a lot, Michal! That helps me seeing the big picture with all
these changes,

Andreas

> [...]
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [validations][ui][tripleo] resetting state of validations

2017-01-19 Thread Tomas Sedovic

On 01/18/2017 10:14 PM, Dan Trainor wrote:

Hi -

Is there a way to reset the state of all the validations that have
previously ran, back to the original state they were prior to running?

Using the UI, for example, some validations (by design) run as soon as
you log in.  Others run after different actions are completed.  But
there's a state at which none of the validations have been ran, prior to
logging in to the UI.  I want to re-run those validations as if I had
logged in to the UI for the first time, for testing purposes.

Thanks!
-dant



(adding tripleo to the subject)

I don't believe there is a Mistral action that would let you do a reset 
like that, but a potential workaround would be to clone an existing 
plan. When you switch to the clone in the UI, it should be in the state 
you're asking for.


I don't have a tripleo env handy so I can't verify this will work, but I 
do seem to remember it behaving that way.


Tomas




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder] [qa] [infra] Proposed new Cinder gate jobs

2017-01-19 Thread Dulko, Michal
Hi all,

I've seen some confusion around new Cinder CI jobs being proposed to
project-config in yesterday's IRC scrollback. This email aims to sum
this up and explain purposes of what's being proposed.

Background
==

For a few releases we've been aiming to increase our functional and
integration test coverage. This has manifested in adding new Tempest
tests, enabling functional tests, providing CIs for open source volume
drivers and enabling multinode Grenade testing of rolling upgrades.
We're continuing these efforts with various new jobs:

Multinode grenade
=

In Newton we've introduced a job that tests master c-api and c-sch with
stable c-vol and c-bak.

We would like to be able to test other combinations as well. Currently
Grenade doesn't support upgrading services on a node one by one while
running tests in between, which is why we've decided to create
multiple jobs. This is being developed in [1].

I understand that two more multinode jobs put a lot of burden on the gate's
resources and that's why we plan to keep these jobs in the experimental
queue. We can fire them up on potentially breaking changes like RPC API
modifications and DB migrations.

Zero downtime
=

This was triggered by introduction of assert:supports-zero-downtime-
upgrade tag [2] and Cinder's implementation is being worked on in [3].
The exact testing solution is currently evaluated in Nova and Cinder's
implementation is following that. I think adding this job for Cinder is
future work - we'll let Nova team spearhead this.

Note that at first patch [3] was to introduce 3 more multinode jobs. I
don't think this will be necessary and we will require only a single
job. Anyway - that's future.

Volume migration


This is being worked on in [4] and is Cinder's equivalent of gate-
tempest-dsvm-multinode-live-migration-ubuntu-xenial in Nova.

Run in-tree tests
=

This effort aims to increase Cinder's community control over what
Tempest tests are run in Cinder jobs. It's gathered under the run-intree-
tests topic [5].

ZeroMQ (merged)
===

This case is pretty simple: gate-tempest-dsvm-zeromq-multibackend-
ubuntu-xenial in the experimental queue aims to test the multibackend
scenario with ZeroMQ. Such a scenario wasn't functional until [6] was
merged. I believe that we can pretty easily identify patches that can
potentially break ZeroMQ support, so this will stay in experimental for
now and be run only on demand.
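
For reviewers who haven't used it before: experimental jobs are listed under
Cinder's entry in project-config's Zuul layout, in the experimental pipeline
(roughly as sketched below - this is an illustration, not a copy of the real
file), and they only run when someone leaves a "check experimental" comment
on the Gerrit change:

  # Rough sketch of the project-config layout entry, for illustration only.
  - name: openstack/cinder
    experimental:
      - gate-tempest-dsvm-zeromq-multibackend-ubuntu-xenial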

I hope this helps to clear out some doubts. As you can see some of the
jobs with the highest demand for gate resources are intended to only
stay in experimental queue to be run by Cinder reviewers on demand.

[1] https://review.openstack.org/#/c/384836/
[2] 
https://governance.openstack.org/tc/reference/tags/assert_supports-zero-downtime-upgrade.html
[3] https://review.openstack.org/#/c/420375/
[4] https://review.openstack.org/#/c/381737
[5] https://review.openstack.org/#/q/topic:run-intree-tests
[6] https://review.openstack.org/#/c/398452/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

