Re: [openstack-dev] [nova][neutron]Nova cells v2+Neutron+Tricircle, it works

2017-06-06 Thread joehuang
Hello, Matt,

See inline comments with [joehuang]

Best Regards
Chaoyi Huang (joehuang)


From: Matt Riedemann [mriede...@gmail.com]
Sent: 07 June 2017 10:09
To: openstack-dev@lists.openstack.org; openstack-operat...@lists.openstack.org
Subject: Re: [openstack-dev] [nova][neutron]Nova cells v2+Neutron+Tricircle, it works

On 5/28/2017 6:58 PM, joehuang wrote:
> Hello,
>
> There was one session at the OpenStack Boston summit about Neutron
> multi-site, discussing how to make Neutron cells-aware [1].
>
> We have done an experiment showing how Nova cells v2 + Neutron + Tricircle
> can work very well:

Thanks for testing this out Joe. It's great to get some feedback about
people experimenting with cells v2, especially this close to master - we
generally seem to have a 12-18 month latency on feedback for big things
like this so it's hard to react quickly to something a year old. :)

>
> Following the guide [2], you will have a two-cell environment: cell1 in
> node1 and cell2 in node2, with Nova-api/scheduler/conductor/placement in
> node1. To simplify the integration, the region for Nova-api is
> registered as CentralRegion in Keystone.

This sounds sort of like what Dan Smith is doing in devstack for
upstream CI:

https://review.openstack.org/#/c/436094/

That's a 2-node job so we have to turn it into multi-cell with just
those two nodes, and one has to be running the controller services.

>
> At the same time, the tricircle devstack plugin will also install a
> RegionOne Neutron in node1 to work with cell1 and a RegionTwo Neutron in
> node2, i.e. we'll have one local Neutron for each cell; the neutron
> endpoint url on a cell's compute nodes will be configured to point to the
> local Neutron. Each local Neutron will be configured with the Tricircle
> local Neutron plugin.
>
> We just mentioned that Nova-API is registered in CentralRegion. The
> tricircle devstack plugin will also start a central Neutron server and
> register it in CentralRegion (the same region as Nova-API); the central
> Neutron server will be installed with the Tricircle central Neutron plugin.
> In Nova-api's configuration nova.conf, the neutron endpoint url is
> configured to the central Neutron server's url.

If I'm understanding this correctly, nova-api will talk to Tricircle
which is federating the networking API for Neutron across the two
regions for "local" Neutron, right?

[joehuang] nova-api talks to the central Neutron server, not to Tricircle.
The central Neutron server is installed with the Tricircle central plugin.

But what about nova-compute running within the cells? Are those talking
*only* to the Neutron deployed local to the same cell that nova-compute
is in? Or can each nova-compute talk to the central Neutron server?

[joehuang] nova-compute talks *only* to the Neutron deployed local to the
same cell that nova-compute is in.
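
(Concretely, the split looks roughly like this in nova.conf; the hostnames
below are illustrative, not taken from the guide:

  # on the node running the CentralRegion nova-api
  [neutron]
  url = http://central-neutron.example.org:9696

  # on a compute node in cell1
  [neutron]
  url = http://regionone-neutron.example.org:9696
)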


I think most of us in Nova have been considering external services as
global to Nova, so Neutron, Glance, Placement, Keystone and Cinder.

We don't support move operations, e.g. live migration, for server
instances across cells yet, but when we do we'd need to know the Neutron
topology to know which hosts we can send an instance to. This is sort of
touched on in the Neutron routed networks spec for Nova, which involves
using the Placement service for defining aggregate associations between
compute hosts and pools of network resources:

https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/neutron-routed-networks.html

[joehuang] Routed networks can address Neutron scalability to some extent, but
they just leave tenant network isolation and management to the outside world.
The complexity is still there, just outside Neutron/Nova; if we don't care
about tenant network isolation, then it's OK. Tricircle supports VxLAN, so the
overlay network can be extended to the host you want to migrate to. With
routed networks you have to know the network topology, because the network
connectivity is defined outside Neutron, not inside it, and some hosts may not
be connected to a given network. So compared to routed networks, Tricircle
provides more flexible and tenant-isolated networking capability through
Neutron.

>
> In both the central Neutron server and the local Neutrons, the nova endpoint
> url configuration points to the central Nova-api url; this is for
> call-back messages to Nova-API.

Yup, so Neutron sends an event to the central nova-api endpoint,
which knows which cell an instance is in and can route the message
appropriately. I'm thinking of the os-server-external-events API in this
case.

>
> After the installation, you now have one CentralRegion with one Nova API
> and one Neutron server, and with regard to Nova multi-cells, each cell is
> accompanied by one local Neutron. (A 1:1 mapping between cell and local
> Neutron is not required; multiple cells may be mapped to one local
> Neutron.)
>
> If you create a network/router without an availability zone specified, a
> global network/router is 

Re: [openstack-dev] [nova] Fix an issue with resolving citations in nova-specs

2017-06-06 Thread Takashi Natsume

On 2017/06/05 22:19, Sean Dague wrote:

On 06/05/2017 01:06 AM, Takashi Natsume wrote:

But IMO, it is better to fix citations in specs (*4)
rather than capping the sphinx version.


Thank you for the patch, I just merged *4.


Thank you.

It is no longer necessary to cap the sphinx version in nova-specs.
So I submitted the following patch to uncap the sphinx version.

https://review.openstack.org/#/c/471153/

Regards,
Takashi Natsume
NTT Software Innovation Center
E-mail: natsume.taka...@lab.ntt.co.jp




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron]Nova cells v2+Neutron+Tricircle, it works

2017-06-06 Thread Matt Riedemann

On 5/28/2017 6:58 PM, joehuang wrote:

Hello,

There was one session at the OpenStack Boston summit about Neutron
multi-site, discussing how to make Neutron cells-aware [1].


We have done an experiment showing how Nova cells v2 + Neutron + Tricircle
can work very well:


Thanks for testing this out Joe. It's great to get some feedback about 
people experimenting with cells v2, especially this close to master - we 
generally seem to have a 12-18 month latency on feedback for big things 
like this so it's hard to react quickly to something a year old. :)




Following the guide [2], you will have a two-cell environment: cell1 in
node1 and cell2 in node2, with Nova-api/scheduler/conductor/placement in
node1. To simplify the integration, the region for Nova-api is
registered as CentralRegion in Keystone.


This sounds sort of like what Dan Smith is doing in devstack for 
upstream CI:


https://review.openstack.org/#/c/436094/

That's a 2-node job so we have to turn it into multi-cell with just 
those two nodes, and one has to be running the controller services.




At the same time, the tricircle devstack plugin will also install a
RegionOne Neutron in node1 to work with cell1 and a RegionTwo Neutron in
node2, i.e. we'll have one local Neutron for each cell; the neutron
endpoint url on a cell's compute nodes will be configured to point to the
local Neutron. Each local Neutron will be configured with the Tricircle
local Neutron plugin.


We just mentioned that Nova-API is registered in CentralRegion. The
tricircle devstack plugin will also start a central Neutron server and
register it in CentralRegion (the same region as Nova-API); the central
Neutron server will be installed with the Tricircle central Neutron plugin.
In Nova-api's configuration nova.conf, the neutron endpoint url is
configured to the central Neutron server's url.


If I'm understanding this correctly, nova-api will talk to Tricircle 
which is federating the networking API for Neutron across the two 
regions for "local" Neutron, right?


But what about nova-compute running within the cells? Are those talking 
*only* to the Neutron deployed local to the same cell that nova-compute 
is in? Or can each nova-compute talk to the central Neutron server?


I think most of us in Nova have been considering external services as 
global to Nova, so Neutron, Glance, Placement, Keystone and Cinder.


We don't support move operations, e.g. live migration, for server 
instances across cells yet, but when we do we'd need to know the Neutron 
topology to know which hosts we can send an instance to. This is sort of 
touched on in the Neutron routed networks spec for Nova, which involves 
using the Placement service for defining aggregate associations between 
compute hosts and pools of network resources:


https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/neutron-routed-networks.html



In both the central Neutron server and the local Neutrons, the nova endpoint
url configuration points to the central Nova-api url; this is for
call-back messages to Nova-API.


Yup, so Neutron sends an event to the central nova-api endpoint,
which knows which cell an instance is in and can route the message 
appropriately. I'm thinking of the os-server-external-events API in this 
case.
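
For reference, an external event sent by Neutron to that API looks roughly
like this (the UUIDs below are placeholders):

  POST /v2.1/os-server-external-events
  {
      "events": [
          {
              "name": "network-vif-plugged",
              "server_uuid": "<instance uuid>",
              "tag": "<port id>",
              "status": "completed"
          }
      ]
  }

nova-api then looks up which cell the instance lives in and routes the event
to the right compute host.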




After the installation, you now have one CentralRegion with one Nova API
and one Neutron server, and with regard to Nova multi-cells, each cell is
accompanied by one local Neutron. (A 1:1 mapping between cell and local
Neutron is not required; multiple cells may be mapped to one local
Neutron.)


If you create a network/router without an availability zone specified, a
global network/router is applied, which instances from any cell can be
attached to. If you create a network/router with an availability zone
specified, you get a scoped network/router, i.e. the network/router is only
presented in the corresponding cells.


OK, this part about network AZs is probably related to what I was asking 
about above. I think Nova is pretty minimal when it comes to dealing 
with AZ and Neutron, i.e. when Nova creates a port it creates the port 
in the same AZ that the instance is in. I'm not sure what happens if 
that AZ does not exist in Neutron. I do know that managing AZs between 
Nova and Cinder has been a constant source of bugs and frustration:


http://lists.openstack.org/pipermail/openstack-operators/2017-May/013622.html



Note:
1. Because Nova multi-cells is still under development, there will be
some issues in deployment and experience; some typical troubleshooting
has been provided in the document [2].
2. The RegionOne and RegionTwo names registered by the local Neutrons can
be changed to better names if needed, for example
"cell1-subregion", "cell2-subregion", etc.


Feedback and contributions are welcome to make this mode work.

[1]http://lists.openstack.org/pipermail/openstack-dev/2017-May/116614.html

Re: [openstack-dev] [tc] [all] TC Report 23

2017-06-06 Thread Matt Riedemann

On 6/6/2017 5:10 PM, Chris Dent wrote:

This week had a scheduled TC meeting for the express purpose of
discussing what to do about PostgreSQL. The remainder of this
document has notes from that meeting.


Just wanted to say thanks for the nice summary. I just got done writing 
up something similar and I'm happy to say they said the same things. 
Just wish I'd seen this earlier. :)


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][infra] Missing OpenSSL headers (fatal error: openssl/opensslv.h: No such file or directory)

2017-06-06 Thread Paul Belanger
Greetings,

If your project is seeing the following errors, this likely means you do not
have libssl development headers included in your bindep.txt file.

To fix this you can add the following to your local bindep.txt file:

  libssl-dev [platform:dpkg]
  openssl-devel [platform:rpm]

This will ensure centos-7 and ubuntu-xenial (and trusty) properly set them
up for you.

This is a result of openstack-infra removing npm / nodejs as a build time
dependency for our DIB images.

If you have questions, please join us in #openstack-infra

-PB

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] [all] TC Report 23

2017-06-06 Thread Chris Dent


This week had a TC meeting. Notes from that are in the last
section below. Information about other pending or new changes that
may impact readers is in the earlier sections.

# New Things

The TC now has [office
hours](https://governance.openstack.org/tc/#office-hours) and a
dedicated IRC channel on freenode: `#openstack-tc`. Office hours will
be for an hour at 09:00 UTC on Tuesdays, 01:00 UTC on Wednesdays,
and 15:00 UTC on Thursdays. The idea is that some segment of the
TC membership will be available during at least these times for
unstructured discussion with whomever from the community wants to
join in.

`etcd` has been approved as a [base
service](https://governance.openstack.org/tc/reference/base-services.html).
Emilien has posted an
[update](http://lists.openstack.org/pipermail/openstack-dev/2017-June/117943.html)
on using etcd with configuration management.

# Pending Stuff

## Queens Community Goals

[Last week's report](https://anticdent.org/tc-report-22.html) has a
bunch of links on community goals for Pike. There are enough of them
that we'll have to pick and choose amongst them. A new one proposes
[policy and docs in code](https://review.openstack.org/#/c/469954/).
The document has a long list of benefits of doing this. A big one is
that you can end up with a much smaller policy file, or even one that
doesn't exist at all if you choose to use the defaults.

The [email version of the
report](http://lists.openstack.org/pipermail/openstack-dev/2017-May/117655.html)
spawned a series of subthreads on the efficacy and relative fairness
of how we deal with plugins in tempest. It's unclear yet if there is
any actionable followup from that. One option might be to propose an
adjustment to the original [split plugins
goal](https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html)
to see how much or little support there is for the idea of all tests
being managed via plugins.

## Managing Binary Artifacts

With the increased interest in containers and integrating and
interacting with other communities (where the norms and requirements
for useful releases are sometimes different from those established
in OpenStack) some clarity was required on the constraints a project
must satisfy to do binary releases. [Guidelines for managing
releases of binary
artifacts](https://review.openstack.org/#/c/469265/) have been
proposed. They are not particularly onerous but the details deserve
wide review to make sure we're not painting ourselves into any
corners or introducing unnecessary risks.

# Meeting Stuff

This week had a scheduled TC meeting for the express purpose of
discussing what to do about PostgreSQL. The remainder of this
document has notes from that meeting.

There are several different concerns related to PostgreSQL, but the
most pressing one is that much of the OpenStack documentation makes
it appear that the volume of testing and other attention that MySQL
receives is also applied to PostgreSQL. This is not the case (for a
variety of reasons). There has been general agreement that something
must be done, at least presenting warnings in the documentation.

A [first proposal](https://review.openstack.org/#/c/427880/) was created
which provides a good deal of historical context and laid out some
steps for making the current state of PostgreSQL in OpenStack more
clear. After some disagreement over the extent and reach of the document
I created a [second proposal](https://review.openstack.org/#/c/465589/)
that tried to say less in general but specifically less about
MySQL and tried to point to a future where PostgreSQL could be a
legitimate option (in part because there are real people out there using
it, in part because two implementations is healthy in many ways).

This drew out a lot of discussion, including some about [the
philosophy of how we manage
databases](http://lists.openstack.org/pipermail/openstack-dev/2017-May/117148.html),
but much of it only identified fairly tightly held differences and
did not move us towards helping real people.

After some fatigue it was decided to have this meeting whereupon we
decided (taking the entire hour [to talk about
it](http://eavesdrop.openstack.org/meetings/tc/2017/tc.2017-06-06-20.00.log.html))
that there were two things we could agree to, one we could not, and
a need to have a discussion about some of the philosophical concerns
at some other time.

The things we can agree about are:

* The OpenStack documentation needs to indicate the lower attention
  that PostgreSQL currently receives from the upstream community.

* We need better insight into the usage of OpenStack by people who are
  "behind" a vendor and may not care about or know about the user
  survey and thus we need to work with the foundation board to
  improve that situation.

Where we don't agree is whether the resolution being proposed needs
to include information about work being done to evaluate where on a
scale of "no big deal" to "really hard" a transition to MySQL from

Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-06-06 Thread Erik McCormick
On Tue, Jun 6, 2017 at 4:44 PM, Lance Bragstad  wrote:
>
>
> On Tue, Jun 6, 2017 at 3:06 PM, Marc Heckmann 
> wrote:
>>
>> Hi,
>>
>> On Tue, 2017-06-06 at 10:09 -0500, Lance Bragstad wrote:
>>
>> Also, with all the people involved with this thread, I'm curious what the
>> best way is to get consensus. If I've tallied the responses properly, we
>> have 5 in favor of option #2 and 1 in favor of option #3. This week is spec
>> freeze for keystone, so I see a slim chance of this getting committed to
>> Pike [0]. If we do have spare cycles across the team we could start working
>> on an early version and get eyes on it. If we straighten out everyone's
>> concerns early we could land option #2 early in Queens.
>>
>>
>> I was the only one in favour of option 3 only because I've spent a bunch
>> of time playing with option #1 in the past. As I mentioned previously in the
>> thread, if #2 is more in line with where the project is going, then I'm all
>> for it. At this point, the admin scope issue has been around long enough
>> that Queens doesn't seem that far off.
>
>
> From an administrative point-of-view, would you consider option #1 or option
> #2 to be better long term?
>

Count me as another +1 for option 2. It's the right way to go long
term, and we've lived with how it is now long enough that I'm OK
waiting a release or even 2 more for it with things as is. I think
option 3 would just muddy the waters.

-Erik

>>
>>
>> -m
>>
>>
>> I guess it comes down to how fast folks want it.
>>
>> [0] https://review.openstack.org/#/c/464763/
>>
>> On Tue, Jun 6, 2017 at 10:01 AM, Lance Bragstad 
>> wrote:
>>
>> I replied to John, but directly. I'm sending the responses I sent to him
>> but with the intended audience on the thread. Sorry for not catching that
>> earlier.
>>
>>
>> On Fri, May 26, 2017 at 2:44 AM, John Garbutt 
>> wrote:
>>
>> +1 on not forcing Operators to transition to something new twice, even if
>> we did go for option 3.
>>
>>
>> The more I think about this, the more it worries me from a developer
>> perspective. If we ended up going with option 3, then we'd be supporting
>> both methods of elevating privileges. That means two paths for doing the
>> same thing in keystone. It also means oslo.context, keystonemiddleware, or
>> any other library consuming tokens that needs to understand elevated
>> privileges needs to understand both approaches.
>>
>>
>>
>> Do we have an agreed non-disruptive upgrade path mapped out yet? (For any
>> of the options) We spoke about fallback rules you pass but with a warning to
>> give us a smoother transition. I think that's my main objection with the
>> existing patches, having to tell all admins to get their token for a
>> different project, and give them roles in that project, all before being
>> able to upgrade.
>>
>>
>> Thanks for bringing up the upgrade case! You've kinda described an upgrade
>> for option 1. This is what I was thinking for option 2:
>>
>> - deployment upgrades to a release that supports global role assignments
>> - operator creates a set of global roles (i.e. global_admin)
>> - operator grants global roles to various people that need it (i.e. all
>> admins)
>> - operator informs admins to create globally scoped tokens
>> - operator rolls out necessary policy changes
>>
>> If I'm thinking about this properly, nothing would change at the
>> project-scope level for existing users (who don't need a global role
>> assignment). I'm hoping someone can help firm ^ that up or improve it if
>> needed.
>>
>>
>>
>> Thanks,
>> johnthetubaguy
>>
>> On Fri, 26 May 2017 at 08:09, Belmiro Moreira
>>  wrote:
>>
>> Hi,
>> thanks for bringing this into discussion in the Operators list.
>>
>> Options 1 and 2 are not complementary but completely different.
>> So, considering "Option 2" and the goal to target it for Queens I would
>> prefer not going into a migration path in
>> Pike and then again in Queens.
>>
>> Belmiro
>>
>> On Fri, May 26, 2017 at 2:52 AM, joehuang  wrote:
>>
>> I think option 2 is better.
>>
>> Best Regards
>> Chaoyi Huang (joehuang)
>> 
>> From: Lance Bragstad [lbrags...@gmail.com]
>> Sent: 25 May 2017 3:47
>> To: OpenStack Development Mailing List (not for usage questions);
>> openstack-operat...@lists.openstack.org
>> Subject: Re: [openstack-dev]
>> [keystone][nova][cinder][glance][neutron][horizon][policy] defining
>> admin-ness
>>
>> I'd like to fill in a little more context here. I see three options with
>> the current two proposals.
>>
>> Option 1
>>
>> Use a special admin project to denote elevated privileges. For those
>> unfamiliar with the approach, it would rely on every deployment having an
>> "admin" project defined in configuration [0].
>>
>> How it works:
>>
>> Role assignments on this project represent global scope which is denoted
>> by a boolean attribute 

Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-06-06 Thread Lance Bragstad
On Tue, Jun 6, 2017 at 3:06 PM, Marc Heckmann 
wrote:

> Hi,
>
> On Tue, 2017-06-06 at 10:09 -0500, Lance Bragstad wrote:
>
> Also, with all the people involved with this thread, I'm curious what the
> best way is to get consensus. If I've tallied the responses properly, we
> have 5 in favor of option #2 and 1 in favor of option #3. This week is spec
> freeze for keystone, so I see a slim chance of this getting committed to
> Pike [0]. If we do have spare cycles across the team we could start working
> on an early version and get eyes on it. If we straighten out everyone's
> concerns early we could land option #2 early in Queens.
>
>
> I was the only one in favour of option 3 only because I've spent a bunch
> of time playing with option #1 in the past. As I mentioned previously in
> the thread, if #2 is more in line with where the project is going, then I'm
> all for it. At this point, the admin scope issue has been around long
> enough that Queens doesn't seem that far off.
>

From an administrative point-of-view, would you consider option #1 or
option #2 to be better long term?


>
> -m
>
>
> I guess it comes down to how fast folks want it.
>
> [0] https://review.openstack.org/#/c/464763/
>
> On Tue, Jun 6, 2017 at 10:01 AM, Lance Bragstad 
> wrote:
>
> I replied to John, but directly. I'm sending the responses I sent to him
> but with the intended audience on the thread. Sorry for not catching that
> earlier.
>
>
> On Fri, May 26, 2017 at 2:44 AM, John Garbutt 
> wrote:
>
> +1 on not forcing Operators to transition to something new twice, even if
> we did go for option 3.
>
>
> The more I think about this, the more it worries me from a developer
> perspective. If we ended up going with option 3, then we'd be supporting
> both methods of elevating privileges. That means two paths for doing the
> same thing in keystone. It also means oslo.context, keystonemiddleware, or
> any other library consuming tokens that needs to understand elevated
> privileges needs to understand both approaches.
>
>
>
> Do we have an agreed non-disruptive upgrade path mapped out yet? (For any
> of the options) We spoke about fallback rules you pass but with a warning
> to give us a smoother transition. I think that's my main objection with the
> existing patches, having to tell all admins to get their token for a
> different project, and give them roles in that project, all before being
> able to upgrade.
>
>
> Thanks for bringing up the upgrade case! You've kinda described an upgrade
> for option 1. This is what I was thinking for option 2:
>
> - deployment upgrades to a release that supports global role assignments
> - operator creates a set of global roles (i.e. global_admin)
> - operator grants global roles to various people that need it (i.e. all
> admins)
> - operator informs admins to create globally scoped tokens
> - operator rolls out necessary policy changes
>
> If I'm thinking about this properly, nothing would change at the
> project-scope level for existing users (who don't need a global role
> assignment). I'm hoping someone can help firm ^ that up or improve it if
> needed.
>
>
>
> Thanks,
> johnthetubaguy
>
> On Fri, 26 May 2017 at 08:09, Belmiro Moreira <
> moreira.belmiro.email.li...@gmail.com> wrote:
>
> Hi,
> thanks for bringing this into discussion in the Operators list.
>
> Options 1 and 2 are not complementary but completely different.
> So, considering "Option 2" and the goal to target it for Queens I would
> prefer not going into a migration path in
> Pike and then again in Queens.
>
> Belmiro
>
> On Fri, May 26, 2017 at 2:52 AM, joehuang  wrote:
>
> I think option 2 is better.
>
> Best Regards
> Chaoyi Huang (joehuang)
> --
> *From:* Lance Bragstad [lbrags...@gmail.com]
> *Sent:* 25 May 2017 3:47
> *To:* OpenStack Development Mailing List (not for usage questions);
> openstack-operat...@lists.openstack.org
> *Subject:* Re: [openstack-dev] 
> [keystone][nova][cinder][glance][neutron][horizon][policy]
> defining admin-ness
>
> I'd like to fill in a little more context here. I see three options with
> the current two proposals.
>
> *Option 1*
>
> Use a special admin project to denote elevated privileges. For those
> unfamiliar with the approach, it would rely on every deployment having an
> "admin" project defined in configuration [0].
>
> *How it works:*
>
> Role assignments on this project represent global scope which is denoted
> by a boolean attribute in the token response. A user with an 'admin' role
> assignment on this project is equivalent to the global or cloud
> administrator. Ideally, if a user has a 'reader' role assignment on the
> admin project, they could have access to list everything within the
> deployment, pending all the proper changes are made across the various
> services. The workflow requires a special project for any sort of elevated
> privilege.
>
> Pros:
> - Almost 

[openstack-dev] [nova][out-of-tree drivers] InstanceInfo/get_info getting a haircut

2017-06-06 Thread Eric Fried
If you don't maintain an out-of-tree nova compute driver, you can
probably hit Delete now.

A proposed change [1] gets rid of some unused fields from
nova.virt.hardware.InstanceInfo, which is the thing returned by
ComputeDriver.get_info().

I say "unused" in the context of the nova project.  If you have a
derived project that's affected by this, feel free to respond or reach
out to me (efried) on #openstack-nova to discuss.

This change is planned for Pike only.

[1] https://review.openstack.org/#/c/471146/

Thanks,
Eric

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [upgrades][skip-level][leapfrog] - RFC - Skipping releases when upgrading

2017-06-06 Thread Marios Andreou
On Fri, May 26, 2017 at 4:55 AM, Carter, Kevin  wrote:
>
> Hello Stackers,
>
>
Hi Kevin, all,

apologies for the very late response here - fwiw I was working at a remote
location all of last week and am catching up still. I was not at the PTG or
part of the original conversation but this thread && etherpad have been
very helpful so thank you very much for sharing. Mostly replying to say
'this is something TripleO/upgrades are interested in too' - obviously not
for the P cycle - and some thoughts on how TripleO is doing upgrades today.

Big +1 to David Simard's point about 'Making N to N+1 upgrades seamless and
work well is already challenging
today ' - ++ to that from our experience. Besides anything else, going
between versions we've also had to change the workflow itself (docs @ [0]
include a link to the composable services spec that explains why the
workflow had to change for Newton to Ocata upgrades). The point is we are
very much still working towards a seamless upgrades experience - we *are*
improving on each release most notably N..O - considering more pre-upgrade
validations for example and trying to minimize service downtime. Having
said that some more comments inline to the goal of skipping upgrades:


> As I'm sure many of you know there was a talk about doing "skip-level"[0]
> upgrades at the OpenStack Summit which quite a few folks were interested
> in. Today many of the interested parties got together and talked about
> doing more of this in a formalized capacity. Essentially we're looking for
> cloud upgrades with the possibility of skipping releases, ideally enabling
> an N+3 upgrade. In our opinion it would go a very long way to solving cloud
> consumer and deployer problems it folks didn't have to deal with an upgrade
> every six months. While we talked about various issues and some of the
> current approaches being kicked around we wanted to field our general chat
> to the rest of the community and request input from folks that may have
> already fought such a beast. If you've taken on an adventure like this how
> did you approach it? Did it work? Any known issues, gotchas, or things
> folks should be generally aware of?
>
>
> During our chat today we generally landed on an in-place upgrade with
> known API service downtime and little (at least as little as possible) data
> plane downtime. The process discussed was basically:
> a1. Create utility "thing-a-me" (container, venv, etc) which contains the
> required code to run a service through all of the required upgrades.
> a2. Stop service(s).
> a3. Run migration(s)/upgrade(s) for all releases using the utility
> "thing-a-me".
> a4. Repeat for all services.
>
> b1. Once all required migrations are complete run a deployment using the
> target release.
> b2. Ensure all services are restarted.
> b3. Ensure cloud is functional.
> b4. profit!
>
> Obviously, there's a lot of hand waving here but such a process is being
> developed by the OpenStack-Ansible project[1]. Currently, the OSA tooling
> will allow deployers to upgrade from Juno/Kilo to Newton using Ubuntu
> 14.04. While this has worked in the lab, it's early in development (YMMV).
> Also, the tooling is not very general purpose or portable outside of OSA
> but it could serve as a guide or just a general talking point. Are there
> other tools out there that solve for the multi-release upgrade? Are there
> any folks that might want to share their expertise? Maybe a process outline
> that worked? Best practices? Do folks believe tools are the right way to
> solve this or would comprehensive upgrade documentation be better for the
> general community?
>
>
What about packages - what repos will we set up on these nodes ... will
they jump directly from the current version to the latest of the target,
e.g. N+2? Is that possible - I mean we may have to consider any
version-specific packaging tasks. In TripleO we are actually using ansible
tasks defined per service manifest, e.g. the neutron l3 agent @ [1], to stop
all the things and then we rely on puppet (puppet-tripleo and
service-specific puppet modules) to update packages, run database migrations
e.g. [2] and start all the things again (the exception to this general rule
of ansible down/puppet up is some core services, which we want to recover
immediately rather than wait for the puppet run, like at [3] for example
rabbit).
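
For anyone not familiar with those service manifests, the per-service
upgrade_tasks are plain ansible tasks, roughly along these lines (a minimal
sketch, not the actual template):

  upgrade_tasks:
    - name: Stop neutron_l3_agent service
      tags: step1
      service: name=neutron-l3-agent state=stopped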

I am not by any stretch an expert on the database migrations so I leave that
discussion to more qualified folks, but just from a general scaling point of
view trying to maintain a single repo for all the migration things for all
services doesn't work, so +1 to the others here advocating that the migrations
live with the service and should be compiled/applied by tooling at run time
- whether it is a container thing-a-me or puppet/whatever. For TripleO you
could even override the puppet PostDeploy steps and run Ansible tasks
instead if that accomplished what you needed for the upgrades in your
service list. In fact the TripleO Ocata to 

Re: [openstack-dev] [tripleo] Deprecated Parameters Warning

2017-06-06 Thread Emilien Macchi
On Tue, Jun 6, 2017 at 6:53 AM, Saravanan KR  wrote:
> Hello,
>
> I am working on a patch [1] to list the deprecated parameters of the
> current plan. It depends on a heat patch[2] which provides
> parameter_group support for nested stacks. The change is to add a new
> workflow to analyze the plan templates and find out the list of
> deprecated parameters, identified by parameter_groups with label
> "deprecated".
>
> This workflow can be used by CLI and UI to provide a warning to the
> user about the deprecated parameters. This is only the listing;
> changes are required in tripleoclient to invoke it and provide a
> warning. I am sending this mail to update the group, to bring
> awareness of the parameter deprecation.

I find this feature very helpful, especially with all the THT
parameters that we have and that are moving quite fast over the
cycles.
Thanks for working on it!

> Regards,
> Saravanan KR
>
> [1] https://review.openstack.org/#/c/463949/
> [2] https://review.openstack.org/#/c/463941/
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [MassivelyDistributed] IRC Meeting tomorrow 15:00 UTC

2017-06-06 Thread lebre . adrien
Dear all, 

A gentle reminder for our meeting tomorrow. 
As usual, the agenda is available at: 
https://etherpad.openstack.org/p/massively_distributed_ircmeetings_2017 (line 
675)
Please feel free to add items.

Best, 
ad_rien_

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-06 Thread Emilien Macchi
Following-up the session that we had in Boston:
https://etherpad.openstack.org/p/BOS-forum-future-of-configuration-management

Here's an update on where we are and what is being done.


== Machine Readable Sample Config

Ben's spec has been merged: https://review.openstack.org/#/c/440835/
And also the code which implements it: https://review.openstack.org/#/c/451081/
He's now working on the documentation on how to use the feature.

$ oslo-config-generator --namespace keystone --format yaml > keystone.yaml

Here's an example of the output for Keystone config: https://clbin.com/EAfFM
This feature was asked for at the PTG, and it's already done!


== Pluggable drivers for oslo.config

Doug's spec has been well written and the feedback from Summit and the
review was taken into account: https://review.openstack.org/#/c/454897/
It's now paused because we think we could use confd (with etcd driver)
to generate configuration files.

Building on the work done by Ben for the Machine Readable Sample Config, a
tool provided in oslo.config could generate confd templates for all services
(Keystone, Nova, etc) with all the options available for a namespace.
We could have packaging builds (e.g. RDO distgit) use that tooling
when building packages, so we could ship confd templates in addition to
ini configuration files.
When services start (e.g. in containers), confd would generate
configuration files from the templates that are part of the container,
and read the values from etcd.
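
As a rough illustration (the key names and paths below are made up, not taken
from any existing tooling), the generated artifacts could look like:

  # /etc/confd/conf.d/keystone.toml
  [template]
  src = "keystone.conf.tmpl"
  dest = "/etc/keystone/keystone.conf"
  keys = ["/keystone"]

  # /etc/confd/templates/keystone.conf.tmpl
  [database]
  connection = {{getv "/keystone/database/connection"}}

  # values populated into etcd at deploy time
  etcdctl put /keystone/database/connection mysql+pymysql://keystone:secret@db/keystone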

The benefit of doing this is that very little work is required in
oslo.config to make this happen (only a tool to generate confd
templates). It could be a first iteration.
Another benefit is that confd will generate a configuration file when
the application starts. So if etcd is down *after* the app
startup, it shouldn't break a service restart as long as we don't ask confd
to re-generate the config. That's good for operators who were concerned
that the infrastructure would rely on etcd. In that case, we
would only need etcd at the initial deployment (and during lifecycle
actions like upgrades, etc).

The downside is that, in the case of containers, they would still have
a configuration file within the container, whereas the whole goal of this
feature was to externalize configuration data and stop having
configuration files.


== What's next

I see 2 short-term actions that we can work on:

1) Decide whether or not the confd solution would be acceptable as a
start. I'm asking the Kolla, TripleO, Helm, and Ansible projects if they
would be willing to use this feature. I'm also asking operators to give
feedback on it.

2) Investigate how to expose the parameters generated by Ben's work on
the Machine Readable Sample Config to users (without having to
manually maintain all options) - I think this has to be solved on the
installers' side, but I might be wrong; and also investigate how to
populate parameter data into etcd. This tool could probably be provided by
oslo.config.



Any feedback from folks working on installers or from operators would
be more than welcome!

Thanks,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [reno] we need help reviewing patches to reno

2017-06-06 Thread Vikash Kumar
Hi Doug,

 Will participate in reno reviews.

Regards,
Vikash

*"Without requirements or design, programming is the art of adding bugs to
an empty text file."- Louis Srygley*

On Tue, Jun 6, 2017 at 8:41 PM, Julien Danjou  wrote:

> On Tue, Jun 06 2017, Doug Hellmann wrote:
>
> > I am looking for one or two people interested in learning about how reno
> > works to help with reviews. If you like graph traversal algorithms
> > and/or text processing, have a look at the code in the openstack/reno
> > repository and let me know if you're interested in helping out.
>
> I've added it to my list of watched projects. I'll try to review patches
> as they come in. I use reno and I like it. :)
>
> --
> Julien Danjou
> ;; Free Software hacker
> ;; https://julien.danjou.info
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] [cinder] [neutron] - Global Request ID progress

2017-06-06 Thread Armando M.
On 6 June 2017 at 04:49, Sean Dague  wrote:

> Some good progress has been made so far on Global Request ID work in the
> core IaaS layer, here is where we stand.
>
> STATUS
>
> oslo.context / oslo.middleware - everything DONE
>
> devstack logging additional global_request_id - DONE
>
> cinder:
> - client supports global_request_id - DONE
> - Cinder calls Nova with global_request_id - TODO (waiting on Novaclient
> release)
> - Cinder calls Glance with global_request_id - TODO
>
> neutron:
> - client supports global_request_id - IN PROGRESS (this landed,
> released, but the neutron client release had to be blocked for unrelated
> issues).
>

The ban has been reverted [1] so this should be DONE (again)

[1] https://review.openstack.org/#/c/470047/


> - Neutron calls Nova with global_request_id - TODO (waiting on
> Novaclient release)
>
> nova:
> - Convert to oslo.middleware (to accept global_request_id) - DONE
> - client supports global_request_id - IN PROGRESS (waiting for release
> here - https://review.openstack.org/#/c/471323/)
> - Nova calls cinder with global_request_id - DONE
> - Nova calls neutron with global_request_id - TODO (waiting on working
> neutronclient release)
> - Nova calls Glance with global request id - IN PROGRESS (review needs
> final +2 here https://review.openstack.org/#/c/467242/)
>
> glance:
> - client supports global_request_id - DONE
> - Glance supports setting global_request_id - IN REVIEW
> (https://review.openstack.org/#/c/468443/) *(some debate on this).
>
>
> Everything except the last glance change is uncontroversial, and it's
> just mechanics and project management to get things through in the
> correct order.
>
>
> The Glance support for global_request_id has hit a bump in the review
> process as there is a concern that it's changing the API. Though from an
> end user perspective that's not happening, it's just changing which
> field things get logged into. We'll see if we can work through that.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] KVM Forum 2017: Call For Participation

2017-06-06 Thread Daniel P. Berrange
A quick reminder that the deadline for submissions to the KVM Forum
2017 is just 10 days away now, June 15.

On Tue, May 09, 2017 at 01:50:52PM +0100, Daniel P. Berrange wrote:
> 
> KVM Forum 2017: Call For Participation
> October 25-27, 2017 - Hilton Prague - Prague, Czech Republic
> 
> (All submissions must be received before midnight June 15, 2017)
> =
> 
> KVM Forum is an annual event that presents a rare opportunity
> for developers and users to meet, discuss the state of Linux
> virtualization technology, and plan for the challenges ahead. 
> We invite you to lead part of the discussion by submitting a speaking
> proposal for KVM Forum 2017.
> 
> At this highly technical conference, developers driving innovation
> in the KVM virtualization stack (Linux, KVM, QEMU, libvirt) can
> meet users who depend on KVM as part of their offerings, or to
> power their data centers and clouds.
> 
> KVM Forum will include sessions on the state of the KVM
> virtualization stack, planning for the future, and many
> opportunities for attendees to collaborate. As we celebrate ten years
> of KVM development in the Linux kernel, KVM continues to be a
> critical part of the FOSS cloud infrastructure.
> 
> This year, KVM Forum is joining Open Source Summit in Prague, 
> Czech Republic. Selected talks from KVM Forum will be presented on
> Wednesday October 25 to the full audience of the Open Source Summit.
> Also, attendees of KVM Forum will have access to all of the talks from
> Open Source Summit on Wednesday.
> 
> http://events.linuxfoundation.org/cfp
> 
> Suggested topics:
> * Scaling, latency optimizations, performance tuning, real-time guests
> * Hardening and security
> * New features
> * Testing
> 
> KVM and the Linux kernel:
> * Nested virtualization
> * Resource management (CPU, I/O, memory) and scheduling
> * VFIO: IOMMU, SR-IOV, virtual GPU, etc.
> * Networking: Open vSwitch, XDP, etc.
> * virtio and vhost
> * Architecture ports and new processor features
> 
> QEMU:
> * Management interfaces: QOM and QMP
> * New devices, new boards, new architectures
> * Graphics, desktop virtualization and virtual GPU
> * New storage features
> * High availability, live migration and fault tolerance
> * Emulation and TCG
> * Firmware: ACPI, UEFI, coreboot, U-Boot, etc.
> 
> Management and infrastructure
> * Managing KVM: Libvirt, OpenStack, oVirt, etc.
> * Storage: Ceph, Gluster, SPDK, etc.
> * Network Function Virtualization: DPDK, OPNFV, OVN, etc.
> * Provisioning
> 
> 
> ===
> SUBMITTING YOUR PROPOSAL
> ===
> Abstracts due: June 15, 2017
> 
> Please submit a short abstract (~150 words) describing your presentation
> proposal. Slots vary in length up to 45 minutes. Also include the proposal
> type -- one of:
> - technical talk
> - end-user talk
> 
> Submit your proposal here:
> http://events.linuxfoundation.org/cfp
> Please only use the categories "presentation" and "panel discussion"
> 
> You will receive a notification whether or not your presentation proposal
> was accepted by August 10, 2017.
> 
> Speakers will receive a complimentary pass for the event. In the case
> that your submission has multiple presenters, only the primary speaker for a
> proposal will receive a complimentary event pass. For panel discussions, all
> panelists will receive a complimentary event pass.
> 
> TECHNICAL TALKS
> 
> A good technical talk should not just report on what has happened over
> the last year; it should present a concrete problem and how it impacts
> the user and/or developer community. Whenever applicable, focus on
> work that needs to be done, difficulties that haven't yet been solved,
> and on decisions that other developers should be aware of. Summarizing
> recent developments is okay but it should not be more than a small
> portion of the overall talk.
> 
> END-USER TALKS
> 
> One of the big challenges as developers is to know what, where and how
> people actually use our software. We will reserve a few slots for end
> users talking about their deployment challenges and achievements.
> 
> If you are using KVM in production you are encouraged to submit a speaking
> proposal. Simply mark it as an end-user talk. As an end user, this is a
> unique opportunity to get your input to developers.
> 
> HANDS-ON / BOF SESSIONS
> 
> We will reserve some time for people to get together and discuss
> strategic decisions as well as other topics that are best solved within
> smaller groups.
> 
> These sessions will be announced during the event. If you are interested
> in organizing such a session, please add it to the list at
> 
>   http://www.linux-kvm.org/page/KVM_Forum_2017_BOF
> 
> Let people who you think might be interested know about your BOF, and
> encourage them to add their names to the wiki page as well. Please try to
> add your ideas to the list before KVM Forum 

Re: [openstack-dev] [reno] we need help reviewing patches to reno

2017-06-06 Thread Julien Danjou
On Tue, Jun 06 2017, Doug Hellmann wrote:

> I am looking for one or two people interested in learning about how reno
> works to help with reviews. If you like graph traversal algorithms
> and/or text processing, have a look at the code in the openstack/reno
> repository and let me know if you're interested in helping out.

I've added it to my list of watched projects. I'll try to review patches
as they come in. I use reno and I like it. :)

-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-06-06 Thread Lance Bragstad
Also, with all the people involved with this thread, I'm curious what the
best way is to get consensus. If I've tallied the responses properly, we
have 5 in favor of option #2 and 1 in favor of option #3. This week is spec
freeze for keystone, so I see a slim chance of this getting committed to
Pike [0]. If we do have spare cycles across the team we could start working
on an early version and get eyes on it. If we straighten out everyone's
concerns early we could land option #2 early in Queens.

I guess it comes down to how fast folks want it.

[0] https://review.openstack.org/#/c/464763/

On Tue, Jun 6, 2017 at 10:01 AM, Lance Bragstad  wrote:

> I replied to John, but directly. I'm sending the responses I sent to him
> but with the intended audience on the thread. Sorry for not catching that
> earlier.
>
>
> On Fri, May 26, 2017 at 2:44 AM, John Garbutt 
> wrote:
>
>> +1 on not forcing Operators to transition to something new twice, even if
>> we did go for option 3.
>>
>
> The more I think about this, the more it worries me from a developer
> perspective. If we ended up going with option 3, then we'd be supporting
> both methods of elevating privileges. That means two paths for doing the
> same thing in keystone. It also means oslo.context, keystonemiddleware, or
> any other library consuming tokens that needs to understand elevated
> privileges needs to understand both approaches.
>
>
>>
>> Do we have an agreed non-disruptive upgrade path mapped out yet? (For
>> any of the options) We spoke about fallback rules you pass but with a
>> warning to give us a smoother transition. I think that's my main objection
>> with the existing patches, having to tell all admins to get their token for
>> a different project, and give them roles in that project, all before being
>> able to upgrade.
>>
>
> Thanks for bringing up the upgrade case! You've kinda described an upgrade
> for option 1. This is what I was thinking for option 2:
>
> - deployment upgrades to a release that supports global role assignments
> - operator creates a set of global roles (i.e. global_admin)
> - operator grants global roles to various people that need it (i.e. all
> admins)
> - operator informs admins to create globally scoped tokens
> - operator rolls out necessary policy changes
>
> If I'm thinking about this properly, nothing would change at the
> project-scope level for existing users (who don't need a global role
> assignment). I'm hoping someone can help firm ^ that up or improve it if
> needed.
>
>
>>
>> Thanks,
>> johnthetubaguy
>>
>> On Fri, 26 May 2017 at 08:09, Belmiro Moreira <
>> moreira.belmiro.email.li...@gmail.com> wrote:
>>
>>> Hi,
>>> thanks for bringing this into discussion in the Operators list.
>>>
>>> Options 1 and 2 are not complementary but completely different.
>>> So, considering "Option 2" and the goal to target it for Queens I would
>>> prefer not going into a migration path in
>>> Pike and then again in Queens.
>>>
>>> Belmiro
>>>
>>> On Fri, May 26, 2017 at 2:52 AM, joehuang  wrote:
>>>
 I think option 2 is better.

 Best Regards
 Chaoyi Huang (joehuang)
 --
 *From:* Lance Bragstad [lbrags...@gmail.com]
 *Sent:* 25 May 2017 3:47
 *To:* OpenStack Development Mailing List (not for usage questions);
 openstack-operat...@lists.openstack.org
 *Subject:* Re: [openstack-dev] 
 [keystone][nova][cinder][glance][neutron][horizon][policy]
 defining admin-ness

 I'd like to fill in a little more context here. I see three options
 with the current two proposals.

 *Option 1*

 Use a special admin project to denote elevated privileges. For those
 unfamiliar with the approach, it would rely on every deployment having an
 "admin" project defined in configuration [0].

 *How it works:*

 Role assignments on this project represent global scope which is
 denoted by a boolean attribute in the token response. A user with an
 'admin' role assignment on this project is equivalent to the global or
 cloud administrator. Ideally, if a user has a 'reader' role assignment on
 the admin project, they could have access to list everything within the
 deployment, pending all the proper changes are made across the various
 services. The workflow requires a special project for any sort of elevated
 privilege.

 Pros:
 - Almost all the work is done to make keystone understand the admin
 project, there are already several patches in review to other projects to
 consume this
 - Operators can create roles and assign them to the admin_project as
 needed after the upgrade to give proper global scope to their users

 Cons:
 - All global assignments are linked back to a single project
 - Describing the flow is confusing because in order to give someone
 global access you have to give them 

Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-06-06 Thread Matthew Thode
On 06/06/2017 03:21 AM, Dougal Matthews wrote:
> 
> 
> On 31 May 2017 at 09:35, Renat Akhmerov  > wrote:
> 
> 
> On 31 May 2017, 15:08 +0700, Thierry Carrez  >, wrote:
>>
>>> This has hit us with the mistral and tripleo projects particularly
>>> (tagged in the title). They disallow pbr-3.0.0 and in the case of
>>> mistral sqlalchemy updates.
>>>
>>> [mistral]
>>> mistral - blocking sqlalchemy - milestones
>>
>> I wonder why mistral is in requirements. Looks like tripleo-common is
>> depending on it ? Could someone shine some light on this ? It
>> might just
>> mean mistral-lib is missing a few functions, and switching the release
>> model of mistral itself might be overkill ?
> 
> This dependency is currently needed to create custom Mistral
> actions. It was originally not the best architecture and one of the
> reasons to create 'mistral-lib' was to get rid of the dependency on
> ‘mistral’ by moving all that’s needed for creating actions into a
> lib (plus something else). The thing is that the transition is not
> over and APIs that we put into ‘mistral-lib’ are still experimental.
> The plan is to complete this initiative, including docs and needed
> refactoring, till the end of Pike.
> 
> What possible negative consequences may we have if we switch release
> model to "cycle-with-intermediary”?
> 
> 
> I don't fully understand this, but I have one concern that I'll try and
> explain.
> 
> Mistral master is developed against master of other OpenStack projects
> (Keystone for auth, and all projects for OpenStack actions). If we were
> to release 5.0 today, it would mean that Mistral has a release that is
> tested against unreleased Pike but would need to work with Ocata stable
> releases (and AFAIK we do not tested Mistral master with Ocata Keystone
> etc.)
> 

This is true and is what makes this hard, but the other
cycle-with-intermediary projects do the same thing (they cut releases while
developing against other projects' master branches). So as long as your
testing is good I don't see a problem.

> We are very close to breaking the link between tripleo-common and
> mistral - I would favour that approach and would prefer a nasty hack to
> rush that along rather than changing Mistrals release cycle. I expect to
> remove mistral from requirements.txt after the transition anyway.
> 

This would be best, but how long will this take?  How long will mistral
be holding back sqlalchemy updates?

> What needs to happen to remove the dep?
> - RDO promotion to get a new mistral-lib release
> - After promotion this should start passing
> https://review.openstack.org/#/c/454719/
> - Port this functionality to tripleo-common
> https://github.com/openstack/mistral/blob/master/mistral/utils/openstack/keystone.py
> (we were planning on moving this to mistral-extra, but it could go into
> tripleo-common as a short term solution)
> 

Thanks for the update.

>  
> 
> 
> Renat Akhmerov
> @Nokia
> 


-- 
Matthew Thode (prometheanfire)



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-06-06 Thread Lance Bragstad
I replied to John, but only to him directly. I'm re-sending the responses I
sent to him, this time with the intended audience on the thread. Sorry for not
catching that earlier.


On Fri, May 26, 2017 at 2:44 AM, John Garbutt  wrote:

> +1 on not forcing Operators to transition to something new twice, even if
> we did go for option 3.
>

The more I think about this, the more it worries me from a developer
perspective. If we ended up going with option 3, then we'd be supporting
both methods of elevating privileges. That means two paths for doing the
same thing in keystone. It also means oslo.context, keystonemiddleware, or
any other library consuming tokens that needs to understand elevated
privileges needs to understand both approaches.


>
> Do we have an agreed non-disruptive upgrade path mapped out yet? (For any
> of the options.) We spoke about fallback rules that pass but emit a warning,
> to give us a smoother transition. I think that's my main objection with the
> existing patches, having to tell all admins to get their token for a
> different project, and give them roles in that project, all before being
> able to upgrade.
>

Thanks for bringing up the upgrade case! You've kinda described an upgrade
for option 1. This is what I was thinking for option 2:

- deployment upgrades to a release that supports global role assignments
- operator creates a set of global roles (i.e. global_admin)
- operator grants global roles to various people that need it (i.e. all
admins)
- operator informs admins to create globally scoped tokens
- operator rolls out necessary policy changes

If I'm thinking about this properly, nothing would change at the
project-scope level for existing users (who don't need a global role
assignment). I'm hoping someone can help firm ^ that up or improve it if
needed.
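
To make the "globally scoped tokens" step a little more concrete, here is a
rough sketch of what such an auth request could look like. The scope syntax
below is purely hypothetical; it is exactly the part keystone still needs to
define:

    # Purely illustrative - the "global" scope below is an assumed syntax,
    # not an agreed keystone API; today you would scope to a project instead.
    globally_scoped_auth = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": "admin",
                        "domain": {"id": "default"},
                        "password": "secret",
                    },
                },
            },
            # hypothetical: ask for global scope instead of a project scope
            "scope": "global",
        }
    }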


>
> Thanks,
> johnthetubaguy
>
> On Fri, 26 May 2017 at 08:09, Belmiro Moreira <
> moreira.belmiro.email.li...@gmail.com> wrote:
>
>> Hi,
>> thanks for bringing this into discussion in the Operators list.
>>
>> Options 1 and 2 are not complementary but completely different.
>> So, considering "Option 2" and the goal to target it for Queens I would
>> prefer not going into a migration path in
>> Pike and then again in Queens.
>>
>> Belmiro
>>
>> On Fri, May 26, 2017 at 2:52 AM, joehuang  wrote:
>>
>>> I think option 2 is better.
>>>
>>> Best Regards
>>> Chaoyi Huang (joehuang)
>>> --
>>> *From:* Lance Bragstad [lbrags...@gmail.com]
>>> *Sent:* 25 May 2017 3:47
>>> *To:* OpenStack Development Mailing List (not for usage questions);
>>> openstack-operat...@lists.openstack.org
>>> *Subject:* Re: [openstack-dev] [keystone][nova][cinder][
>>> glance][neutron][horizon][policy] defining admin-ness
>>>
>>> I'd like to fill in a little more context here. I see three options with
>>> the current two proposals.
>>>
>>> *Option 1*
>>>
>>> Use a special admin project to denote elevated privileges. For those
>>> unfamiliar with the approach, it would rely on every deployment having an
>>> "admin" project defined in configuration [0].
>>>
>>> *How it works:*
>>>
>>> Role assignments on this project represent global scope which is denoted
>>> by a boolean attribute in the token response. A user with an 'admin' role
>>> assignment on this project is equivalent to the global or cloud
>>> administrator. Ideally, if a user has a 'reader' role assignment on the
>>> admin project, they could have access to list everything within the
>>> deployment, provided all the proper changes are made across the various
>>> services. The workflow requires a special project for any sort of elevated
>>> privilege.
>>>
>>> Pros:
>>> - Almost all the work is done to make keystone understand the admin
>>> project, there are already several patches in review to other projects to
>>> consume this
>>> - Operators can create roles and assign them to the admin_project as
>>> needed after the upgrade to give proper global scope to their users
>>>
>>> Cons:
>>> - All global assignments are linked back to a single project
>>> - Describing the flow is confusing because in order to give someone
>>> global access you have to give them a role assignment on a very specific
>>> project, which seems like an anti-pattern
>>> - We currently don't allow some things to exist in the global sense
>>> (i.e. I can't launch instances without tenancy), so the admin project
>>> could end up owning resources
>>> - What happens if the admin project disappears?
>>> - Tooling or scripts will be written around the admin project, instead
>>> of treating all projects equally
>>>
>>> *Option 2*
>>>
>>> Implement global role assignments in keystone.
>>>
>>> *How it works:*
>>>
>>> Role assignments in keystone can be scoped to global context. Users can
>>> then ask for a globally scoped token
>>>
>>> Pros:
>>> - This approach represents a more accurate long term vision for role
>>> assignments (at least how we understand it today)
>>> - Operators can create global 

Re: [openstack-dev] [nova][scheduler][placement] Allocating Complex Resources

2017-06-06 Thread Chris Dent

On Mon, 5 Jun 2017, Ed Leafe wrote:


One proposal is to essentially use the same logic in placement
that was used to include that host in those matching the
requirements. In other words, when it tries to allocate the amount
of disk, it would determine that that host is in a shared storage
aggregate, and be smart enough to allocate against that provider.
This was referred to in our discussion as "Plan A".


What would help me is a clearer explanation of whether, and if so, how and
why, "Plan A" doesn't work for nested resource providers.

We can declare that allocating for shared disk is fairly deterministic
if we assume that any given compute node is only associated with one
shared disk provider.

My understanding is this determinism is not the case with nested
resource providers because there's some fairly late in the game
choosing of which pci device or which numa cell is getting used.
The existing resource tracking doesn't have this problem because the
claim of those resources is made very late in the game. <- Is this
correct?

The problem comes into play when we want to claim from the scheduler
(or conductor). Additional information is required to choose which
child providers to use. <- Is this correct?

Plan B overcomes the information deficit by including more
information in the response from placement (as straw-manned in the
etherpad [1]) allowing code in the filter scheduler to make accurate
claims. <- Is this correct?
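
To make that concrete, the extra information would let the scheduler hand
back an allocation that spans more than one provider, something roughly like
the sketch below (the UUIDs and the exact JSON shape are illustrative only):

    # Illustrative sketch: one instance consuming VCPU/RAM from the compute
    # node but DISK_GB from the shared storage provider it is associated with.
    allocation_request = {
        "allocations": [
            {
                "resource_provider": {"uuid": "<compute-node-uuid>"},
                "resources": {"VCPU": 2, "MEMORY_MB": 4096},
            },
            {
                "resource_provider": {"uuid": "<shared-storage-uuid>"},
                "resources": {"DISK_GB": 50},
            },
        ]
    }
    # The scheduler would PUT this (or something like it) back to
    # /allocations/{consumer_uuid} without rediscovering the topology.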

For clarity and completeness in the discussion some questions for
which we have explicit answers would be useful. Some of these may
appear ignorant or obtuse and are mostly things we've been over
before. The goal is to draw out some clear statements in the present
day to be sure we are all talking about the same thing (or get us
there if not) modified for what we know now, compared to what we
knew a week or month ago.

* We already have the information the filter scheduler needs now by
  some other means, right?  What are the reasons we don't want to
  use that anymore?

* Part of the reason for having nested resource providers is because
  it can allow affinity/anti-affinity below the compute node (e.g.,
  workloads on the same host but different numa cells). If I
  remember correctly, the modelling and tracking of this kind of
  information in this way comes out of the time when we imagined the
  placement service would be doing considerably more filtering than
  is planned now. Plan B appears to be an acknowledgement of "on
  some of this stuff, we can't actually do anything but provide you
  some info, you need to decide". If that's the case, is the
  topological modelling on the placement DB side of things solely a
  convenient place to store information? If there were some other
  way to model that topology could things currently being considered
  for modelling as nested providers be instead simply modelled as
  inventories of a particular class of resource?
  (I'm not suggesting we do this, rather that the answer that says
  why we don't want to do this is useful for understanding the
  picture.)

* Does a claim made in the scheduler need to be complete? Is there
  value in making a partial claim from the scheduler that consumes a
  vcpu and some ram, and then in the resource tracker is corrected
  to consume a specific pci device, numa cell, gpu and/or fpga?
  Would this be better or worse than what we have now? Why?

* What is lacking in placement's representation of resource providers
  that makes it difficult or impossible for an allocation against a
  parent provider to be able to determine the correct child
  providers to which to cascade some of the allocation? (And by
  extension make the earlier scheduling decision.)

That's a start. With answers to at least some of these questions I
think the straw man in the etherpad can be more effectively
evaluated. As things stand right now it is a proposed solution
without a clear problem statement. I feel like we could do with a
clearer problem statement.

Thanks.

[1] https://etherpad.openstack.org/p/placement-allocations-straw-man

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] notification update week 23

2017-06-06 Thread Balazs Gibizer

Hi,

After two weeks of silence, here is the status update / focus setting
mail about notification work for week 23.


Bugs

[Medium] https://bugs.launchpad.net/nova/+bug/1657428 The instance 
notifications are sent with inconsistent timestamp format. One of the 
prerequisite patches needs some discussion 
https://review.openstack.org/#/q/topic:bug/1657428


[New] https://bugs.launchpad.net/nova/+bug/1684860 Versioned server 
notifications don't include updated_at
We missed the updated_at field during the transformation of the instance
notifications.



Versioned notification transformation
-
The following patches are waiting for core review:
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/versioned-notification-transformation-pike+label:Code-Review%253E%253D%252B1+label:Verified%253E%253D1+AND+NOT+label:Code-Review%253C0

Searchlight integration
---
bp additional-notification-fields-for-searchlight
~
The patch series adding keypairs, BDMs and tags needs a rebase:
https://review.openstack.org/#/q/topic:bp/additional-notification-fields-for-searchlight+status:open


Small improvements
~~
* https://review.openstack.org/#/c/428199/ Improve assertJsonEqual
error reporting
* https://review.openstack.org/#/c/450787/ remove ugly local import

* https://review.openstack.org/#/q/topic:refactor-notification-samples
Factor out duplicated notification sample data
This is a start of a longer patch series to deduplicate notification
sample data. The third patch already shows how much sample data can be
deleted from nova tree. We added a minimal hand rolled json ref
implementation to notification sample test as the existing python json
ref implementations are not well maintained.
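
For the curious, the hand rolled implementation does roughly the following
(a simplified sketch, not the actual code in the nova tree):

    import json
    import os

    def resolve_refs(node, base_dir):
        # Recursively replace {"$ref": "other_sample.json#"} nodes with the
        # content of the referenced sample file.
        if isinstance(node, dict):
            if '$ref' in node:
                path = os.path.join(base_dir, node['$ref'].split('#')[0])
                with open(path) as f:
                    return resolve_refs(json.load(f), base_dir)
            return {k: resolve_refs(v, base_dir) for k, v in node.items()}
        if isinstance(node, list):
            return [resolve_refs(v, base_dir) for v in node]
        return node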


Weekly meeting
--
The notification subteam holds its weekly meeting on Tuesday 17:00 UTC
on openstack-meeting-4. The next meeting will be held on 6th of June.
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20170606T17

Cheers,
gibi


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [reno] we need help reviewing patches to reno

2017-06-06 Thread Doug Hellmann
I am looking for one or two people interested in learning about how reno
works to help with reviews. If you like graph traversal algorithms
and/or text processing, have a look at the code in the openstack/reno
repository and let me know if you're interested in helping out.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla][tripleo][docker][containers] COW fs for docker images cache and registry, looking for solutions

2017-06-06 Thread Bogdan Dobrelya
Hello all.
I'm researching the subject of 'on-line de-duplication' of the space
consumed by docker images that are cached and also pushed into a local
private registry on the same host, for example by using a COW fs (btrfs/zfs?).
There is related bug for TripleO [0].

Could you please share some ideas or perhaps solutions already
implemented somewhere? It seems there is not much info wrt this topic
available on the web.

[0] https://bugs.launchpad.net/bugs/1694709

-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [os-upstream-institute] Alternating meeting slot

2017-06-06 Thread Ildiko Vancsa
Hi Training Team,

As we have many team members from Europe and Asia we are now switching to 
alternating meeting slots [1] starting from next week.

On even weeks our meeting slot is 0900 UTC on Tuesdays, while on odd weeks it’s 
2000 UTC on Mondays, all on #openstack-meeting-3.

Our next meeting will be on next Tuesday (June 13) at 0900 UTC on the
#openstack-meeting-3 channel.

Thanks,
Ildikó

[1] http://eavesdrop.openstack.org/#OpenStack_Upstream_Institute


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Allocating Complex Resources

2017-06-06 Thread Sylvain Bauza


On 06/06/2017 15:03, Edward Leafe wrote:
> On Jun 6, 2017, at 4:14 AM, Sylvain Bauza wrote:
>>
>> The Plan A option you mention hides the complexity of the
>> shared/non-shared logic, but at the price that it would make scheduling
>> decisions on those criteria impossible unless you put
>> filtering/weighting logic into Placement, which AFAIK we strongly
>> disagree with.
> 
> Not necessarily. Well, not now, for sure, but that’s why we need Traits
> to be integrated into Flavors as soon as possible so that we can make
> requests with qualitative requirements, not just quantitative. When that
> work is done, we can add traits to differentiate local from shared
> storage, just like we have traits to distinguish HDD from SSD. So if a
> VM with only local disk is needed, that will be in the request, and
> placement will never return hosts with shared storage. 
> 

Well, there is a big difference between defining constraints in
flavors and making a general constraint at the filter level, which
operators can opt into via config.

Operators could argue that they would need to update all their N flavors
in order to achieve a strict separation for not-shared-with resource
providers, and that distinction would also leak to users, who would see
flavors that differ only in that aspect.

I'm not saying it's bad to put traits into flavor extra specs, sometimes
that's exactly right, but I do worry about the flavor count explosion
if we begin putting all the filtering logic into extra specs (plus the
fact that it can't be managed through config the way filters can at the
moment).

-Sylvain

> -- Ed Leafe
> 
> 
> 
> 
> 
> 
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Allocating Complex Resources

2017-06-06 Thread Edward Leafe
On Jun 6, 2017, at 4:14 AM, Sylvain Bauza  wrote:
> 
> The Plan A option you mention hides the complexity of the
> shared/non-shared logic but to the price that it would make scheduling
> decisions on those criteries impossible unless you put
> filtering/weighting logic into Placement, which AFAIK we strongly
> disagree with.


Not necessarily. Well, not now, for sure, but that’s why we need Traits to be 
integrated into Flavors as soon as possible so that we can make requests with 
qualitative requirements, not just quantitative. When that work is done, we can 
add traits to differentiate local from shared storage, just like we have traits 
to distinguish HDD from SSD. So if a VM with only local disk is needed, that 
will be in the request, and placement will never return hosts with shared 
storage. 
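
As a purely hypothetical illustration (none of this syntax is settled yet),
a flavor could carry extra specs along these lines:

    # Hypothetical extra specs; the trait-in-flavor syntax and any dedicated
    # "local disk only" trait are assumptions. STORAGE_DISK_SSD is the kind
    # of trait we already have for distinguishing SSD from HDD.
    flavor_extra_specs = {
        "trait:STORAGE_DISK_SSD": "required",
    }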

-- Ed Leafe





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Future of the tripleo-quickstart-utils project

2017-06-06 Thread Raoul Scarazzini
On 17/05/2017 10:47, Bogdan Dobrelya wrote:
[...]> It is a little amusing, as it looks controversial to the
> "Validations before upgrades and updates" effort. Shall we just move the
> tripleo-quickstart-utils back to extras, or to the validations repo, and have
> both issues solved? :)

The reason why this was put outside extras was that basically no one was
looking at the reviews, since the topic was so specific that no one had
the opportunity to test the modifications in the field.
So we decided to move it outside, to be quick and independent with the
reviews.

[...]
> A side note, this peculiar way to use ansible is a deliberate move for
> automatic documenting of reproducing steps. So those jinja templated
> scripts could be as well used aside of the ansible playbooks. It looked
> odd to me as well, but I tend to agree that is an interesting solution
> for automagic documentation builds.

I need to understand this automatic documenting you're writing about in
more depth. Can you give me some tips to fully comprehend what you wrote?

Many thanks, and sorry for the long delay between the answers.

-- 
Raoul Scarazzini
ra...@redhat.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Future of the tripleo-quickstart-utils project

2017-06-06 Thread Raoul Scarazzini
On 17/05/2017 04:01, Emilien Macchi wrote:
> Hey Raoul,
> Thanks for putting this up in the ML. Replying inline:

Sorry for the long delay between the answers, a lot of things going on.

[...]
> I've looked at 
> https://github.com/redhat-openstack/tripleo-quickstart-utils/blob/master/roles/validate-ha/tasks/main.yml
> and I see it's basically a set of tasks that validates that HA is
> working well on the overcloud.
> Despite little things that might be adjusted (calling bash scripts
> from Ansible), I think this role would be a good fit with
> tripleo-validations projects, which is "a collection of Ansible
> playbooks to detect and report potential issues during TripleO
> deployments".

Moving this stuff into the tripleo-validations project would impose a
massive change in how HA validation is done today.
The bash script approach was chosen so that someone can run the
validation even without Ansible. Anyone can add a test by simply
dropping a script inside the test (and recovery) dir.
This is the technical reason behind the choice, and today it is working
well as it is.
So I think that until I can reserve a slot to do this "port", it can
stay where it is today.

>> 2 - stonith-config: to configure STONITH inside an HA env;
[...]> Great, it means we could easily re-use the bits, modulo some technical
> adjustments.

Since we're moving towards integrating STONITH and (hopefully) instance HA
directly inside TripleO, this can stay where it is today; it would be
wasted effort to move it now, since soon we will have the same
functionality directly inside TripleO.

>> There's also a docs related to the Multi Virtual Undercloud project [4]
>> that explains how to have more than one virtual Undercloud on a physical
>> machine to manage more environments from the same place.
> I would suggest to move it to tripleo-docs, so we have a single place for doc.

Action item for me here: move this document under tripleo-docs. I'm
already preparing a review for this.

[...]
> IIRC, everything in this repo could be moved to existing projects in
> TripleO that are already productized, so little efforts would be done.
[...]> Thanks for bringing this up!

Agreed.

Bye,

-- 
Raoul Scarazzini
ra...@redhat.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [Pycharm] Pycharm Debugging

2017-06-06 Thread Giuseppe Di Lena
Missing title
> On 06 Jun 2017, at 13:55, Giuseppe Di Lena wrote:
> 
> Hello guys,
> are some of you using PyCharm to debug and test nova?
> 
> I've cloned the Nova project on my PC, and I want to run some unit tests with
> the debugger.
> I've searched for the past few days, but found nothing.
> 
> Do you have some suggestions?
> 
> Best regards Giuseppe


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [Pycharm]

2017-06-06 Thread Giuseppe Di Lena
Hello guys,
are some of you using PyCharm to debug and test nova?

I've cloned the Nova project on my PC, and I want to run some unit tests with
the debugger.
I've searched for the past few days, but found nothing.

Do you have some suggestions?

Best regards Giuseppe
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] [glance] [cinder] [neutron] - Global Request ID progress

2017-06-06 Thread Sean Dague
Some good progress has been made so far on Global Request ID work in the
core IaaS layer, here is where we stand.

STATUS

oslo.context / oslo.middleware - everything DONE

devstack logging additional global_request_id - DONE

cinder:
- client supports global_request_id - DONE
- Cinder calls Nova with global_request_id - TODO (waiting on Novaclient
release)
- Cinder calls Glance with global_request_id - TODO

neutron:
- client supports global_request_id - IN PROGRESS (this landed,
released, but the neutron client release had to be blocked for unrelated
issues).
- Neutron calls Nova with global_request_id - TODO (waiting on
Novaclient release)

nova:
- Convert to oslo.middleware (to accept global_request_id) - DONE
- client supports global_request_id - IN PROGRESS (waiting for release
here - https://review.openstack.org/#/c/471323/)
- Nova calls cinder with global_request_id - DONE
- Nova calls neutron with global_request_id - TODO (waiting on working
neutronclient release)
- Nova calls Glance with global request id - IN PROGRESS (review needs
final +2 here https://review.openstack.org/#/c/467242/)

glance:
- client supports global_request_id - DONE
- Glance supports setting global_request_id - IN REVIEW
(https://review.openstack.org/#/c/468443/) *(some debate on this).


Everything except the last glance change is uncontroversial, and it's
just mechanics and project management to get things through in the
correct order.
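
For anyone who wants to try it out, the basic idea is just minting an id in
the required format and handing it to the clients that have grown the new
keyword (rough sketch only, check each client for its exact signature):

    import uuid

    # global request ids must match the 'req-$uuid' format or they are ignored
    global_request_id = 'req-%s' % uuid.uuid4()

    # hypothetical call shape; the id is then forwarded on every call that
    # client makes to the remote service:
    # cinder = cinderclient.client.Client('3', session=sess,
    #                                     global_request_id=global_request_id)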


The Glance support for global_request_id has hit a bump in the review
process as there is a concern that it's changing the API. Though from an
end user perspective that's not happening, it's just changing which
field things get logged into. We'll see if we can work through that.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][vlan trunking] Guest networking configuration for vlan trunk

2017-06-06 Thread Bence Romsics
Hi Robert,

I'm late to this thread, but let me add a bit. There was an attempt
for trunk support in nova metadata on the Pike PTG:

https://review.openstack.org/399076

But that was abandoned right after the PTG, because the agreement
seemed to be in favor of putting the trunk details into the upcoming
os-vif object. The os-vif object was supposed to be described in a new
patch set to this change:

https://review.openstack.org/390513

Unfortunately there's not much happening there since. Looking back now
it seems to me that turning the os-vif object into a prerequisite made
this work too big to ever happen. I definitely didn't have the time to
take that on.

But anyway I hope the abandoned spec may provide relevant input to you.

Cheers,
Bence

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] overcloud containers patches todo

2017-06-06 Thread Jiří Stránský

On 5.6.2017 23:52, Dan Prince wrote:

On Mon, 2017-06-05 at 16:11 +0200, Jiří Stránský wrote:

On 5.6.2017 08:59, Sagi Shnaidman wrote:

Hi
I think a "deep dive" about containers in TripleO and some helpful
documentation would help a lot for valuable reviews of these
container
patches. The knowledge gap that's accumulated here is pretty big.


As per last week's discussion [1], I hope this is something I could
do.
I'm drafting a preliminary agenda in this etherpad, feel free to add
more suggestions if i missed something:

https://etherpad.openstack.org/p/tripleo-deep-dive-containers

My current intention is to give a fairly high level view of the TripleO
container land: from deployment, upgrades, debugging failed CI jobs, to
how CI itself was done.

I'm hoping we could make it this Thursday still. If that's too short of
a notice for several folks, or if I hit some trouble with preparation, we
might move it to the 15th. Any feedback is welcome of course.


Nice Jirka. Thanks for organizing this!

Dan


Sure thing. I'll do it on the 15th indeed, as a couple more topics appeared
and I'd like to familiarize myself with some details (alongside doing
normal work :) ).


Jirka





Have a good day,

Jirka



Thanks

On Jun 5, 2017 03:39, "Dan Prince"  wrote:


Hi,

Any help reviewing the following patches for the overcloud
containerization effort in TripleO would be appreciated:

https://etherpad.openstack.org/p/tripleo-containers-todo

If you've got new services related to the containerization
efforts feel
free to add them here too.

Thanks,

Dan


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






Re: [openstack-dev] [tc] revised Postgresql deprecation patch for governance

2017-06-06 Thread Chris Dent


For people who have been following this topic, a reminder that later
today there will be a TC meeting dedicated to discussing the issues
captured by this thread[1], the related thread on active or passive
database approaches[2] and the two reviews about "what to do about
postgreSQL" [3][4].

It will be today (6 June) at 20.00 UTC.

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-May/116642.html
[2] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117148.html
[3] https://review.openstack.org/#/c/427880/
[4] https://review.openstack.org/#/c/465589/

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Allocating Complex Resources

2017-06-06 Thread Sylvain Bauza
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256



On 05/06/2017 23:22, Ed Leafe wrote:
> We had a very lively discussion this morning during the Scheduler
> subteam meeting, which was continued in a Google hangout. The
> subject was how to handle claiming resources when the Resource
> Provider is not "simple". By "simple", I mean a compute node that
> provides all of the resources itself, as contrasted with a compute
> node that uses a shared storage for disk space, or which has
> complex nested relationships with things such as PCI devices or
> NUMA nodes. The current situation is as follows:
> 
> a) scheduler gets a request with certain resource requirements
> (RAM, disk, CPU, etc.) b) scheduler passes these resource
> requirements to placement, which returns a list of hosts (compute
> nodes) that can satisfy the request. c) scheduler runs these
> through some filters and weighers to get a list ordered by best
> "fit" d) it then tries to claim the resources, by posting to
> placement allocations for these resources against the selected
> host e) once the allocation succeeds, scheduler returns that host
> to conductor to then have the VM built
> 
> (some details for edge cases left out for clarity of the overall
> process)
> 
> The problem we discussed comes into play when the compute node
> isn't the actual provider of the resources. The easiest example to
> consider is when the computes are associated with a shared storage
> provider. The placement query is smart enough to know that even if
> the compute node doesn't have enough local disk, it will get it
> from the shared storage, so it will return that host in step b)
> above. If the scheduler then chooses that host, when it tries to
> claim it, it will pass the resources and the compute node UUID back
> to placement to make the allocations. This is the point where the
> current code would fall short: somehow, placement needs to know to
> allocate the disk requested against the shared storage provider,
> and not the compute node.
> 
> One proposal is to essentially use the same logic in placement that
> was used to include that host in those matching the requirements.
> In other words, when it tries to allocate the amount of disk, it
> would determine that that host is in a shared storage aggregate,
> and be smart enough to allocate against that provider. This was
> referred to in our discussion as "Plan A".
> 
> Another proposal involved a change to how placement responds to the
> scheduler. Instead of just returning the UUIDs of the compute nodes
> that satisfy the required resources, it would include a whole bunch
> of additional information in a structured response. A straw man
> example of such a response is here:
> https://etherpad.openstack.org/p/placement-allocations-straw-man.
> This was referred to as "Plan B". The main feature of this approach
> is that part of that response would be the JSON dict for the
> allocation call, containing the specific resource provider UUID for
> each resource. This way, when the scheduler selects a host, it
> would simply pass that dict back to the /allocations call, and
> placement would be able to do the allocations directly against that
> information.
> 
> There was another issue raised: simply providing the host UUIDs
> didn't give the scheduler enough information in order to run its
> filters and weighers. Since the scheduler uses those UUIDs to
> construct HostState objects, the specific missing information was
> never completely clarified, so I'm just including this aspect of
> the conversation for completeness. It is orthogonal to the question
> of how to allocate when the resource provider is not "simple".
> 
> My current feeling is that we got ourselves into our existing mess
> of ugly, convoluted code when we tried to add these complex
> relationships into the resource tracker and the scheduler. We set
> out to create the placement engine to bring some sanity back to how
> we think about things we need to virtualize. I would really hate to
> see us make the same mistake again, by adding a good deal of
> complexity to handle a few non-simple cases. What I would like to
> avoid, no matter what the eventual solution chosen, is representing
> this complexity in multiple places. Currently the only two
> candidates for this logic are the placement engine, which knows
> about these relationships already, or the compute service itself,
> which has to handle the management of these complex virtualized
> resources.
> 
> I don't know the answer. I'm hoping that we can have a discussion
> that might uncover a clear approach, or, at the very least, one
> that is less murky than the others.
> 

I wasn't part of either the scheduler meeting or the hangout (hit by a
French holiday), so I don't have all the details in mind and could
probably be making wrong assumptions; I apologize in advance if I'm
saying anything silly.

That said, I still have some opinions and I'll put them here. Thanks
for having brought 

Re: [openstack-dev] [requirements][mistral][tripleo][horizon][nova][releases] release models for projects tracked in global-requirements.txt

2017-06-06 Thread Dougal Matthews
On 31 May 2017 at 09:35, Renat Akhmerov  wrote:

>
> On 31 May 2017, 15:08 +0700, Thierry Carrez ,
> wrote:
>
>
> This has hit us with the mistral and tripleo projects particularly
> (tagged in the title). They disallow pbr-3.0.0 and in the case of
> mistral sqlalchemy updates.
>
> [mistral]
> mistral - blocking sqlalchemy - milestones
>
>
> I wonder why mistral is in requirements. Looks like tripleo-common is
> depending on it ? Could someone shine some light on this ? It might just
> mean mistral-lib is missing a few functions, and switching the release
> model of mistral itself might be overkill ?
>
>
> This dependency is currently needed to create custom Mistral actions. It
> was originally not the best architecture and one of the reasons to create
> 'mistral-lib' was in getting rid of dependency on ‘mistral’ by moving all
> that’s needed for creating actions into a lib (plus something else). The
> thing is that the transition is not over and APIs that we put into
> ‘mistral-lib’ are still experimental. The plan is to complete this
> initiative, including docs and needed refactoring, till the end of Pike.
>
> What possible negative consequences may we have if we switch release model
> to "cycle-with-intermediary”?
>

I don't fully understand this, but I have one concern that I'll try and
explain.

Mistral master is developed against master of other OpenStack projects
(Keystone for auth, and all projects for OpenStack actions). If we were to
release 5.0 today, it would mean that Mistral has a release that is tested
against unreleased Pike but would need to work with Ocata stable releases
(and AFAIK we have not tested Mistral master with Ocata Keystone etc.)

We are very close to breaking the link between tripleo-common and mistral -
I would favour that approach and would prefer a nasty hack to rush that
along rather than changing Mistral's release cycle. I expect to remove
mistral from requirements.txt after the transition anyway.

What needs to happen to remove the dep?
- RDO promotion to get a new mistral-lib release
- After promotion this should start passing
https://review.openstack.org/#/c/454719/
- Port this functionality to tripleo-common
https://github.com/openstack/mistral/blob/master/mistral/utils/openstack/keystone.py
(we were planning on moving this to mistral-extra, but it could go into
tripleo-common as a short term solution)
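
For context, the end state we're after is that a custom action only needs
mistral-lib, roughly like this (a sketch only; as noted above the mistral-lib
API is still experimental, so names may change):

    # Sketch of a custom action that depends only on mistral-lib rather
    # than on mistral itself.
    from mistral_lib import actions

    class EchoAction(actions.Action):
        def __init__(self, message):
            self.message = message

        def run(self, context):
            return self.message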



>
> Renat Akhmerov
> @Nokia
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] How to add a release notes file to Vitrage

2017-06-06 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi,

In order to add a release notes file to Vitrage, please perform the following:
$ cd /opt/stack/vitrage
$ reno new my_new_feature
Created new notes file in 
releasenotes/notes/my_new_feature-5e88573b6d99abbb.yaml

Edit the file, remove everything that you don’t need, and write 3-4 lines of 
description under ‘features’.
Example: 
https://github.com/openstack/vitrage/blob/master/releasenotes/notes/collectd-datasource-a730f06aff840c8f.yaml
Ocata release notes: https://docs.openstack.org/releasenotes/vitrage/ocata.html
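
A minimal note would look something like this (keep only the sections you
actually need; the content below is just a placeholder):

    features:
      - |
        Added support for <my new feature>. Describe here in 3-4 lines what
        the feature does and how to use it.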

Thanks,
Ifat.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] First Vitrage Pike release by the end of this week

2017-06-06 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi,

The Pike-2 milestone is at the end of this week, and although we are not
following the milestones model (we follow a cycle-with-intermediary model), we
need to have the first Vitrage Pike release by the end of this week.

I would like to release vitrage, python-vitrageclient and vitrage-dashboard 
tomorrow. Any objections? Please let me know if you think something has to be 
changed/added before the release. 

Also, we need to add release notes for the newly added features. This list 
includes (let me know if I missed something):

vitrage 
• Vitrage ID
• Support ‘not’ operator in the evaluator templates
• Performance improvements
• Support entity equivalences
• SNMP notifier

python-vitrageclient
• Multi tenancy support
• Resources API

vitrage-dashboard
• Multi tenancy support – Vitrage in admin menu
• Added ‘search’ option in the entity graph

Please add a release notes file for each of your features (I’ll send an 
explanation in a separate mail), or send me a few lines of the feature’s 
description and I’ll add it.

Thanks,
Ifat.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev