Re: [openstack-dev] [nova][scheduler] Anyone relying on the host_subset_size config option?

2017-05-26 Thread Rui Chen
Besides eliminating race conditions, we use host_subset_size in a special
case: when a deployment has hardware of different capacities. Imagine a
simple scenario with two compute hosts (48G vs 16G free RAM) and only the
RAM weigher enabled for nova-scheduler. If we launch 10 instances (1G RAM
flavor) one by one, all 10 instances will be launched on the 48G RAM
compute host, which is not what we want. host_subset_size helps distribute
the load to random available hosts in that situation.
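
For illustration, a rough sketch of the behaviour we rely on (this is only
a toy model of weighing hosts and picking randomly from the top N, not the
actual nova scheduler code):

import random

def pick_host(hosts, requested_ram_gb, host_subset_size=1):
    # hosts: dict of host name -> free RAM in GB (illustrative only).
    # Weigh hosts by free RAM, highest first, like the RAM weigher does.
    weighed = sorted(hosts, key=hosts.get, reverse=True)
    # Take the top N and pick one at random; with N=1 the biggest host
    # always wins, which is how all 10 instances land on the 48G node.
    subset = weighed[:max(host_subset_size, 1)]
    chosen = random.choice(subset)
    hosts[chosen] -= requested_ram_gb
    return chosen

hosts = {"compute-48g": 48, "compute-16g": 16}
print([pick_host(hosts, 1, host_subset_size=2) for _ in range(10)])

With host_subset_size=2 the placements are spread between the two hosts;
with the default of 1 they all go to the 48G host.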

Thank you for sending the mail to the operators list; it lets us get more
feedback before making any changes.

2017-05-27 4:46 GMT+08:00 Ben Nemec :

>
>
> On 05/26/2017 12:17 PM, Edward Leafe wrote:
>
>> [resending to include the operators list]
>>
>> The host_subset_size configuration option was added to the scheduler to
>> help eliminate race conditions when two requests for a similar VM would be
>> processed close together, since the scheduler’s algorithm would select the
>> same host in both cases, leading to a race and a likely failure to build
>> for the second request. By randomly choosing from the top N hosts, the
>> likelihood of a race would be reduced, leading to fewer failed builds.
>>
>> Current changes in the scheduling process now have the scheduler claiming
>> the resources as soon as it selects a host. So in the case above with 2
>> similar requests close together, the first request will claim successfully,
>> but the second will fail *while still in the scheduler*. Upon failing the
>> claim, the scheduler will simply pick the next host in its weighed list
>> until it finds one that it can claim the resources from. So the
>> host_subset_size configuration option is no longer needed.
>>
>> However, we have heard that some operators are relying on this option to
>> help spread instances across their hosts, rather than using the RAM
>> weigher. My question is: will removing this randomness from the scheduling
>> process hurt any operators out there? Or can we safely remove that logic?
>>
>
> We used host_subset_size to schedule randomly in one of the TripleO CI
> clouds.  Essentially we had a heterogeneous set of hardware where the
> numerically larger (more RAM, more disk, equal CPU cores) systems were
> significantly slower.  This caused them to be preferred by the scheduler
> with a normal filter configuration, which is obviously not what we wanted.
> I'm not sure if there's a smarter way to handle it, but setting
> host_subset_size to the number of compute nodes and disabling basically all
> of the weighers allowed us to equally distribute load so at least the slow
> nodes weren't preferred.
>
> That said, we're migrating away from that frankencloud so I certainly
> wouldn't block any scheduler improvements on it.  I'm mostly chiming in to
> describe a possible use case.  And please feel free to point out if there's
> a better way to do this. :-)
>
>


[openstack-dev] [nova] cells v1 job is at 100% fail on master

2017-05-26 Thread Matt Riedemann

Fix is proposed here:

https://review.openstack.org/#/c/468585/

This is just FYI so people aren't needlessly rechecking.

--

Thanks,

Matt



Re: [openstack-dev] [mistral][freezer] adopting oslo.context for logging debugging and tracing

2017-05-26 Thread Vitaliy Nogin
Hi Doug,

Anyway, thanks for the notification. We really appreciate it.

Regards,
Vitaliy

> On 26 May 2017, at 20:54, Doug Hellmann  wrote:
> 
> Excerpts from Saad Zaher's message of 2017-05-26 12:03:24 +0100:
>> Hi Doug,
>> 
>> Thanks for your review. Actually freezer has a separate repo for the api,
>> it can be found here [1]. Freezer is using oslo.context since newton. If
>> you have the time you can take a look at it and let us know if you have any
>> comments.
> 
> Ah, that explains why I couldn't find it in the freezer repo. :-)
> 
> Doug
> 
>> 
>> Thanks for your help
>> 
>> [1] https://github.com/openstack/freezer-api
>> 
>> Best Regards,
>> Saad!
>> 
>> On Fri, May 26, 2017 at 5:45 AM, Renat Akhmerov 
>> wrote:
>> 
>>> Thanks Doug. We’ll look into this.
>>> 
>>> @Tuan, are there any roadblocks with the patch you’re working on? [1]
>>> 
>>> [1] https://review.openstack.org/#/c/455407/
>>> 
>>> 
>>> Renat
>>> 
>>> On 26 May 2017, 01:54 +0700, Doug Hellmann , wrote:
>>> 
>>> The new work to add the exception information and request ID tracing
>>> depends on using both oslo.context and oslo.log to have all of the
>>> relevant pieces of information available as log messages are emitted.
>>> 
>>> In the course of reviewing the "done" status for those initiatives,
>>> I noticed that although mistral and freezer are using oslo.log,
>>> neither uses oslo.context. That means neither project will get the
>>> extra debugging information, and neither project will see the global
>>> request ID in logs.
>>> 
>>> I started looking at updating mistral's context to use oslo.context
>>> as a base class, but ran into some issues because of some extensions
>>> made to the existing class. I wasn't able to find where freezer is
>>> doing anything at all with an API request context.
>>> 
>>> I'm available to help, if someone else wants to pick up the work.
>>> 
>>> Doug
>>> 




Re: [openstack-dev] [swift] new additional team meeting

2017-05-26 Thread Clay Gerrard
On Fri, May 26, 2017 at 4:06 PM, John Dickinson  wrote:

>
> The new meeting is at a reasonable time for just about everyone, other
> than those who live in New York to San Francisco time zones.


define *un*-reasonable ;)  Regardless we'll have the logs.


>
> I'd like to thank Mahati for leading the group in organizing and
> facilitating this new meeting.
>

+1


> I'm looking forward to seeing how this will help our team communicate and
> grow.
>
>
It'll be great!

-Clay


[openstack-dev] [swift] new additional team meeting

2017-05-26 Thread John Dickinson
In Boston, one of the topics was how to better facilitate communication in our
global community. Like some other OpenStack projects, we've decided to add an 
additional meeting that is better scheduled for different time zones.

Our current meeting remains at 2100 UTC on Wednesdays in #openstack-meeting.

We are adding another biweekly meeting starting next week (May 31):
0700 UTC on Wednesdays in #openstack-meeting.

The new meeting is at a reasonable time for just about everyone, other than 
those who live in New York to San Francisco time zones. In this new meeting, 
we'll be addressing specific patches and concerns that those in attendance 
have. We'll also be using the time to help raise issues and discuss topics that 
pertain to the entire community.

I'd like to thank Mahati for leading the group in organizing and facilitating
this new meeting.

Although a new meeting will bring new challenges, I'm looking forward to seeing 
how this will help our team communicate and grow.

--John







Re: [openstack-dev] [TripleO] Passing along some field feedback

2017-05-26 Thread Alex Schultz
On Fri, May 26, 2017 at 3:34 PM, Ben Nemec  wrote:
> Hi,
>
> As some of you may know, I have a weekly meeting with some of the Red Hat
> folks who are out in the field using TripleO to deploy OpenStack at customer
> sites.  The past couple of weeks have been particularly interesting, so I
> thought I'd take a moment to disseminate the comments to the broader
> developer community.  I'll try to keep it from getting too wall-of-texty,
> but no promises. :-)
>
> Here goes:
>
> * There was interest in using split stack to avoid having to use TripleO's
> baremetal provisioning.  I let them know that technically split stack was
> possible in Newton and that further usability improvements were coming in
> Ocata and Pike.  Keep in mind that for many of these companies Newton is
> still a new shiny thing that they're looking to move onto, so even though
> this functionality seems like it's been around for a long time to us that
> isn't necessarily the case in the field.
>
> * Some concerns around managing config with TripleO were raised,
> specifically two points: 1) Stack updates can cause service outages, even if
> you're just looking to change one minor config value in a single service.
> 2) Stack updates take a long time.  45 minutes to apply one config value
> seems a little heavy from a user perspective.
>

Yeah, we keep trying to work on 1 and I think we're in a good spot for
that (at least better than previous releases). But it's also on
developers to ensure that, when you add or change when something is run,
you do it in the right step so that configurations don't get removed and
re-added on updates. This is particularly bad with services that are
fronted by Apache, as that occurs in step 3. If you add your service in
step 4, then on updates it is removed from the configuration (service
restart) in step 3 and then reapplied in step 4 (service restart).

> For 1, I did mention that we had been working to reduce the number of
> spurious service restarts on update.  2 is obviously a little trickier. One
> thing that was requested was a mode where a Puppet step doesn't get run
> unless something changes - which is the exact opposite of the feedback we
> got initially on TripleO.  Back then people wanted a stack update to assert
> state every time, like regular Puppet would.  There probably weren't enough
> people in this discussion to take immediate action on this, but I thought it
> was an interesting potential change in philosophy.  Also it drives home the
> need for better performance of TripleO as a whole.
>

So the problem with 'only run if something changes' is that it assumes
nothing has changed outside of the configuration managed by TripleO. And
as we all know, people log in to servers, mess with settings, and then
forget they left them like that. The only way we can be assured the
state is correct is to go through the entire update process. So I think
this goes back to the way folks manage their configurations: when you
have two conflicting tools that manage things slightly differently, you
end up with operational problems. I would say it would be better to work
on shortening the actual update process time rather than increasing the
complexity by trying to add support for not always running something.
That being said, if heat could determine when the compiled data has
changed between updates and only run on those nodes, maybe it wouldn't
be so hard.
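
Purely to illustrate the idea (this is not something heat or TripleO
implements today, just a sketch of the concept):

import hashlib
import json

def config_fingerprint(compiled_data):
    # Hash the compiled per-node configuration so a change is detectable.
    blob = json.dumps(compiled_data, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()

def nodes_needing_update(previous, current):
    # previous/current: dict of node name -> compiled config (illustrative).
    # Only the nodes whose compiled data changed would be re-run.
    return [node for node, data in current.items()
            if config_fingerprint(data) != config_fingerprint(previous.get(node, {}))]

But again, that only helps if nothing was changed behind our back on the
nodes themselves.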

> * For large scale deployments there was interest in leveraging the
> oslo.messaging split RPC/notifications.  Initially just separate Rabbit
> clusters, but potentially in the future completely different drivers that
> better suit the two different use cases.  Since this is something I believe
> operators are already using in the field, it would be good to get support
> for it in TripleO.  I suggested they file an RFE, and I went ahead and
> opened a wishlist bug upstream:
> https://bugs.launchpad.net/tripleo/+bug/1693928
>

It's in the underlying puppet modules, so technically it could be
deployed if you can set up two distinct messaging infrastructures with
TripleO.

> And although I have a bunch more, I'm going to stop here in an attempt to
> avoid complete information overload.  I'll send another email (or two...)
> another day with some of the other topics that came up.
>
> Thanks.
>
> -Ben
>
>



[openstack-dev] no upgrades meeting on May 29th

2017-05-26 Thread Ihar Hrachyshka
It's Memorial Day in the US.

Ihar



Re: [openstack-dev] [keystone] deprecating the policy and credential APIs

2017-05-26 Thread Adrian Turjak
So I've actually been using the credentials API for some of my work
towards MFA, different types of MFA, and even different stages for MFA.

For example (TOTP in this case), I first have a service create a user's
TOTP secret with type 'totp-draft' so that the totp auth method can't use
it, but my service can still store and access it in Keystone to do an
initial challenge-response, before changing its type to 'totp' so it can
be used for MFA.

I'm also playing with an MFA credential type of 'CIDR', which takes a
CIDR-formatted IP range and allows an additional auth factor based on the
source IP address. In the auth module we check that the user's CIDR
credentials match the proxy-forwarded source IP. So for service/automated
accounts you can set a range of IPs they can authenticate from. This is
useful because such accounts often have a lot of power, but since they are
automated, TOTP is not a workable second factor.
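
The check itself is tiny; a minimal sketch (names are mine, not actual
keystone code):

import ipaddress

def source_ip_allowed(cidr_credentials, forwarded_for):
    # True if the proxy-forwarded source IP falls inside any of the CIDR
    # ranges stored as credentials for the user (illustrative only).
    source = ipaddress.ip_address(forwarded_for)
    return any(source in ipaddress.ip_network(cidr, strict=False)
               for cidr in cidr_credentials)

# e.g. a service account restricted to an internal range:
source_ip_allowed(["10.0.0.0/16"], "10.0.4.12")   # True
source_ip_allowed(["10.0.0.0/16"], "192.0.2.10")  # False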

So, for me, the flexibility of the credentials API is really useful. I'm
trying to find useful/different MFA options, and credentials is a great
place to store data about them, so I want/need something like it. If it's
moved to the user object and we make it flexible rather than hard-coded
.totp or .ec2 values, I'm all for it. Maybe user.credentials could act as
a mini credentials manager of sorts, or a mini credentials API on a
per-user basis.

I'd love to help here, but I've been swamped as it is. I haven't even had
time to properly finish/continue work on upstream Keystone MFA in ages.
So I only tentatively put my hand up for helping with this!


Following that, the way we handle ec2 currently is fairly awful. The
access secret is pretty much a password and we store it in plain text,
and even with the addition of encryption for credentials, that's stupid.
The access key, sure, but the access secret should always have been
hashed, because a user should only ever see that secret once, when we
generate it, just like on real AWS. I'll be honest, I haven't looked at
how the auth works for ec2; I'd assume it could be changed to hash and
compare, but I could be wrong.

On 27/05/17 03:21, Lance Bragstad wrote:
> At the PTG in Atlanta, we talked about deprecating the policy and
> credential APIs. The policy API doesn't do anything and secrets
> shouldn't be stored in credential API. Reasoning and outcomes can be
> found in the etherpad from the session [0]. There was some progress
> made on the policy API [1], but it's missing a couple patches to
> tempest. Is anyone willing to carry the deprecation over the finish
> line for Pike?
>
> According to the outcomes from the session, the credential API needs a
> little bit of work before we can deprecate it. It was determined at
> the PTG that we if keystone absolutely has to store ec2 and totp
> secrets, they should be formal first-class attributes of the user
> (i.e. like how we treat passwords `user.password`). This would require
> refactoring the existing totp and ec2 implementations to use user
> attributes. Then we could move forward with deprecating the actual
> credential API. Depending on the amount of work required to make .totp
> and .ec2 formal user attributes, I'd be fine with pushing the
> deprecation to Queens if needed.
>
> Does this interest anyone?
>
>
> [0] https://etherpad.openstack.org/p/pike-ptg-keystone-deprecations
> [1] https://review.openstack.org/#/c/438096/
>
>



[openstack-dev] [TripleO] Passing along some field feedback

2017-05-26 Thread Ben Nemec

Hi,

As some of you may know, I have a weekly meeting with some of the Red 
Hat folks who are out in the field using TripleO to deploy OpenStack at 
customer sites.  The past couple of weeks have been particularly 
interesting, so I thought I'd take a moment to disseminate the comments 
to the broader developer community.  I'll try to keep it from getting 
too wall-of-texty, but no promises. :-)


Here goes:

* There was interest in using split stack to avoid having to use 
TripleO's baremetal provisioning.  I let them know that technically 
split stack was possible in Newton and that further usability 
improvements were coming in Ocata and Pike.  Keep in mind that for many 
of these companies Newton is still a new shiny thing that they're 
looking to move onto, so even though this functionality seems like it's 
been around for a long time to us that isn't necessarily the case in the 
field.


* Some concerns around managing config with TripleO were raised, 
specifically two points: 1) Stack updates can cause service outages, 
even if you're just looking to change one minor config value in a single 
service.  2) Stack updates take a long time.  45 minutes to apply one 
config value seems a little heavy from a user perspective.


For 1, I did mention that we had been working to reduce the number of 
spurious service restarts on update.  2 is obviously a little trickier. 
One thing that was requested was a mode where a Puppet step doesn't get 
run unless something changes - which is the exact opposite of the 
feedback we got initially on TripleO.  Back then people wanted a stack 
update to assert state every time, like regular Puppet would.  There 
probably weren't enough people in this discussion to take immediate 
action on this, but I thought it was an interesting potential change in 
philosophy.  Also it drives home the need for better performance of 
TripleO as a whole.


* For large scale deployments there was interest in leveraging the 
oslo.messaging split RPC/notifications.  Initially just separate Rabbit 
clusters, but potentially in the future completely different drivers 
that better suit the two different use cases.  Since this is something I 
believe operators are already using in the field, it would be good to 
get support for it in TripleO.  I suggested they file an RFE, and I went 
ahead and opened a wishlist bug upstream: 
https://bugs.launchpad.net/tripleo/+bug/1693928


And although I have a bunch more, I'm going to stop here in an attempt 
to avoid complete information overload.  I'll send another email (or 
two...) another day with some of the other topics that came up.


Thanks.

-Ben




Re: [openstack-dev] [neutron][L3][HA] 2 masters after reboot of node

2017-05-26 Thread Kevin Benton
I recommend a completely new RPC endpoint to trigger this behavior that the
L3 agent calls before sync routers. Don't try to add it to sync routers
which is already quite complex. :)
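
Roughly something along these lines — just a sketch, and every name below
(class, method, RPC call) is hypothetical rather than existing Neutron code:

class L3AgentStartupNotifier(object):
    # Agent-side helper that asks the server to mark this node's HA ports
    # DOWN once at startup, before the normal sync_routers flow runs.
    def __init__(self, rpc_client, host):
        self.client = rpc_client
        self.host = host

    def reset_ha_port_statuses(self, context):
        # A single, dedicated call keeps the already complex sync_routers
        # path untouched.
        return self.client.call(context, 'set_ha_ports_down', host=self.host)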

On Fri, May 26, 2017 at 7:53 AM, Anil Venkata 
wrote:

> Thanks Kevin, Agree with you. I will try to implement this suggestion.
>
> On Fri, May 26, 2017 at 7:01 PM, Kevin Benton  wrote:
>
>> Just triggering a status change should just be handled as a port update
>> on the agent side which shouldn't interrupt any existing flows. So an l3
>> agent reboot should be safe in this case.
>>
>> On May 26, 2017 6:06 AM, "Anil Venkata"  wrote:
>>
>>> On Fri, May 26, 2017 at 6:14 PM, Kevin Benton  wrote:
>>>
 Perhaps when the L3 agent starts up we can have it explicitly set the
 port status to DOWN for all of the HA ports on that node. Then we are
 guaranteed that when they go to ACTIVE it will be because the L2 agent has
 wired the ports.


>>> Thanks Kevin. Would it create a dependency of the data plane on the
>>> control plane? For example, if the node is properly configured (l2 agent
>>> wired up, keepalived configured, VRRP exchange happening) but the user
>>> restarted only the l3 agent, then with this suggestion wouldn't we break
>>> l2 connectivity (leading to multiple HA masters) by reconfiguring again?
>>>
>>> Or is there a way the server can detect that the node (not only the
>>> agent) is down and set the port status?
>>>
>>>
 On Fri, May 26, 2017 at 5:27 AM, Anil Venkata 
 wrote:

> This is regarding https://bugs.launchpad.net/neutron/+bug/1597461
> Earlier, to fix this, we added code [1] to spawn keepalived only when
> the HA network port status is active.
>
> But on reboot the node will get the HA network port's status as ACTIVE
> from the server (please see comment [2]), even though the l2 agent might
> not have wired [3] the port, resulting in keepalived being spawned. Any
> suggestions on how the l3 agent can detect that the l2 agent has not
> wired the port and can then avoid spawning keepalived?
>
> [1] https://review.openstack.org/#/c/357458/
> [2] https://bugs.launchpad.net/neutron/+bug/1597461/comments/26
> [3] l2 agent wiring means setting up ovs flows on br-tun to make port
> usable
>
> Thanks
> Anilvenkata
>


Re: [openstack-dev] [nova][scheduler] Anyone relying on the host_subset_size config option?

2017-05-26 Thread Ben Nemec



On 05/26/2017 12:17 PM, Edward Leafe wrote:

[resending to include the operators list]

The host_subset_size configuration option was added to the scheduler to help 
eliminate race conditions when two requests for a similar VM would be processed 
close together, since the scheduler’s algorithm would select the same host in 
both cases, leading to a race and a likely failure to build for the second 
request. By randomly choosing from the top N hosts, the likelihood of a race 
would be reduced, leading to fewer failed builds.

Current changes in the scheduling process now have the scheduler claiming the 
resources as soon as it selects a host. So in the case above with 2 similar 
requests close together, the first request will claim successfully, but the 
second will fail *while still in the scheduler*. Upon failing the claim, the 
scheduler will simply pick the next host in its weighed list until it finds one 
that it can claim the resources from. So the host_subset_size configuration 
option is no longer needed.

However, we have heard that some operators are relying on this option to help 
spread instances across their hosts, rather than using the RAM weigher. My 
question is: will removing this randomness from the scheduling process hurt any 
operators out there? Or can we safely remove that logic?


We used host_subset_size to schedule randomly in one of the TripleO CI 
clouds.  Essentially we had a heterogeneous set of hardware where the 
numerically larger (more RAM, more disk, equal CPU cores) systems were 
significantly slower.  This caused them to be preferred by the scheduler 
with a normal filter configuration, which is obviously not what we 
wanted.  I'm not sure if there's a smarter way to handle it, but setting 
host_subset_size to the number of compute nodes and disabling basically 
all of the weighers allowed us to equally distribute load so at least 
the slow nodes weren't preferred.


That said, we're migrating away from that frankencloud so I certainly 
wouldn't block any scheduler improvements on it.  I'm mostly chiming in 
to describe a possible use case.  And please feel free to point out if 
there's a better way to do this. :-)




Re: [openstack-dev] [upgrades][skip-level][leapfrog] - RFC - Skipping releases when upgrading

2017-05-26 Thread David Moreau Simard
I've mentioned this elsewhere but writing here for posterity...

Making N to N+1 upgrades seamless and work well is already challenging
today, which is one of the reasons why people aren't upgrading in the
first place.
Making N to N+1 upgrades work as well as possible already puts a great
strain on developers and resources; think about the testing and CI
involved in making sure things really work.

My opinion is that if upgrades were made to be a simple, easy and
seamless operation, it wouldn't be that much of a problem to upgrade
from N to N+3 by upgrading from release to release (three times) until
you've caught up.
But then, if upgrades are awesome, maybe operators won't be lagging 3
releases behind anymore.
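
To put "release to release (three times)" in concrete terms, the loop is
conceptually just the following — a sketch only, with made-up venv paths
and nova used as the example service, in the spirit of what the OSA
leap-upgrade tooling does:

import subprocess

# Hypothetical per-release virtualenvs containing that release's code, so
# each release's own schema and data migrations are what actually run.
RELEASES = ["newton", "ocata", "pike"]  # N+1, N+2, N+3

def upgrade_one_release(release):
    venv_bin = "/opt/openstack-%s/bin" % release
    subprocess.check_call([venv_bin + "/nova-manage", "db", "sync"])
    subprocess.check_call(
        [venv_bin + "/nova-manage", "db", "online_data_migrations"])

for release in RELEASES:
    upgrade_one_release(release)

All of the difficulty is in making each of those individual hops reliable,
which is the point above.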


David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]


On Thu, May 25, 2017 at 9:55 PM, Carter, Kevin  wrote:
> Hello Stackers,
>
> As I'm sure many of you know there was a talk about doing "skip-level"[0]
> upgrades at the OpenStack Summit which quite a few folks were interested in.
> Today many of the interested parties got together and talked about doing
> more of this in a formalized capacity. Essentially we're looking for cloud
> upgrades with the possibility of skipping releases, ideally enabling an N+3
> upgrade. In our opinion it would go a very long way to solving cloud
> consumer and deployer problems if folks didn't have to deal with an upgrade
> every six months. While we talked about various issues and some of the
> current approaches being kicked around we wanted to field our general chat
> to the rest of the community and request input from folks that may have
> already fought such a beast. If you've taken on an adventure like this how
> did you approach it? Did it work? Any known issues, gotchas, or things folks
> should be generally aware of?
>
>
> During our chat today we generally landed on an in-place upgrade with known
> API service downtime and little (at least as little as possible) data plane
> downtime. The process discussed was basically:
> a1. Create utility "thing-a-me" (container, venv, etc) which contains the
> required code to run a service through all of the required upgrades.
> a2. Stop service(s).
> a3. Run migration(s)/upgrade(s) for all releases using the utility
> "thing-a-me".
> a4. Repeat for all services.
>
> b1. Once all required migrations are complete run a deployment using the
> target release.
> b2. Ensure all services are restarted.
> b3. Ensure cloud is functional.
> b4. profit!
>
> Obviously, there's a lot of hand waving here but such a process is being
> developed by the OpenStack-Ansible project[1]. Currently, the OSA tooling
> will allow deployers to upgrade from Juno/Kilo to Newton using Ubuntu 14.04.
> While this has worked in the lab, it's early in development (YMMV). Also,
> the tooling is not very general purpose or portable outside of OSA but it
> could serve as a guide or just a general talking point. Are there other
> tools out there that solve for the multi-release upgrade? Are there any
> folks that might want to share their expertise? Maybe a process outline that
> worked? Best practices? Do folks believe tools are the right way to solve
> this or would comprehensive upgrade documentation be better for the general
> community?
>
> As most of the upgrade issues center around database migrations, we
> discussed some of the potential pitfalls at length. One approach was to
> roll-up all DB migrations into a single repository and run all upgrades for
> a given project in one step. Another was to simply have multiple python
> virtual environments and just run in-line migrations from a version specific
> venv (this is what the OSA tooling does). Does one way work better than the
> other? Any thoughts on how this could be better? Would having N+2/3
> migrations addressable within the projects, even if they're not tested any
> longer, be helpful?
>
> It was our general thought that folks would be interested in having the
> ability to skip releases so we'd like to hear from the community to validate
> our thinking. Additionally, we'd like to get more minds together and see if
> folks are wanting to work on such an initiative, even if this turns into
> nothing more than a co-op/channel where we can "phone a friend". Would it be
> good to try and secure some PTG space to work on this? Should we try to
> get a working group going?
>
> If you've made it this far, please forgive my stream of consciousness. I'm
> trying to ask a lot of questions and distill long form conversation(s) into
> as little text as possible all without writing a novel. With that said, I
> hope this finds you well, I look forward to hearing from (and working with)
> you soon.
>
> [0] https://etherpad.openstack.org/p/BOS-forum-skip-level-upgrading
> [1]
> https://github.com/openstack/openstack-ansible-ops/tree/master/leap-upgrades
>
>
> --
>
> Kevin Carter
> IRC: Cloudnull
>

[openstack-dev] [TripleO] overcloud_containers.yaml: container versioning, tags and use cases ?

2017-05-26 Thread David Moreau Simard
Hi,

Today we discussed a challenge around image tags that mostly boils
down to limitations in how overcloud_containers.yaml is constructed
and used.

TL;DR, we need a smart and easy way to work with the
overcloud_containers.yaml file (especially tags).

Let's highlight a few use cases that we need to work through:

#1. Building containers
  For building containers, all we really care about is the names of the
images we need to build.
  Today, we install a trunk repository and then install
tripleo-common-containers from that trunk repository.
  We then mostly grep/sed/awk our way from overcloud_containers.yaml
to a clean list of images to build and then build those.
  Relatively okay with this but prone to things breaking -- a clean
way to get just the list of images out of there would be nice.

#2. Testing and promoting containers
  This comes right after use case #1 where we build containers in the pipeline.
  For those familiar with the CI pipeline to do promotions [1], this
would look a bit like this [2].

  In practice, this works basically the same way as we build, test and
promote tripleo-quickstart images.
  We pick a particular trunk repository hash and build containers for
that hash. These are then pushed with both the tags ":latest" and
":<hash>".
  We're then supposed to test those containers in the pipeline but to
do that, we need to be pulling from :<hash>, not :latest...
although they are in theory equivalent at that given time, this might
not always be true.
  So the testing job(s) need a way to easily customize/pull from that
particular hash instead of the hardcoded latest we have right now.
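
The test jobs could then trivially point at the specific hash rather than
":latest", e.g. (same assumptions as the sketch above):

def retag(images, delorean_hash):
    # Swap the ':latest' tag for the specific hash tag before pulling
    # (sketch; assumes every entry currently ends in ':latest').
    return [img.replace(":latest", ":" + delorean_hash) for img in images]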

#3. Upstream gate jobs
  Ideally, gate jobs should never use ":latest". This is in trunk/dlrn
terms the equivalent of "/current/" or "/consistent/".
  They'd use something like ":latest-passed-ci" which would be the
proper equivalent of "/current-passed-ci/" or "/current-tripleo/".

  This brings an interesting challenge around how we currently add new
images to overcloud_containers.yaml (example [3]).
  It is expected that, when you add the image, the image is already
present on the registry because otherwise the container jobs will fail
since this new image cannot be pulled (example [4]).
  My understanding currently is that humans may build and push images
to the registry ahead of time so that this works.
  We can keep a similar approach if that's what we really want with
the new private registry; the job that builds containers is made
generic exactly so that it can build just a specific set of image(s) if
we want.
  Here's the catch, though: this new container image will have the
":latest" tag, it will not have ":latest-passed-ci" because it hasn't
passed CI yet, it's being added just now.
  So how do we address this ?

  Note:
  We've already discussed that some containers need to pick up the
latest and the greatest from the "/current/" repository, either
because they are "direct" tripleo packages or if "Depends-On" is used.
  So far, the solution we seem to be going towards is to pick up the
containers from ":latest-passed-ci" and then more or less add a 'yum
update' layer to the images needing an update.
  This is the option that is in the best interest of time, we'd
otherwise be spending too much time building containers in jobs that
are already taking way too long to run.

#4. Test days
  When doing test days, we know to point testers to
/current-passed-ci/ as well as tested quickstart images.
  How can we make it easy for containers ? If the container list from
tripleo-common is hardcoded to latest, that won't work. If it's
hardcoded to :latest-passed-ci, it won't work for other use cases.
  Ideally this would be super easy for end users as well as developers
so that they can get started easily.

#5 Stable releases, users, operators & co
  The packaging workflow is not the same for trunk (dlrn out of git
source on trunk.rdoproject.org) and for stable releases (CentOS build
system out of released tarballs on mirror.centos.org).
  It's also going to be different for containers.
  For trunk, we'll be building containers with trunk repositories and
publishing them to a private registry analogous to
trunk.rdoproject.org repositories.
  For stable releases, while still hand-wavy and foggy, we seem to be
headed in the direction of the CentOS official registry which takes
Dockerfiles in pseudo dist-git repositories and builds/publishes the
containers through Jenkins jobs.
  This is sort of similar to how downstream would work if you replace
Jenkins by Brew/OSBS.

  So here, we want to use the overcloud_containers.yaml file to
"compile" Dockerfiles which will be shipped to git repositories and
then built by another process.
  These containers will be published somewhere that tripleo-common is
sort of expected to know ahead of time because users, customers and
developers need to be pulling from that "stable" source and from a
"stable" tag.

So... what do we do ?

[1]: 

Re: [openstack-dev] [requirements][tripleo][heat] Projects holding back requirements updates

2017-05-26 Thread Emilien Macchi
On Fri, May 26, 2017 at 11:22 AM, Steven Hardy  wrote:
> On Thu, May 25, 2017 at 03:15:13PM -0500, Ben Nemec wrote:
>> Tagging with tripleo and heat teams for the os-*-config projects.  I'm not
>> sure which owns them now, but it sounds like we need a new release.
>
> I think they're still owned by the TripleO team, but we missed them in the
> pike-1 release, I pushed this patch aiming to resolve that:

Indeed, we missed it; apologies for that.

> https://review.openstack.org/#/c/468292/

Thanks!

>
> Steve
>



-- 
Emilien Macchi



Re: [openstack-dev] [mistral][freezer] adopting oslo.context for logging debugging and tracing

2017-05-26 Thread Doug Hellmann
Excerpts from Saad Zaher's message of 2017-05-26 12:03:24 +0100:
> Hi Doug,
> 
> Thanks for your review. Actually freezer has a separate repo for the api,
> it can be found here [1]. Freezer is using oslo.context since newton. If
> you have the time you can take a look at it and let us know if you have any
> comments.

Ah, that explains why I couldn't find it in the freezer repo. :-)

Doug

> 
> Thanks for your help
> 
> [1] https://github.com/openstack/freezer-api
> 
> Best Regards,
> Saad!
> 
> On Fri, May 26, 2017 at 5:45 AM, Renat Akhmerov 
> wrote:
> 
> > Thanks Doug. We’ll look into this.
> >
> > @Tuan, are there any roadblocks with the patch you’re working on? [1]
> >
> > [1] https://review.openstack.org/#/c/455407/
> >
> >
> > Renat
> >
> > On 26 May 2017, 01:54 +0700, Doug Hellmann , wrote:
> >
> > The new work to add the exception information and request ID tracing
> > depends on using both oslo.context and oslo.log to have all of the
> > relevant pieces of information available as log messages are emitted.
> >
> > In the course of reviewing the "done" status for those initiatives,
> > I noticed that although mistral and freezer are using oslo.log,
> > neither uses oslo.context. That means neither project will get the
> > extra debugging information, and neither project will see the global
> > request ID in logs.
> >
> > I started looking at updating mistral's context to use oslo.context
> > as a base class, but ran into some issues because of some extensions
> > made to the existing class. I wasn't able to find where freezer is
> > doing anything at all with an API request context.
> >
> > I'm available to help, if someone else wants to pick up the work.
> >
> > Doug
> >


Re: [openstack-dev] [nova][scheduler] Anyone relying on the host_subset_size config option?

2017-05-26 Thread Jay Pipes

On 05/26/2017 01:14 PM, Edward Leafe wrote:

The host_subset_size configuration option was added to the scheduler to help 
eliminate race conditions when two requests for a similar VM would be processed 
close together, since the scheduler’s algorithm would select the same host in 
both cases, leading to a race and a likely failure to build for the second 
request. By randomly choosing from the top N hosts, the likelihood of a race 
would be reduced, leading to fewer failed builds.

Current changes in the scheduling process now have the scheduler claiming the 
resources as soon as it selects a host. So in the case above with 2 similar 
requests close together, the first request will claim successfully, but the 
second will fail *while still in the scheduler*. Upon failing the claim, the 
scheduler will simply pick the next host in its weighed list until it finds one 
that it can claim the resources from. So the host_subset_size configuration 
option is no longer needed.

However, we have heard that some operators are relying on this option to help 
spread instances across their hosts, rather than using the RAM weigher. My 
question is: will removing this randomness from the scheduling process hurt any 
operators out there? Or can we safely remove that logic?


Actually, I don't believe this should be removed. The randomness that is 
injected into the placement decision using this configuration setting is 
useful for reducing contention even in the scheduler claim process.


When benchmarking claims in the scheduler here:

https://github.com/jaypipes/placement-bench

I found that the use of a "partitioning strategy" resulted in dramatic 
reduction in lock contention in the claim process. The modulo and random 
partitioning strategies both seemed to work pretty well for reducing 
lock retries.
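
Roughly, the idea is this (a toy illustration of those strategies, not the
benchmark code itself):

import random

def pick_partitioned(top_hosts, request_uuid, strategy="modulo"):
    # Spread concurrent claims across the top-weighed hosts so two
    # near-simultaneous requests don't always fight over the same host.
    if strategy == "modulo":
        # Deterministically shard requests across the candidate hosts.
        return top_hosts[hash(request_uuid) % len(top_hosts)]
    # "random" strategy: the same effect host_subset_size > 1 gives today.
    return random.choice(top_hosts)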


So, in short, I'd say keep it.

Best,
-jay



[openstack-dev] [panko] dropping hbase driver support

2017-05-26 Thread gordon chung
hi,

as all of you know, we moved all storage out of ceilometer so it now
handles only data generation and normalisation. there seems to be very
little contribution to panko, which handles metadata indexing and event
storage, so given how little it's being adopted and how few resources
are being put into supporting it, i'd like to propose dropping hbase
support as a first step in making the project more manageable for
whatever resources choose to support it.

why hbase as the initial candidate to prune?
- it has no testing in gate
- it never had testing in gate
- we didn't receive a single reply in user survey saying hbase was used
- all the devs who originally worked on the driver don't work on openstack
anymore.
- i'd be surprised if it actually worked

i just realised it's a long weekend in some places so i'll let this 
linger for a bit.

cheers,
-- 
gord


[openstack-dev] [nova][scheduler] Anyone relying on the host_subset_size config option?

2017-05-26 Thread Edward Leafe
[resending to include the operators list]

The host_subset_size configuration option was added to the scheduler to help 
eliminate race conditions when two requests for a similar VM would be processed 
close together, since the scheduler’s algorithm would select the same host in 
both cases, leading to a race and a likely failure to build for the second 
request. By randomly choosing from the top N hosts, the likelihood of a race 
would be reduced, leading to fewer failed builds.

Current changes in the scheduling process now have the scheduler claiming the 
resources as soon as it selects a host. So in the case above with 2 similar 
requests close together, the first request will claim successfully, but the 
second will fail *while still in the scheduler*. Upon failing the claim, the 
scheduler will simply pick the next host in its weighed list until it finds one 
that it can claim the resources from. So the host_subset_size configuration 
option is no longer needed.

However, we have heard that some operators are relying on this option to help 
spread instances across their hosts, rather than using the RAM weigher. My 
question is: will removing this randomness from the scheduling process hurt any 
operators out there? Or can we safely remove that logic?


-- Ed Leafe




[openstack-dev] [nova][scheduler] Anyone relying on the host_subset_size config option?

2017-05-26 Thread Edward Leafe
The host_subset_size configuration option was added to the scheduler to help 
eliminate race conditions when two requests for a similar VM would be processed 
close together, since the scheduler’s algorithm would select the same host in 
both cases, leading to a race and a likely failure to build for the second 
request. By randomly choosing from the top N hosts, the likelihood of a race 
would be reduced, leading to fewer failed builds.

Current changes in the scheduling process now have the scheduler claiming the 
resources as soon as it selects a host. So in the case above with 2 similar 
requests close together, the first request will claim successfully, but the 
second will fail *while still in the scheduler*. Upon failing the claim, the 
scheduler will simply pick the next host in its weighed list until it finds one 
that it can claim the resources from. So the host_subset_size configuration 
option is no longer needed.

However, we have heard that some operators are relying on this option to help 
spread instances across their hosts, rather than using the RAM weigher. My 
question is: will removing this randomness from the scheduling process hurt any 
operators out there? Or can we safely remove that logic?


-- Ed Leafe




Re: [openstack-dev] [nova][cinder][barbican] Why is Cinder creating symmetric keys in Barbican for use with encrypted volumes?

2017-05-26 Thread Lee Yarwood
On 26-05-17 17:25:15, Duncan Thomas wrote:
> On 25 May 2017 12:33 pm, "Lee Yarwood"  wrote:
> 
> On 25-05-17 11:38:44, Duncan Thomas wrote:
> > On 25 May 2017 at 11:00, Lee Yarwood  wrote:
> > > This has also reminded me that the plain (dm-crypt) format really needs
> > > to be deprecated this cycle. I posted to the dev and ops ML [2] last
> > > year about this but received no feedback. Assuming there are no last
> > > minute objections I'm going to move forward with deprecating this format
> > > in os-brick this cycle.
> >
> > What is the reasoning for this? There are plenty of people using it, and
> > you're going to break them going forward if you remove it.
> 
> I didn't receive any feedback indicating that we had any users of plain
> when I initially posted to the ML. That said there obviously can be
> users out there and my intention isn't to pull support for this format
> immediately without any migration path to LUKS etc.
> 
> 
> Ok, after a few emails, of the users I knew about, one is happy with luks
> and the others are no longer running openstack. Apologies for the mis-steer

No problem, thanks for replying!

Lee
-- 
Lee Yarwood A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33 F672 2D76



Re: [openstack-dev] [upgrades][skip-level][leapfrog] - RFC - Skipping releases when upgrading

2017-05-26 Thread Dan Smith
> I haven't looked at what Keystone is doing, but to the degree they are
> using triggers, those triggers would only impact new data operations as
> they continue to run into the schema that is straddling between two
> versions (e.g. old column/table still exists, data should be synced to
> new column/table).   If they are actually running a stored procedure to
> migrate existing data (which would be surprising to me...) then I'd
> assume that invokes just like any other "ALTER TABLE" instruction in
> their migrations.  If those operations themselves rely on the triggers,
> that's fine.

I haven't looked closely either, but I thought the point _was_ to
transform data. If they are, and you run through a bunch of migrations
where you end at a spot that expects that data was migrated while
running at step 3, triggers dropped at step 7, and then schema compacted
at step 11, then just blowing through them could be a problem. It'd work
for a greenfield install no problem because there was nothing to
migrate, but real people would trip over it.

> But a keystone person to chime in would be much better than me just
> making stuff up.

Yeah, same :)

--Dan



Re: [openstack-dev] [nova][cinder][barbican] Why is Cinder creating symmetric keys in Barbican for use with encrypted volumes?

2017-05-26 Thread Duncan Thomas
On 25 May 2017 12:33 pm, "Lee Yarwood"  wrote:

On 25-05-17 11:38:44, Duncan Thomas wrote:
> On 25 May 2017 at 11:00, Lee Yarwood  wrote:
> > This has also reminded me that the plain (dm-crypt) format really needs
> > to be deprecated this cycle. I posted to the dev and ops ML [2] last
> > year about this but received no feedback. Assuming there are no last
> > minute objections I'm going to move forward with deprecating this format
> > in os-brick this cycle.
>
> What is the reasoning for this? There are plenty of people using it, and
> you're going to break them going forward if you remove it.

I didn't receive any feedback indicating that we had any users of plain
when I initially posted to the ML. That said there obviously can be
users out there and my intention isn't to pull support for this format
immediately without any migration path to LUKS etc.


Ok, after a few emails, of the users I knew about, one is happy with luks
and the others are no longer running openstack. Apologies for the mis-steer


Re: [openstack-dev] [nova][cinder][barbican] Why is Cinder creating symmetric keys in Barbican for use with encrypted volumes?

2017-05-26 Thread Lee Yarwood
On 25-05-17 11:00:26, Lee Yarwood wrote:
> Hello all,
> 
> I'm currently working on enabling QEMU's native LUKS support within Nova
> [1]. While testing this work with Barbican I noticed that Cinder is
> creating symmetric keys for use with encrypted volumes :
> 
> https://github.com/openstack/cinder/blob/63433278a485b65ae6ed1998e7bc83933ceee167/cinder/volume/flows/api/create_volume.py#L385
> https://github.com/openstack/castellan/blob/64207e303529b7fceb3b8b0f0a65f8f49b3f9b26/castellan/key_manager/barbican_key_manager.py#L206
> 
> However the only supported disk encryption formats on the front-end at
> present are plain (dm-crypt) and LUKS, neither of which use the supplied
> key to directly encrypt or decrypt data. Plain derives a fixed length
> master key from the provided key / passphrase and LUKS uses PBKDF2 to
> derive a key from the key / passphrase that unlocks a separate master
> key.
> 
> https://gitlab.com/cryptsetup/cryptsetup/wikis/FrequentlyAskedQuestions
> - 2.4 What is the difference between "plain" and LUKS format?
> 
> I also can't find any evidence of these keys being used directly on the
> backend for any direct encryption of volumes within c-vol. Happy to be
> corrected here if there are out-of-tree drivers etc that do this.
> 
> IMHO for now we are better off storing a secret passphrase in Barbican
> for use with these encrypted volumes, would there be any objections to
> this? Are there actual plans to use a symmetric key stored in Barbican
> to directly encrypt and decrypt volumes?

I've documented this as a cinder bug below, still happy to discuss this
here on the ML if anyone from Cinder or Barbican disagrees with my
suggestion of passphrases over symmetric keys :

Cinder creating and associating symmetric keys with encrypted volumes when used 
with Barbican
https://bugs.launchpad.net/cinder/+bug/1693840
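
As an aside, a minimal illustration of the LUKS-style behaviour described
above -- the stored secret is only ever fed through a KDF to unlock a
separate master key, so a passphrase is all that is actually needed (this
is a sketch, not what cryptsetup literally does):

import hashlib
import os

passphrase = b"whatever-cinder-stored-in-barbican"  # made-up value
salt = os.urandom(16)
# PBKDF2 turns the passphrase into the key material used to unlock the
# volume's real master key; the secret itself never encrypts data directly.
unlock_key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100000, dklen=32)
print(unlock_key.hex())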

Thanks again,

Lee

> [1] 
> https://specs.openstack.org/openstack/nova-specs/specs/pike/approved/libvirt-qemu-native-luks.html
> [2] 
> http://lists.openstack.org/pipermail/openstack-dev/2016-November/106956.html
-- 
Lee Yarwood A5D1 9385 88CB 7E5F BE64  6618 BCA6 6E33 F672 2D76



Re: [openstack-dev] [upgrades][skip-level][leapfrog] - RFC - Skipping releases when upgrading

2017-05-26 Thread Mike Bayer



On 05/26/2017 10:56 AM, Dan Smith wrote:

As most of the upgrade issues center around database migrations, we
discussed some of the potential pitfalls at length. One approach was to
roll-up all DB migrations into a single repository and run all upgrades
for a given project in one step. Another was to simply have mutliple
python virtual environments and just run in-line migrations from a
version specific venv (this is what the OSA tooling does). Does one way
work better than the other? Any thoughts on how this could be better?


IMHO, and speaking from a Nova perspective, I think that maintaining a
separate repo of migrations is a bad idea. We occasionally have to fix a
migration to handle a case where someone is stuck and can't move past a
certain revision due to some situation that was not originally
understood. If you have a separate copy of our migrations, you wouldn't
get those fixes. Nova hasn't compacted migrations in a while anyway, so
there's not a whole lot of value there I think.



+1 I think it's very important that migration logic not be duplicated.
Nova's (and everyone else's) migration files have the information on how
to move between specific schema versions. Any concatenation of these
into an effective "N+X" migration should be on the fly as much as is
possible.





The other thing to consider is that our _schema_ migrations often
require _data_ migrations to complete before moving on. That means you
really have to move to some milestone version of the schema, then
move/transform data, and then move to the next milestone. Since we
manage those according to releases, those are the milestones that are
most likely to be successful if you're stepping through things.

I do think that the idea of being able to generate a small utility
container (using the broad sense of the word) from each release, and
using those to step through N, N+1, N+2 to arrive at N+3 makes the most
sense.


+1




Nova has offline tooling to push our data migrations (even though the
command is intended to be runnable online). The concern I would have
would be over how to push Keystone's migrations mechanically, since I
believe they moved forward with their proposal to do data migrations in
stored procedures with triggers. Presumably there is a need for
something similar to nova's online-data-migrations command which will
trip all the triggers and provide a green light for moving on?


I haven't looked at what Keystone is doing, but to the degree they are 
using triggers, those triggers would only impact new data operations as 
they continue to run into the schema that is straddling between two 
versions (e.g. old column/table still exists, data should be synced to 
new column/table).   If they are actually running a stored procedure to 
migrate existing data (which would be surprising to me...) then I'd 
assume that invokes just like any other "ALTER TABLE" instruction in 
their migrations.  If those operations themselves rely on the triggers, 
that's fine.


But a keystone person to chime in would be much better than me just 
making stuff up.










In the end, projects support N->N+1 today, so if you're just stepping
through actual 1-version gaps, you should be able to do as many of those
as you want and still be running "supported" transitions. There's a lot
of value in that, IMHO.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] quickstart featureset and reviews

2017-05-26 Thread Gabriele Cerami
Hi,

as discussed during the previous meeting, we have recently been adding a
lot of new jobs that need new featureset files. It's difficult to scan all
the reviews to find an unused featureset file index, so there have been a
lot of conflicts.
To avoid this I created an etherpad at

https://etherpad.openstack.org/p/quickstart-featuresets

for coordination, so any person that requires a new featureset file to
be added, can easily find an unused index and reserve it for their
reviews.

Also, since this is a period of transition, and there has also been some
confusion around the use of featureset files and variable distribution, I
would also ask the people who want to +2 reviews in the quickstart,
quickstart-extras and tripleo-ci projects to attend at least the weekly
TripleO CI squad meeting, every Thursday at 14, so they can stay
up-to-date on the latest changes.

Thanks.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][tripleo][heat] Projects holding back requirements updates

2017-05-26 Thread Ben Nemec



On 05/26/2017 04:22 AM, Steven Hardy wrote:

On Thu, May 25, 2017 at 03:15:13PM -0500, Ben Nemec wrote:

Tagging with tripleo and heat teams for the os-*-config projects.  I'm not
sure which owns them now, but it sounds like we need a new release.


I think they're still owned by the TripleO team, but we missed them in the
pike-1 release, I pushed this patch aiming to resolve that:

https://review.openstack.org/#/c/468292/


Thanks, Steve.  Looks like Jenkins isn't happy with it though. :-/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Status update, May 26

2017-05-26 Thread Sean McGinnis

Thanks Jimmy!


On 05/26/2017 10:15 AM, Jimmy McArthur wrote:
This was completely my fault. I inadvertently updated it on our dev 
server, but not production :\


https://www.openstack.org/foundation/tech-committee/

Should be good to go now. My apologies for the delay!!

Jimmy


Sean McGinnis 
May 26, 2017 at 10:08 AM


Hmm, maybe just not pushed up yet?

__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Davanum Srinivas 
May 26, 2017 at 10:02 AM
Fun! it was marked as fixed :)
https://bugs.launchpad.net/openstack-org/+bug/1691175



Sean McGinnis 
May 26, 2017 at 9:51 AM


We also need to refresh this page:

https://www.openstack.org/foundation/tech-committee/

I think this came up at one other point, but I don't recall if there
was a way to propose a patch to update that.

Sean

__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-05-26 Thread Sean Dague
On 05/26/2017 10:44 AM, Lance Bragstad wrote:

> Interesting - I guess the way I was thinking about it was on a per-token
> basis, since today you can't have a single token represent multiple
> scopes. Would it be unreasonable to have oslo.context build this
> information based on multiple tokens from the same user, or is that a
> bad idea?

No service consumer is interacting with Tokens. That's all been
abstracted away. The code in the consumers is interested in the context
representation.

Which is good, because then the important parts are figuring out the
right context interface to consume. And the right Keystone front end to
be explicit about what was intended by the operator "make jane an admin
on compute in region 1".

And the middle can be whatever works best on the Keystone side. As long
as the details of that aren't leaked out, it can also be refactored in
the future by having keystonemiddleware+oslo.context translate to the
known interface.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone] deprecating the policy and credential APIs

2017-05-26 Thread Lance Bragstad
At the PTG in Atlanta, we talked about deprecating the policy and
credential APIs. The policy API doesn't do anything and secrets shouldn't
be stored in the credential API. Reasoning and outcomes can be found in the
etherpad from the session [0]. There was some progress made on the policy
API [1], but it's missing a couple patches to tempest. Is anyone willing to
carry the deprecation over the finish line for Pike?

According to the outcomes from the session, the credential API needs a
little bit of work before we can deprecate it. It was determined at the PTG
that if keystone absolutely has to store ec2 and totp secrets, they
should be formal first-class attributes of the user (i.e. like how we treat
passwords `user.password`). This would require refactoring the existing
totp and ec2 implementations to use user attributes. Then we could move
forward with deprecating the actual credential API. Depending on the amount
of work required to make .totp and .ec2 formal user attributes, I'd be fine
with pushing the deprecation to Queens if needed.

Does this interest anyone?


[0] https://etherpad.openstack.org/p/pike-ptg-keystone-deprecations
[1] https://review.openstack.org/#/c/438096/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Status update, May 26

2017-05-26 Thread Jimmy McArthur
This was completely my fault. I inadvertently updated it on our dev 
server, but not production :\


https://www.openstack.org/foundation/tech-committee/

Should be good to go now. My apologies for the delay!!

Jimmy


Sean McGinnis 
May 26, 2017 at 10:08 AM


Hmm, maybe just not pushed up yet?

__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Davanum Srinivas 
May 26, 2017 at 10:02 AM
Fun! it was marked as fixed :)
https://bugs.launchpad.net/openstack-org/+bug/1691175



Sean McGinnis 
May 26, 2017 at 9:51 AM


We also need to refresh this page:

https://www.openstack.org/foundation/tech-committee/

I think this came up at one other point, but I don't recall if there
was a way to propose a patch to update that.

Sean

__ 


OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Status update, May 26

2017-05-26 Thread Sean McGinnis

On 05/26/2017 10:02 AM, Davanum Srinivas wrote:

Fun! it was marked as fixed :)
https://bugs.launchpad.net/openstack-org/+bug/1691175



Hmm, maybe just not pushed up yet?

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Status update, May 26

2017-05-26 Thread Davanum Srinivas
Fun! it was marked as fixed :)
https://bugs.launchpad.net/openstack-org/+bug/1691175

On Fri, May 26, 2017 at 10:51 AM, Sean McGinnis  wrote:
> On 05/26/2017 03:47 AM, Thierry Carrez wrote:
>>
>>
>> ttx to refresh 2017 contributor attrition stats and report back.
>>
>
> We also need to refresh this page:
>
> https://www.openstack.org/foundation/tech-committee/
>
> I think this came up at one other point, but I don't recall if there
> was a way to propose a patch to update that.
>
> Sean
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] CI Squad Meeting Summary (week 21) - Devmode OVB, RDO Cloud and config management

2017-05-26 Thread Attila Darazs
If the topics below interest you and you want to contribute to the 
discussion, feel free to join the next meeting:


Time: Thursdays, 14:30-15:30 UTC
Place: https://bluejeans.com/4113567798/

Full minutes: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting

= Periodic & Promotion OVB jobs Quickstart transition =

We had some lively technical discussions this week. Gabriele's work on 
transitioning the periodic & promotion jobs is nearly complete, only 
needs reviews at this point. We won't set a transition date for these as 
it is not really impacting folks long term if these jobs are failing for 
a few days at this point. We'll transition when everything is ready.


= RDO Cloud & Devmode OVB =

We continued planning the introduction of RDO Cloud for the upstream OVB 
jobs. We're still at the point of account setup.


The new OVB based devmode seems to be working fine. If you have access 
to RDO Cloud, and haven't tried it already, give it a go. It can set up 
a full master branch based deployment within 2 hours, including any 
pending changes baked into the under & overcloud.


When you have your account info sourced, all it takes is

$ ./devmode.sh --ovb

from your tripleo-quickstart repo! See here[1] for more info.

= Container jobs on nodepool multinode =

Gabriele is stuck with these new Quickstart jobs. We would need a deep 
dive into debugging and using the container based TripleO deployments. 
Let us know if you can do one!


= How to handle Quickstart configuration =

This is a never-ending topic, on which we managed to spend a good chunk of 
time this week as well. Where should we put various configs? Should we 
duplicate a bunch of variables or cut them into small files?


For now it seems we can agree on 3 levels of configuration:

* nodes config (i.e. how many nodes we want for the deployment)
* environment + provisioner settings (i.e. you want to run on rdocloud 
with ovb, or on a local machine with libvirt)
* featureset (a certain set of features enabled/disabled for the jobs, 
like pacemaker and ssl; see the sketch below)
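
As a purely illustrative example, a featureset file at that third level
might look something like this (the variable names are hypothetical, not
necessarily the exact ones we use today):

# hypothetical featureset sketch -- names are illustrative only;
# the file just toggles job features, nothing more
enable_pacemaker: true
overcloud_ssl: false
containerized_overcloud: false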


This seems rather straightforward until we encounter exceptions. We're 
going to figure out the edge cases and rework the current configs to 
stick to the rules.



That's it for this week. Thank you for reading the summary.

Best regards,
Attila

[1] http://docs.openstack.org/developer/tripleo-quickstart/devmode-ovb.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [upgrades][skip-level][leapfrog] - RFC - Skipping releases when upgrading

2017-05-26 Thread Dan Smith
> As most of the upgrade issues center around database migrations, we
> discussed some of the potential pitfalls at length. One approach was to
> roll-up all DB migrations into a single repository and run all upgrades
> for a given project in one step. Another was to simply have mutliple
> python virtual environments and just run in-line migrations from a
> version specific venv (this is what the OSA tooling does). Does one way
> work better than the other? Any thoughts on how this could be better?

IMHO, and speaking from a Nova perspective, I think that maintaining a
separate repo of migrations is a bad idea. We occasionally have to fix a
migration to handle a case where someone is stuck and can't move past a
certain revision due to some situation that was not originally
understood. If you have a separate copy of our migrations, you wouldn't
get those fixes. Nova hasn't compacted migrations in a while anyway, so
there's not a whole lot of value there I think.

The other thing to consider is that our _schema_ migrations often
require _data_ migrations to complete before moving on. That means you
really have to move to some milestone version of the schema, then
move/transform data, and then move to the next milestone. Since we
manage those according to releases, those are the milestones that are
most likely to be successful if you're stepping through things.

I do think that the idea of being able to generate a small utility
container (using the broad sense of the word) from each release, and
using those to step through N, N+1, N+2 to arrive at N+3 makes the most
sense.
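
To make that concrete for nova, a rough sketch of what stepping through
releases from per-release environments could look like (the venv paths and
release list are illustrative only; each step could just as well be a
utility container):

# Illustrative sketch only: one environment per release, running the schema
# syncs and then the online data migrations to completion before moving on.
for release in newton ocata pike; do
    source /opt/venvs/nova-${release}/bin/activate
    nova-manage api_db sync
    nova-manage db sync
    # Repeat until it reports no remaining migrations (looping and
    # exit-code handling omitted for brevity).
    nova-manage db online_data_migrations
    deactivate
done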

Nova has offline tooling to push our data migrations (even though the
command is intended to be runnable online). The concern I would have
would be over how to push Keystone's migrations mechanically, since I
believe they moved forward with their proposal to do data migrations in
stored procedures with triggers. Presumably there is a need for
something similar to nova's online-data-migrations command which will
trip all the triggers and provide a green light for moving on?

In the end, projects support N->N+1 today, so if you're just stepping
through actual 1-version gaps, you should be able to do as many of those
as you want and still be running "supported" transitions. There's a lot
of value in that, IMHO.

--Dan

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][L3][HA] 2 masters after reboot of node

2017-05-26 Thread Anil Venkata
Thanks Kevin, Agree with you. I will try to implement this suggestion.

On Fri, May 26, 2017 at 7:01 PM, Kevin Benton  wrote:

> Just triggering a status change should just be handled as a port update on
> the agent side which shouldn't interrupt any existing flows. So an l3 agent
> reboot should be safe in this case.
>
> On May 26, 2017 6:06 AM, "Anil Venkata"  wrote:
>
>> On Fri, May 26, 2017 at 6:14 PM, Kevin Benton  wrote:
>>
>>> Perhaps when the L3 agent starts up we can have it explicitly set the
>>> port status to DOWN for all of the HA ports on that node. Then we are
>>> guaranteed that when they go to ACTIVE it will be because the L2 agent has
>>> wired the ports.
>>>
>>>
>> Thanks Kevin. Will it create dependency of dataplane on controlplane. For
>> example, if the node is properly configured(l2 agent wired up, keepalived
>> configured, VRRP exchange happening) but user just restarted only l3 agent,
>> then with the suggestion we won't break l2 connectivity(leading to multiple
>> HA masters) by re configuring again?
>>
>> Or is there a way server can detect that node(not only agent) is down and
>> set port status?
>>
>>
>>> On Fri, May 26, 2017 at 5:27 AM, Anil Venkata 
>>> wrote:
>>>
 This is regarding https://bugs.launchpad.net/neutron/+bug/1597461
 Earlier to fix this, we added code [1] to spawn keepalived only when HA
 network port status is active.

 But, on reboot, node will get HA network port's status as ACTIVE from
 server(please see comment [2]),
 though l2 agent might not have wired[3] the port, resulting in spawning
  keepalived. Any suggestions
 how l3 agent can detect that l2 agent has not wired the port and
 then avoid spawning keepalived?

 [1] https://review.openstack.org/#/c/357458/
 [2] https://bugs.launchpad.net/neutron/+bug/1597461/comments/26
 [3] l2 agent wiring means setting up ovs flows on br-tun to make port
 usable

 Thanks
 Anilvenkata

 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.op
 enstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Status update, May 26

2017-05-26 Thread Sean McGinnis

On 05/26/2017 03:47 AM, Thierry Carrez wrote:


ttx to refresh 2017 contributor attrition stats and report back.



We also need to refresh this page:

https://www.openstack.org/foundation/tech-committee/

I think this came up at one other point, but I don't recall if there
was a way to propose a patch to update that.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-05-26 Thread Lance Bragstad
On Fri, May 26, 2017 at 9:31 AM, Sean Dague  wrote:

> On 05/26/2017 10:05 AM, Lance Bragstad wrote:
> >
> >
> > On Fri, May 26, 2017 at 5:32 AM, Sean Dague  > > wrote:
> >
> > On 05/26/2017 03:44 AM, John Garbutt wrote:
> > > +1 on not forcing Operators to transition to something new twice,
> even
> > > if we did go for option 3.
> > >
> > > Do we have an agreed non-distruptive upgrade path mapped out yet?
> (For
> > > any of the options) We spoke about fallback rules you pass but
> with a
> > > warning to give us a smoother transition. I think that's my main
> > > objection with the existing patches, having to tell all admins to
> get
> > > their token for a different project, and give them roles in that
> > > project, all before being able to upgrade.
> >
> > I definitely think the double migration is a good reason to just do
> this
> > thing right the first time.
> >
> > My biggest real concern with is_admin_project (and the service
> project),
> > is that it's not very explicit. It's mostly a way to trick the
> current
> > plumbing into acting a different way. Which is fine if you are a
> > deployer and need to create this behavior with the existing codebase
> you
> > have. Which seems to have all come down to their being a
> > misunderstanding of what Roles were back in 2012, and the two camps
> went
> > off in different directions (roles really being project scoped, and
> > roles meaning global).
> >
> > It would be really great if the inflated context we got was "roles:
> x,
> > y, z, project_roles: q, r, s" (and fully accepting keystonemiddleware
> > and oslo.context might be weaving some magic there). I honestly think
> > that until we've got a very clear separation at that level, it's
> going
> > to be really tough to get policy files in projects to be any more
> > sensible in their defaults. Leaking is_admin_project all the way
> through
> > to a service and having them have to consider it for their policy
> (which
> > we do with the context today) definitely feels like a layer
> violation.
> >
> >
> > This is another good point. If we can ensure projects rely on
> > oslo.context to get scope information in a canonical form (like
> > context.scope == 'global' or context.scope == 'project') that might make
> > consuming all this easier. But it does require us to ensure oslo.context
> > does the right thing with various token types. I included some of that
> > information in the spec [0] but I didn't go into great detail. I thought
> > about adding it to the keystone spec but wasn't sure if that would be
> > the right place for it.
> >
> > [0] https://review.openstack.org/#/c/464763
>
> Personally, as someone that has to think about consuming oslo.context, I
> really don't want
> "scope" as a context option. Because now it means role means something
> different.
>
> I want the context to say:
>
> {
>"user": "me!"
>"project": "some_fun_work",
>"project_roles": ["member"],
>"is_admin": True,
>"roles": ["admin", "auditor"],
>
> }
>
> That's something I can imagine understanding. Because context switching
> on scope and conditionally doing different things in code depending on
> that is something that's going to cause bugs. It's hard code to not get
> wrong.
>
>
Interesting - I guess the way I was thinking about it was on a per-token
basis, since today you can't have a single token represent multiple scopes.
Would it be unreasonable to have oslo.context build this information based
on multiple tokens from the same user, or is that a bad idea?


> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-05-26 Thread Sean Dague
On 05/26/2017 10:05 AM, Lance Bragstad wrote:
> 
> 
> On Fri, May 26, 2017 at 5:32 AM, Sean Dague  > wrote:
> 
> On 05/26/2017 03:44 AM, John Garbutt wrote:
> > +1 on not forcing Operators to transition to something new twice, even
> > if we did go for option 3.
> >
> > Do we have an agreed non-distruptive upgrade path mapped out yet? (For
> > any of the options) We spoke about fallback rules you pass but with a
> > warning to give us a smoother transition. I think that's my main
> > objection with the existing patches, having to tell all admins to get
> > their token for a different project, and give them roles in that
> > project, all before being able to upgrade.
> 
> I definitely think the double migration is a good reason to just do this
> thing right the first time.
> 
> My biggest real concern with is_admin_project (and the service project),
> is that it's not very explicit. It's mostly a way to trick the current
> plumbing into acting a different way. Which is fine if you are a
> deployer and need to create this behavior with the existing codebase you
> have. Which seems to have all come down to their being a
> misunderstanding of what Roles were back in 2012, and the two camps went
> off in different directions (roles really being project scoped, and
> roles meaning global).
> 
> It would be really great if the inflated context we got was "roles: x,
> y, z, project_roles: q, r, s" (and fully accepting keystonemiddleware
> and oslo.context might be weaving some magic there). I honestly think
> that until we've got a very clear separation at that level, it's going
> to be really tough to get policy files in projects to be any more
> sensible in their defaults. Leaking is_admin_project all the way through
> to a service and having them have to consider it for their policy (which
> we do with the context today) definitely feels like a layer violation.
> 
> 
> This is another good point. If we can ensure projects rely on
> oslo.context to get scope information in a canonical form (like
> context.scope == 'global' or context.scope == 'project') that might make
> consuming all this easier. But it does require us to ensure oslo.context
> does the right thing with various token types. I included some of that
> information in the spec [0] but I didn't go into great detail. I thought
> about adding it to the keystone spec but wasn't sure if that would be
> the right place for it.
> 
> [0] https://review.openstack.org/#/c/464763

Personally, as someone that has to think about consuming oslo.context, I
really don't want
"scope" as a context option. Because now it means role means something
different.

I want the context to say:

{
   "user": "me!"
   "project": "some_fun_work",
   "project_roles": ["member"],
   "is_admin": True,
   "roles": ["admin", "auditor"],
   
}

That's something I can imagine understanding. Because context switching
on scope and conditionally doing different things in code depending on
that is something that's going to cause bugs. It's hard code to not get
wrong.
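
To be clear, that's a hypothetical shape; oslo.context has no project_roles
today. A sketch of what the separation could look like (not an existing
API, just the idea):

# Hypothetical sketch only: separate globally-scoped roles from
# project-scoped roles on the request context.
from oslo_context import context

class ScopedContext(context.RequestContext):
    def __init__(self, project_roles=None, **kwargs):
        super(ScopedContext, self).__init__(**kwargs)
        # Roles held on the target project, distinct from any global
        # roles carried in self.roles.
        self.project_roles = project_roles or []

    def to_dict(self):
        d = super(ScopedContext, self).to_dict()
        d['project_roles'] = self.project_roles
        return d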

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Is the pendulum swinging on PaaS layers?

2017-05-26 Thread Zane Bitter

On 25/05/17 18:34, Matt Riedemann wrote:

On 5/22/2017 11:01 AM, Zane Bitter wrote:

If the user does a stack update that changes the network from 'auto'
to 'none', or vice-versa.


OK I guess we should make this a side discussion at some point, or hit
me up in IRC, but if you're requesting networks='none' with microversion

= 2.37 then nova should not allocate any networking, it should not

event attempt to do so.

Maybe the issue is the server is created with networks='auto' and has a
port, and then when you 'update the stack' it doesn't delete that server
and create a new one, but it tries to do something with the same server,


Yes, exactly. There are circumstances where Heat will replace a server 
because of a change in the configuration, but we want to have as few as 
possible of them and this is not one.



and in this case you'd have to detach the port(s) that were previously
created?


Yep, although this part is not that much different from what we had to 
do already when ports/networks change. The new part is handling the case 
where the user updates the network from 'none' -> 'auto'.



I don't know how Heat works, but if that's the case, then yeah that
doesn't sound fun, but I think Nova provides the APIs to be able to do
this.


Yep, it's all possible, since Nova talks to Neutron over a public API. 
Here is the implementation in Heat:


https://review.openstack.org/#/c/407328/16/heat/engine/resources/openstack/nova/server_network_mixin.py

The downside is that (in the update case) Heat has to call Neutron's 
get_auto_allocated_topology() itself rather than let Nova do it, so we 
now have some amount of duplicated logic that has to be kept in sync if 
anything ever changes in Nova/Neutron. It's definitely not the end of 
the world, but it's not entirely ideal either.
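
For reference, the duplicated call boils down to something like this (a
simplified sketch using python-neutronclient; the real Heat and Nova code
does more error handling around it):

# Sketch: ask Neutron for the project's auto-allocated topology (created
# on demand) and extract the network ID to use when wiring up the server.
from neutronclient.v2_0 import client as neutron_client

def auto_allocated_network_id(session, project_id):
    neutron = neutron_client.Client(session=session)
    topology = neutron.get_auto_allocated_topology(project_id)
    return topology['auto_allocated_topology']['id']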


cheers,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.messaging] Call to deprecate the 'pika' driver in the oslo.messaging project

2017-05-26 Thread Ken Giusti
So it's been over a week with no objections.

I will start the deprecation process, including an announcement on the
operators' email list.

Thanks for the feedback.

On Mon, May 22, 2017 at 8:06 PM, ChangBo Guo  wrote:
> +1 , let's focus on key drivers.
>
> 2017-05-17 2:02 GMT+08:00 Joshua Harlow :
>>
>> Fine with me,
>>
>> I'd personally rather get down to say 2 'great' drivers for RPC,
>>
>> And say 1 (or 2?) for notifications.
>>
>> So ya, wfm.
>>
>> -Josh
>>
>>
>> Mehdi Abaakouk wrote:
>>>
>>> +1 too, I haven't seen its contributors since a while.
>>>
>>> On Mon, May 15, 2017 at 09:42:00PM -0400, Flavio Percoco wrote:

 On 15/05/17 15:29 -0500, Ben Nemec wrote:
>
>
>
> On 05/15/2017 01:55 PM, Doug Hellmann wrote:
>>
>> Excerpts from Davanum Srinivas (dims)'s message of 2017-05-15
>> 14:27:36 -0400:
>>>
>>> On Mon, May 15, 2017 at 2:08 PM, Ken Giusti 
>>> wrote:

 Folks,

 It was decided at the oslo.messaging forum at summit that the pika
 driver will be marked as deprecated [1] for removal.
>>>
>>>
>>> [dims} +1 from me.
>>
>>
>> +1
>
>
> Also +1


 +1

 Flavio

 --
 @flaper87
 Flavio Percoco
>>>
>>>
>>>
>>>

 __

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>
> --
> ChangBo Guo(gcb)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Ken Giusti  (kgiu...@gmail.com)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][tc][infra][security][stable] Proposal for shipping binaries and containers

2017-05-26 Thread Rob C
I've been out on vacation but as a circle back to normal (working!) life
I've found this thread very interesting.

I share the concerns raised about the level of resource required to back
this. I don't speak for the VMT but I agree with Jeremy that it should be
possible to provide VMT support to Kolla and their code base without
extending to the external libraries in their contain images. However, I'm
not sure that the limits of VMT coverage would be acceptable to downstream
stakeholders, for the reasons Doug has highlighted above.

Perhaps it would be useful to have a spec around this, so we could collaborate
in a more structured way?

-Rob


On Wed, May 24, 2017 at 3:54 PM, Jeremy Stanley  wrote:

> On 2017-05-24 14:22:14 +0200 (+0200), Thierry Carrez wrote:
> [...]
> > we ship JARs already:
> > http://tarballs.openstack.org/ci/monasca-common/
> [...]
>
> Worth pointing out, those all have "SNAPSHOT" in their filenames
> which by Apache Maven convention indicates they're not official
> releases. Also they're only being hosted from our
> tarballs.openstack.org site, not published to the Maven Central
> Repository (the equivalent of DockerHub in this analogy).
>
> > That said, only a small fraction of our current OpenStack deliverables
> > are supported by the VMT and therefore properly security-maintained "by
> > the community" with strong guarantees and processes. So I don't see
> > adding such binary deliverables (maintained by their respective teams)
> > as a complete revolution. I'd expect the VMT to require a lot more
> > staffing (like dedicated people to track those deliverables content)
> > before they would consider those security-supported.
>
> The Kolla team _has_ expressed interest in attaining
> vulnerability:managed for at least some of their deliverables in the
> future, but exactly what that would look like from a coverage
> standpoint has yet to be ironed out. I don't expect we would
> actually cover general vulnerabilities present in any container
> images, and would only focus on direct vulnerabilities in the Kolla
> source repositories instead. Rather than extending the VMT to track
> vulnerable third-party software present in images, it's more likely
> the Kolla team would form their own notifications subgroup to track
> and communicate such risks downstream.
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] priorities for the coming week (05/26-06/01)

2017-05-26 Thread Brian Rosmaita
One new patch added for the Glanceclient release:
https://review.openstack.org/#/c/467592/

On Thu, May 25, 2017 at 12:23 PM, Brian Rosmaita
 wrote:
> As discussed at today's Glance meeting, here are the priorities for this week:
>
> 0  Glanceclient release
> We plan to cut a release of the python-glanceclient on Wednesday 31
> May.  There are two patches outstanding that would be good to have in
> the release.  They need to be merged by 12:00 UTC on Tuesday 30 May.
> Please give them your attention:
> - https://review.openstack.org/444104
> - https://review.openstack.org/445318
and https://review.openstack.org/#/c/467592/
>
> 1  WSGI community goal
> Matt Treinish has patches up implementing the "control plane API
> endpoints deployment via WSGI" community goal.  Let's get them
> reviewed and merged this week so they'll be in the P-2 milestone
> release:
> - 
> https://review.openstack.org/#/q/status:open+project:openstack/glance+branch:master+topic:goal-deploy-api-in-wsgi
> - https://review.openstack.org/#/c/459451/
>
> 2  Image import refactor
> Erno has two patches stalled if anyone has time to help troubleshoot:
> - https://review.openstack.org/#/c/391442/
> - https://review.openstack.org/#/c/443632/
>
> Other items:
> I'm proposing reinstating Mike Fedosin as a Glance core:
> - http://lists.openstack.org/pipermail/openstack-dev/2017-May/117489.html
> Please reply to the above message if you have comments or concerns.
>
>
> Have a productive week, and happy towel day!
> brian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [telemetry] gnocchi v4 preview

2017-05-26 Thread gordon chung
hi,

since i've been referencing this to a few people already, i've done some 
basic benchmarking on the upcoming gnocchi release which should be 
released in the next few weeks. here is a short deck highlighting a few updates:

https://www.slideshare.net/GordonChung/gnocchi-v4-preview

if you have time to help test it[1] out, please feel free and give us 
some feedback. instructions to install from source are online[2].

if you have any questions, feel free to ping on #gnocchi or 
#openstack-telemetry in Freenode or on ML.

[1] https://github.com/gnocchixyz/gnocchi
[2] http://gnocchi.xyz/install.html#id1

cheers,

-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-05-26 Thread Lance Bragstad
On Fri, May 26, 2017 at 5:32 AM, Sean Dague  wrote:

> On 05/26/2017 03:44 AM, John Garbutt wrote:
> > +1 on not forcing Operators to transition to something new twice, even
> > if we did go for option 3.
> >
> > Do we have an agreed non-distruptive upgrade path mapped out yet? (For
> > any of the options) We spoke about fallback rules you pass but with a
> > warning to give us a smoother transition. I think that's my main
> > objection with the existing patches, having to tell all admins to get
> > their token for a different project, and give them roles in that
> > project, all before being able to upgrade.
>
> I definitely think the double migration is a good reason to just do this
> thing right the first time.
>
> My biggest real concern with is_admin_project (and the service project),
> is that it's not very explicit. It's mostly a way to trick the current
> plumbing into acting a different way. Which is fine if you are a
> deployer and need to create this behavior with the existing codebase you
> have. Which seems to have all come down to their being a
> misunderstanding of what Roles were back in 2012, and the two camps went
> off in different directions (roles really being project scoped, and
> roles meaning global).
>
> It would be really great if the inflated context we got was "roles: x,
> y, z, project_roles: q, r, s" (and fully accepting keystonemiddleware
> and oslo.context might be weaving some magic there). I honestly think
> that until we've got a very clear separation at that level, it's going
> to be really tough to get policy files in projects to be any more
> sensible in their defaults. Leaking is_admin_project all the way through
> to a service and having them have to consider it for their policy (which
> we do with the context today) definitely feels like a layer violation.
>

This is another good point. If we can ensure projects rely on oslo.context
to get scope information in a canonical form (like context.scope ==
'global' or context.scope == 'project') that might make consuming all this
easier. But it does require us to ensure oslo.context does the right thing
with various token types. I included some of that information in the spec
[0] but I didn't go into great detail. I thought about adding it to the
keystone spec but wasn't sure if that would be the right place for it.

[0] https://review.openstack.org/#/c/464763


>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][L3][HA] 2 masters after reboot of node

2017-05-26 Thread Kevin Benton
Just triggering a status change should just be handled as a port update on
the agent side which shouldn't interrupt any existing flows. So an l3 agent
reboot should be safe in this case.
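
To sketch the idea (both calls below are hypothetical stand-ins, not the
actual L3 agent or RPC interfaces):

# Sketch only: on l3-agent startup, explicitly report this host's HA ports
# as DOWN, and only spawn keepalived once a port has gone back to ACTIVE,
# i.e. once the L2 agent has actually wired it. "plugin_rpc" stands in for
# whatever RPC client the agent uses.

def reset_ha_port_status(plugin_rpc, context, host):
    for port in plugin_rpc.get_ha_ports(context, host=host):        # hypothetical call
        plugin_rpc.update_port_status(context, port['id'], 'DOWN')  # hypothetical call

def spawn_keepalived_if_wired(plugin_rpc, context, port_id, spawn_keepalived):
    port = plugin_rpc.get_port(context, port_id)                    # hypothetical call
    if port['status'] == 'ACTIVE':
        spawn_keepalived()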

On May 26, 2017 6:06 AM, "Anil Venkata"  wrote:

> On Fri, May 26, 2017 at 6:14 PM, Kevin Benton  wrote:
>
>> Perhaps when the L3 agent starts up we can have it explicitly set the
>> port status to DOWN for all of the HA ports on that node. Then we are
>> guaranteed that when they go to ACTIVE it will be because the L2 agent has
>> wired the ports.
>>
>>
> Thanks Kevin. Will it create dependency of dataplane on controlplane. For
> example, if the node is properly configured(l2 agent wired up, keepalived
> configured, VRRP exchange happening) but user just restarted only l3 agent,
> then with the suggestion we won't break l2 connectivity(leading to multiple
> HA masters) by re configuring again?
>
> Or is there a way server can detect that node(not only agent) is down and
> set port status?
>
>
>> On Fri, May 26, 2017 at 5:27 AM, Anil Venkata 
>> wrote:
>>
>>> This is regarding https://bugs.launchpad.net/neutron/+bug/1597461
>>> Earlier to fix this, we added code [1] to spawn keepalived only when HA
>>> network port status is active.
>>>
>>> But, on reboot, node will get HA network port's status as ACTIVE from
>>> server(please see comment [2]),
>>> though l2 agent might not have wired[3] the port, resulting in spawning
>>>  keepalived. Any suggestions
>>> how l3 agent can detect that l2 agent has not wired the port and
>>> then avoid spawning keepalived?
>>>
>>> [1] https://review.openstack.org/#/c/357458/
>>> [2] https://bugs.launchpad.net/neutron/+bug/1597461/comments/26
>>> [3] l2 agent wiring means setting up ovs flows on br-tun to make port
>>> usable
>>>
>>> Thanks
>>> Anilvenkata
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

2017-05-26 Thread John Griffith
On Thu, May 25, 2017 at 10:01 PM, zengchen  wrote:

>
> Hi John:
> I have seen your updates on the bp. I agree with your plan on how to
> develop the code.
> However, there is one issue I have to point out: at present, Fuxi can
> provide not only Cinder volumes to Docker, but also Manila file shares.
> So, do you plan to include the Manila part of the code
> in the new Fuxi-golang?
>
Agreed, that's a really good and important point.  Yes, I believe Ben
Swartzlander is interested; we can check with him and make sure, but I
certainly hope that Manila would be interested.

> Besides, IMO, it is better to create a repository for Fuxi-golang, because
> Fuxi is an OpenStack project.
>
Yeah, that seems fine; I just didn't know if there needed to be any more
conversation with other folks on any of this before charging ahead on new
repos etc.  Doesn't matter much to me though.


>
>Thanks very much!
>
> Best Wishes!
> zengchen
>
>
>
>
> At 2017-05-25 22:47:29, "John Griffith"  wrote:
>
>
>
> On Thu, May 25, 2017 at 5:50 AM, zengchen  wrote:
>
>> Very sorry, I forgot to attach the link for the bp of rewriting Fuxi in Go.
>> https://blueprints.launchpad.net/fuxi/+spec/convert-to-golang
>>
>>
>> At 2017-05-25 19:46:54, "zengchen"  wrote:
>>
>> Hi guys:
>> hongbin has committed a bp for rewriting Fuxi in Go [1]. My
>> question is where to commit the code for it.
>> We have two choices: 1. create a new repository, 2. create a new branch.
>> IMO, the first one is much better, because
>> there are many differences in the infrastructure layer, such as CI.
>> What's your opinion? Thanks very much
>>
>> Best Wishes
>> zengchen
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> Hi Zengchen,
>
> For now I was thinking just use Github and PR's outside of the OpenStack
> projects to bootstrap things and see how far we can get.  I'll update the
> BP this morning with what I believe to be the key tasks to work through.
>
> Thanks,
> John
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [fuxi][kuryr] Where to commit codes for Fuxi-golang

2017-05-26 Thread Zhipeng Huang
Ya, we should combine all the related efforts :) #openstack-cinder would be a
great place to chat; shall we set a time for the meeting?

On Fri, May 26, 2017 at 11:40 AM, John Griffith 
wrote:

>
>
> On Thu, May 25, 2017 at 6:25 PM, Zhipeng Huang 
> wrote:
>
>> Hi John and Zeng,
>>
>> The OpenSDS community already developed a golang client for the
>> os-brick[1], I think we could host the new golang os-brick code there as a
>> new repo and after things settled port the code back to OpenStack
>>
>> [1]https://github.com/opensds/opensds/blob/master/pkg/dock/p
>> lugins/connector/connector.go
>>
>> On Thu, May 25, 2017 at 10:47 PM, John Griffith > > wrote:
>>
>>>
>>>
>>> On Thu, May 25, 2017 at 5:50 AM, zengchen  wrote:
>>>
 Very sorry, I forgot to attach the link for the bp of rewriting Fuxi in Go.
 https://blueprints.launchpad.net/fuxi/+spec/convert-to-golang


 At 2017-05-25 19:46:54, "zengchen"  wrote:

 Hi guys:
 hongbin had committed a bp of rewriting Fuxi with go language[1].
 My question is where to commit codes for it.
 We have two choice, 1. create a new repository, 2. create a new
 branch.  IMO, the first one is much better. Because
 there are many differences in the layer of infrastructure, such as CI.
 What's your opinion? Thanks very much

 Best Wishes
 zengchen


 
 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.op
 enstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

 ​Hi Zengchen,
>>>
>>> For now I was thinking just use Github and PR's outside of the OpenStack
>>> projects to bootstrap things and see how far we can get.  I'll update the
>>> BP this morning with what I believe to be the key tasks to work through.
>>>
>>> Thanks,
>>> John​
>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>
>>
>> --
>> Zhipeng (Howard) Huang
>>
>> Standard Engineer
>> IT Standard & Patent/IT Product Line
>> Huawei Technologies Co,. Ltd
>> Email: huangzhip...@huawei.com
>> Office: Huawei Industrial Base, Longgang, Shenzhen
>>
>> (Previous)
>> Research Assistant
>> Mobile Ad-Hoc Network Lab, Calit2
>> University of California, Irvine
>> Email: zhipe...@uci.edu
>> Office: Calit2 Building Room 2402
>>
>> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> ​I like the idea of the service version of brick, and then the golang
> bindings.  There's a lot of good investment already in os-brick that it
> would be great to leverage.  Walt and Ivan mentioned that they had a POC
> for this a while back, might be worth considering taking the fork
> referenced in the BP and submitting that upstream for the community?
>  #openstack-cinder IRC channel would be a great place to sync up on these
> aspects in real time if folks would like.
>
> Thanks,
> John​
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipe...@uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Is the pendulum swinging on PaaS layers?

2017-05-26 Thread Jay Pipes

On 05/26/2017 02:53 AM, Chris Friesen wrote:

On 05/19/2017 04:06 PM, Dean Troyer wrote:
On Fri, May 19, 2017 at 4:01 PM, Matt Riedemann  wrote:
I'm confused by this. Creating a server takes a volume ID if you're booting
from volume, and that's actually preferred (by nova devs) since then Nova
doesn't have to orchestrate the creation of the volume in the compute
service and then poll until it's available.

Same for ports - nova can create the port (default action) or get a port at
server creation time, which is required if you're doing trunk ports or
sr-iov / fancy pants ports.

Am I misunderstanding what you're saying is missing?


It turns out those are bad examples, they do accept IDs.


I was actually suggesting that maybe these commands in nova should 
*only* take IDs, and that nova itself should not set up either block 
storage or networking for you.


It seems non-intuitive to me that nova will do some basic stuff for you, 
but if you want something more complicated then you need to go do it a 
totally different way.


It seems to me that it'd be more logical if we always set up 
volumes/ports first, then passed the resulting UUIDs to nova.  This 
could maybe be hidden from the end-user by doing it in the client or 
some intermediate layer, but arguably nova proper shouldn't be in the 
proxying business.


You are describing the porcelain API that we've been talking about. :)

Viva enamel!

-jay
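
For what it's worth, the "create the pieces first, pass only IDs" flow is
already expressible with today's client, roughly like this (the network,
image, flavor names and sizes are placeholders):

# Illustrative only: pre-create the port and the boot volume, then hand
# nova nothing but IDs at server-create time.
PORT_ID=$(openstack port create --network private demo-port -f value -c id)
VOL_ID=$(openstack volume create --image cirros --size 10 demo-vol -f value -c id)
openstack server create --flavor m1.small \
    --volume "$VOL_ID" \
    --nic port-id="$PORT_ID" \
    demo-server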

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] placement/resource providers update 24

2017-05-26 Thread Chris Dent


Once more with a placement and resource providers update. This is
number 24.

# What Matters Most

Claims against the placement API remain the highest priority. There's
plenty of other work in progress too which needs to advance. Lots of
links within.

# What's Changed

This week there were a few different discussions that eventually led
us to a hybrid solution for how claims are going to be managed:

* we'll make an initial claim in nova-scheduler (so that the claim
  happens as early as possible in the process)
* the scheduler will pass alternate hosts that are in the same cell as
  the primary hosts that it returns
* this is so conductors can retry an already chosen alternate
  host and change allocations if there is a last-minute failure to
  place a server on a host

There was a long thread on the operators list about some of this
stuff:


http://lists.openstack.org/pipermail/openstack-operators/2017-May/013464.html

http://lists.openstack.org/pipermail/openstack-operators/2017-May/013529.html

Some of the details of this remain to be worked out, but the code is
in progress starting at:

https://review.openstack.org/#/c/465171/

Do not be surprised if there are further changes to this model as we
discover more issues.

# Help Wanted

(This section _has_ changed since last time, removing some stuff that's
gone stale for now.)

Areas where volunteers are needed.

* General attention to bugs tagged placement:
  https://bugs.launchpad.net/nova/+bugs?field.tag=placement

  There are some new bugs that are relatively low hanging fruit:

  traits link in all microversions:
  https://bugs.launchpad.net/nova/+bug/1693353

  bad error message when failing to create existing resource
  provider:
  https://bugs.launchpad.net/nova/+bug/1693349

  placement install process not documented:
  https://bugs.launchpad.net/nova/+bug/1692375

* Helping to create api documentation for placement (see the Docs
  section below).

# Main Themes

## Claims in the Scheduler

Work has started on placement-claims blueprint:

 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/placement-claims

There are two tracks of work, but the one that is currently the
leading contender starts at

https://review.openstack.org/#/c/465171/

More info above in "What's Changed".

## Traits

The main API is in place. Debate raged on how best to manage updates
of standard os-traits. In the end a cache similar to the one used
for resource classes was created:

  https://review.openstack.org/#/c/462769/

Work will be required at some point on filtering resource providers
based on traits, and adding traits to resource providers from the
resource tracker. There's been some discussion on that in the
reviews of shared providers (below) because it will be a part of
the same mass (MASS!) of SQL.

## Shared Resource Providers

(There's been some progress on this track, but the links and
messages here remain the same.)

This work (when finished) makes it possible (amongst other things)
for use of shared disk resources to be tracked correctly.

 https://review.openstack.org/#/q/status:open+topic:bp/shared-resources-pike

Reviewers should be aware that the patches, at least as of today,
are structured in a way that evolves from the current state to the
eventual desired state in a way that duplicates some effort and
code. This was done intentionally by Jay to make the testing and
review more incremental. It's probably best to read through the
entire stack before jumping to any conclusions.

## Docs

https://review.openstack.org/#/q/topic:cd/placement-api-ref

and

https://review.openstack.org/#/q/status:open+topic:placement-api-ref-add-resource-classes-put

Large number of reviews are in progress for documenting the placement API.
It's great to see the progress and when looking at the draft rendered docs
it makes placement feel like a real thing™.

Find me (cdent) or Andrey (avolkov) if you want to help out or have
other questions.

## Nested Resource Providers

(This section has not changed since last week)

On hold while attention is given to traits and claims. There's a
stack of code waiting until all of that settles:


https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/nested-resource-providers

## Ironic/Custom Resource Classes

(This section has not changed since last week)

There's a blueprint for "custom resource classes in flavors" that
describes the stuff that will actually make use of custom resource
classes:


https://blueprints.launchpad.net/nova/+spec/custom-resource-classes-in-flavors

The spec has merged, but the implementation has not yet started.

Over in Ironic some functional and integration tests have started:

https://review.openstack.org/#/c/443628/

# Other Code/Specs

* https://review.openstack.org/#/q/project:openstack/osc-placement
   Work has started on an osc-plugin that can provide a command line
   interface to the placement API.

Re: [openstack-dev] [neutron][L3][HA] 2 masters after reboot of node

2017-05-26 Thread Anil Venkata
On Fri, May 26, 2017 at 6:14 PM, Kevin Benton  wrote:

> Perhaps when the L3 agent starts up we can have it explicitly set the port
> status to DOWN for all of the HA ports on that node. Then we are guaranteed
> that when they go to ACTIVE it will be because the L2 agent has wired the
> ports.
>
>
Thanks Kevin. Will that create a dependency of the data plane on the control
plane? For example, if the node is properly configured (l2 agent wired up,
keepalived configured, VRRP exchange happening) but the user restarts only the
l3 agent, can we be sure the suggestion won't break L2 connectivity (leading to
multiple HA masters) by reconfiguring the ports again?

Or is there a way the server can detect that the node (not only the agent) is
down and set the port status accordingly?


> On Fri, May 26, 2017 at 5:27 AM, Anil Venkata 
> wrote:
>
>> This is regarding https://bugs.launchpad.net/neutron/+bug/1597461
>> Earlier to fix this, we added code [1] to spawn keepalived only when HA
>> network port status is active.
>>
>> But, on reboot, node will get HA network port's status as ACTIVE from
>> server(please see comment [2]),
>> though l2 agent might not have wired[3] the port, resulting in spawning
>>  keepalived. Any suggestions
>> how l3 agent can detect that l2 agent has not wired the port and
>> then avoid spawning keepalived?
>>
>> [1] https://review.openstack.org/#/c/357458/
>> [2] https://bugs.launchpad.net/neutron/+bug/1597461/comments/26
>> [3] l2 agent wiring means setting up ovs flows on br-tun to make port
>> usable
>>
>> Thanks
>> Anilvenkata
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [publiccloud-wg] Summary meeting 24th May

2017-05-26 Thread Tobias Rydberg

Hi group,

Wednesday we had a meeting with the Public Cloud Working Group, in this 
email I will give you a quick summary of that meeting, to keep everyone 
informed and hopefully keep up the interest that we all felt in Boston! =)


#start meeting-summary publiccloud-wg

1. Upstream work
A lot of discussions in Boston about "upstream friday" or such, would be 
awesome to see more upstream work from public clouds, even though a lot 
of them are small companies. A few have already started this, and more 
will come.


2. Missing features
Our list of "missing features" got a lot of attention in Boston. We will 
continue to work on this list; zhipeng will go over it during the 
upcoming weeks to identify things that are done, planned for an upcoming 
release, and so on. We encourage you all to review that list, both to 
help out with correct tagging and wording, and feel free to add more 
to the list as well. This list will be used for prioritization for the 
current cycle, so please do it right away.


( https://docs.google.com/spreadsheets/d/1Mf8OAyTzZxCKzYHMgBl-QK_2-XSycSkOjqCyMTIedkA )


3. #openstack-publiccloud
To get the discussions going between meetings, and to encourage people 
who are not in the meetings to join in, we would like to promote the IRC 
channel #openstack-publiccloud. We will all be there, and it will become 
even more important when it comes to implementing the missing features, 
so we can help each other as much as possible.


4. Goals for Sydney
*Important:* At the next meeting on 7th June we will set the goals for the 
current cycle. We would like to ask you all to give it some thought and 
either give us your input here on the mailing list or in the IRC channel 
before then; best of all, participate in the meeting as well.


Some ideas already exist; implementation of missing features and a public 
cloud meetup are among them.


#end meeting-summary

Talk to you all soon!

Best regards,
PublicCloudWG via tobberydberg

--
Tobias Rydberg
tob...@citynetwork.se


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][L3][HA] 2 masters after reboot of node

2017-05-26 Thread Kevin Benton
Perhaps when the L3 agent starts up we can have it explicitly set the port
status to DOWN for all of the HA ports on that node. Then we are guaranteed
that when they go to ACTIVE it will be because the L2 agent has wired the
ports.
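
To make the ordering concrete, a purely hypothetical sketch of that start-up
step is below; none of these helper or RPC names exist in the L3 agent today,
they only illustrate the "force DOWN first, trust a later ACTIVE" sequence.

    # Hypothetical sketch only -- get_ha_ports()/update_port_status() as used
    # here are not existing L3 agent RPCs; they just illustrate the proposed
    # ordering guarantee.
    def _reset_ha_port_status_on_start(self):
        """On L3 agent start, force HA ports DOWN before processing routers."""
        for port in self.plugin_rpc.get_ha_ports(self.context, host=self.host):
            # After this, the port can only become ACTIVE again once the L2
            # agent on this node has actually (re)wired it.
            self.plugin_rpc.update_port_status(self.context, port['id'],
                                               status='DOWN')
        # keepalived is then only spawned for a router once its HA port has
        # transitioned back to ACTIVE, i.e. after L2 wiring is complete.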

On Fri, May 26, 2017 at 5:27 AM, Anil Venkata 
wrote:

> This is regarding https://bugs.launchpad.net/neutron/+bug/1597461
> Earlier to fix this, we added code [1] to spawn keepalived only when HA
> network port status is active.
>
> But, on reboot, node will get HA network port's status as ACTIVE from
> server(please see comment [2]),
> though l2 agent might not have wired[3] the port, resulting in spawning
>  keepalived. Any suggestions
> how l3 agent can detect that l2 agent has not wired the port and
> then avoid spawning keepalived?
>
> [1] https://review.openstack.org/#/c/357458/
> [2] https://bugs.launchpad.net/neutron/+bug/1597461/comments/26
> [3] l2 agent wiring means setting up ovs flows on br-tun to make port
> usable
>
> Thanks
> Anilvenkata
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-dev] [Designate]

2017-05-26 Thread Andrea Frittoli
On Thu, May 25, 2017 at 5:27 PM Carmine Annunziata <
carmine.annunziat...@gmail.com> wrote:

> Hi everyone,
> I integrated Designate in Openstack by devstack, now i'm using the new
> default commands like zone create/delete, etc. I got this problem:
>
>
> +--------------------------------------+---------------+---------+------------+--------+--------+
> | id                                   | name          | type    | serial     | status | action |
> +--------------------------------------+---------------+---------+------------+--------+--------+
> | fd80ab02-c8d7-4b06-9a4a-6026c3ca7a1c | example1.net. | PRIMARY | 1495725108 | ERROR  | DELETE |
> | bfd19d7d-b259-4f88-afe0-b56d21d09ebe | example2.net. | PRIMARY | 1495726327 | ERROR  | DELETE |
> | a33bf1fc-0112-48de-8f43-51c34845f011 | example1.com. | PRIMARY | 1495727006 | ERROR  | CREATE |
> +--------------------------------------+---------------+---------+------------+--------+--------+
>

Hello Carmine,

I'm not an expert in Designate, but I recommend you have a look at the
designate logs to see why your zone creation fails.
If you don't find your answer there, you could post the relevant CLI commands,
log fragments and configuration to a publicly available place (e.g. [1])
and ask on the ML again.

andrea

[1] https://paste.openstack.org

>
> Same on dashboard.
>
> Cheers,
> --
> Carmine
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][L3][HA] 2 masters after reboot of node

2017-05-26 Thread Anil Venkata
This is regarding https://bugs.launchpad.net/neutron/+bug/1597461
Earlier, to fix this, we added code [1] to spawn keepalived only when the HA
network port status is active.

But, on reboot, the node will get the HA network port's status as ACTIVE from
the server (please see comment [2]), even though the l2 agent might not have
wired[3] the port yet, resulting in keepalived being spawned. Any suggestions
on how the l3 agent can detect that the l2 agent has not wired the port, and
then avoid spawning keepalived?

[1] https://review.openstack.org/#/c/357458/
[2] https://bugs.launchpad.net/neutron/+bug/1597461/comments/26
[3] l2 agent wiring means setting up ovs flows on br-tun to make port usable

Thanks
Anilvenkata
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Newton 14.0.7 and Ocata 15.0.5 releases

2017-05-26 Thread Matt Riedemann
This is a follow up to an email from Melanie Witt [1] calling attention 
to a high severity performance regression identified in Newton. That 
change is merged and the fix will be in the Ocata 15.0.5 release [2] and 
Newton 14.0.7 release [3].


Those releases will also contain a fix for a bug where we didn't 
properly handle special characters in the database connection URL when 
running the simple_cell_setup or map_cell0 commands, which are used when 
setting up cells v2 (optional in Newton, required in Ocata).


Finally, the Newton release is also going to include a fix to generate 
the cell0 database connection URL to use the 'nova_cell0' database name 
rather than 'nova_api_cell0' if you allow Nova to generate the name 
(note that Nova doesn't create the database, that's your job). Again, 
setting up cells v2 is optional in Newton but people have been getting 
started with it there and some people have hit this. This fix doesn't 
help anyone who has already upgraded, but it is there for people who 
haven't done it yet (which I'm assuming is the majority).


Details are in the release notes for both:

https://docs.openstack.org/releasenotes/nova/newton.html

https://docs.openstack.org/releasenotes/nova/ocata.html

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117132.html
[2] https://review.openstack.org/#/c/468388/
[3] https://review.openstack.org/#/c/468387/

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mistral][freezer] adopting oslo.context for logging debugging and tracing

2017-05-26 Thread Saad Zaher
Hi Doug,

Thanks for your review. Actually freezer has a separate repo for the api,
it can be found here [1]. Freezer is using oslo.context since newton. If
you have the time you can take a look at it and let us know if you have any
comments.

Thanks for your help

[1] https://github.com/openstack/freezer-api

Best Regards,
Saad!

On Fri, May 26, 2017 at 5:45 AM, Renat Akhmerov 
wrote:

> Thanks Doug. We’ll look into this.
>
> @Tuan, is there any roadblocks with the patch you’re working on? [1]
>
> [1] https://review.openstack.org/#/c/455407/
>
>
> Renat
>
> On 26 May 2017, 01:54 +0700, Doug Hellmann , wrote:
>
> The new work to add the exception information and request ID tracing
> depends on using both oslo.context and oslo.log to have all of the
> relevant pieces of information available as log messages are emitted.
>
> In the course of reviewing the "done" status for those initiatives,
> I noticed that although mistral and freezer are using oslo.log,
> neither uses oslo.context. That means neither project will get the
> extra debugging information, and neither project will see the global
> request ID in logs.
>
> I started looking at updating mistral's context to use oslo.context
> as a base class, but ran into some issues because of some extensions
> made to the existing class. I wasn't able to find where freezer is
> doing anything at all with an API request context.
>
> I'm available to help, if someone else wants to pick up the work.
>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
--
Best Regards,
Saad!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [nova] [glance] [neutron] [cinder] Global Request ID progress - python-cinderclient the first across the line!

2017-05-26 Thread Sean Dague
Major Updates:

The Cinder team was really responsive with reviews yesterday and getting
out a new python-cinderclient, so cinder is now the first project fully
plumbed to accept global_request_id. Woot!

On the Nova team we managed to resolve some long-standing issues with
the way we consume oslo.context that meant Nova's tests were breaking
any time a new attribute was added. That unblocked getting the
new version of oslo.context into upper-constraints.
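
For those following along at the code level, the end-to-end idea being plumbed
is roughly the sketch below. The global_request_id keyword arguments on
oslo.context and python-cinderclient are the new pieces this work adds; treat
the exact signatures as assumptions and check the reviews above.

    # Sketch of the flow: the request id received by one service is passed to
    # another service's client as the *global* request id, so the same req-...
    # value can be traced across both services' logs. Auth values are dummies.
    from cinderclient import client as cinder_client
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from oslo_context import context

    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='nova', password='secret', project_name='service',
                       user_domain_name='Default', project_domain_name='Default')
    sess = session.Session(auth=auth)

    # The local request id this service generated for the inbound API call.
    inbound_request_id = 'req-deadbeef-0000-4000-8000-000000000000'

    # oslo.context grew a global_request_id attribute for this effort.
    ctxt = context.RequestContext(global_request_id=inbound_request_id)

    # python-cinderclient can now be handed that id so it is sent along on
    # every call it makes, tying cinder's logs back to the original request.
    cinder = cinder_client.Client('3', session=sess,
                                  global_request_id=ctxt.global_request_id)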

What Remains:

For those following along at home, here is the work as I'm tracking it
in org-mode.


** oslo.context (DONE)
*** Changes Landed - https://review.openstack.org/#/c/464326/,
https://review.openstack.org/#/c/467584/
*** Library Released
*** DONE Included in upper-constraints - blocked by Nova unit tests -
https://review.openstack.org/#/c/467812/
CLOSED: [2017-05-25 Thu 14:38]

** oslo.middleware (DONE)
*** Changes Landed - https://review.openstack.org/#/c/467135/
*** Library Released
*** DONE Library in upper-constraints -
https://review.openstack.org/#/c/467813/
CLOSED: [2017-05-25 Thu 13:26]

** python-glanceclient
*** TODO land global_request_id change -
https://review.openstack.org/#/c/467592/
*** TODO release library
*** TODO get library version in upper-constraints

** python-neutronclient
*** TODO land global_request_id change -
https://review.openstack.org/#/c/467243/
*** TODO release library
*** TODO get library version in upper-constraints

** python-cinderclient (DONE)
*** DONE land global_request_id change -
https://review.openstack.org/#/c/467642/
CLOSED: [2017-05-25 Thu 13:18]
*** DONE release library - https://review.openstack.org/468109
CLOSED: [2017-05-26 Fri 06:39]
*** DONE get library version in upper-constraints -
https://review.openstack.org/#/c/468145/
CLOSED: [2017-05-26 Fri 06:43]

** python-novaclient
*** TODO land global_request_id change - https://review.openstack.org/468126
*** TODO release library
*** TODO get library version in upper-constraints

** devstack
*** TODO add logging for global_id -
https://review.openstack.org/#/c/467372/

** nova
*** DONE unblock updated oslo.context -
https://review.openstack.org/#/c/467995/
CLOSED: [2017-05-25 Thu 14:38]
*** DONE spec for oslo.middleware usage -
https://review.openstack.org/#/c/468066/
CLOSED: [2017-05-26 Fri 06:39]
*** TODO use oslo.middleware for request_id -
https://review.openstack.org/#/c/467998/
*** TODO Add setting global_id for cinderclient -
https://review.openstack.org/#/c/468355/
*** TODO Add setting global_id for all client calls -
https://review.openstack.org/#/c/467242/

** neutron
*** TODO Add setting global_id for all client calls

** cinder
*** TODO Add setting global_id for all client calls

** glance
*** TODO use oslo.middleware for request_id
*** TODO Add setting global_id for all client calls


-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-05-26 Thread Sean Dague
On 05/26/2017 03:44 AM, John Garbutt wrote:
> +1 on not forcing Operators to transition to something new twice, even
> if we did go for option 3.
> 
> Do we have an agreed non-distruptive upgrade path mapped out yet? (For
> any of the options) We spoke about fallback rules you pass but with a
> warning to give us a smoother transition. I think that's my main
> objection with the existing patches, having to tell all admins to get
> their token for a different project, and give them roles in that
> project, all before being able to upgrade.

I definitely think the double migration is a good reason to just do this
thing right the first time.

My biggest real concern with is_admin_project (and the service project)
is that it's not very explicit. It's mostly a way to trick the current
plumbing into acting a different way, which is fine if you are a
deployer and need to create this behavior with the existing codebase you
have. It all seems to have come down to there being a misunderstanding
of what Roles were back in 2012, and the two camps went off in different
directions (roles really being project scoped, versus roles meaning
global).

It would be really great if the inflated context we got was "roles: x,
y, z, project_roles: q, r, s" (and fully accepting keystonemiddleware
and oslo.context might be weaving some magic there). I honestly think
that until we've got a very clear separation at that level, it's going
to be really tough to get policy files in projects to be any more
sensible in their defaults. Leaking is_admin_project all the way through
to a service and having them have to consider it for their policy (which
we do with the context today) definitely feels like a layer violation.
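
Purely as an illustration of the separation being described, the shape would
be something like the snippet below; 'project_roles' is not an existing
oslo.context field, just the idea.

    # Illustration only: a context payload that keeps globally-assigned roles
    # separate from roles scoped to the project in the token. Neither
    # 'project_roles' nor this exact layout exists in oslo.context today.
    inflated_context = {
        'user_id': 'abc123',
        'project_id': 'def456',
        'roles': ['x', 'y', 'z'],            # global roles
        'project_roles': ['q', 'r', 's'],    # roles scoped to project def456
    }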

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements][tripleo][heat] Projects holding back requirements updates

2017-05-26 Thread Steven Hardy
On Thu, May 25, 2017 at 03:15:13PM -0500, Ben Nemec wrote:
> Tagging with tripleo and heat teams for the os-*-config projects.  I'm not
> sure which owns them now, but it sounds like we need a new release.

I think they're still owned by the TripleO team, but we missed them in the
pike-1 release, I pushed this patch aiming to resolve that:

https://review.openstack.org/#/c/468292/

Steve

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [diskimage-builder] Restarting bi-weekly meeting

2017-05-26 Thread Yolanda Robla Mota
This time is quite difficult for me, as it's night here. Ideally I'd
prefer early morning UTC, but I understand this may be a problem
for people in other timezones.

On Fri, May 26, 2017 at 7:42 AM, Ian Wienand  wrote:

> Hi,
>
> We've let this meeting [1] lapse, to our communications detriment.  I
> will restart it, starting next week [2].  Of course agenda items are
> welcome, otherwise we will use it as a session to make sure patches
> are moving in the right direction.
>
> If the matter is urgent, and not getting attention, an agenda item in
> the weekly infra meeting would be appropriate.
>
> Ping me off list if you're interested but this time doesn't work.  If
> there's a few, we can move it.
>
> Thanks,
>
> -i
>
> [1] http://eavesdrop.openstack.org/#Diskimage_Builder_Team_Meeting
> [2] https://review.openstack.org/468270
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

Yolanda Robla Mota

Principal Software Engineer, RHCE

Red Hat



C/Avellana 213

Urb Portugal

yrobl...@redhat.comM: +34605641639


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tc] Status update, May 26

2017-05-26 Thread Thierry Carrez
Hi!

This new regular email will give you an update on the status of a number
of TC-proposed governance changes, in an attempt to rely less on a
weekly meeting to convey that information.

You can find the full status list of open topics at:
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee


== Recently-approved changes ==

* Stop *requiring* IRC meetings for project teams [1]
* Add Queens goal: split out tempest plugins [2]
* Update the language in the "4 opens" around design summits [3]

[1] https://review.openstack.org/#/c/462077/
[2] https://review.openstack.org/#/c/369749/
[3] https://review.openstack.org/#/c/454070/

The Stackube and Gluon project proposals have been frozen by their
proposers, as the former needs to be set up on OpenStack infrastructure
(mentored by dims), and the latter needs to engage in further discussions
with Neutron. To that effect, the Gluon team could use a TC member
mentor/sponsor to help them navigate the OpenStack seas. Any volunteers?


== Open discussions ==

Two changes were recently proposed and are in the initial phases of
discussion/refinement:

* Introduce Top 5 help wanted list [3]
* Etcd as a base service [4]

We also have proposal(s) up to introduce the new office hours:

* Introduce office hours [5]
* Establish a #openstack-tc water cooler channel [6]

Please jump in there if you have an opinion.

[3] https://review.openstack.org/#/c/466684/
[4] https://review.openstack.org/#/c/467436/
[5] https://review.openstack.org/#/c/467256/
[6] https://review.openstack.org/#/c/467386/


== Voting in progress ==

We have two items that now seem ready for voting:

* How to propose changes: do not mention meetings [7]
* Document voting process for `formal-vote` patches [8]

Note that the second one (being a charter change) requires a 2/3 majority.

[7] https://review.openstack.org/#/c/467255/
[8] https://review.openstack.org/#/c/463141/


== Blocked items ==

The discussion around postgresql support in OpenStack is still very much
going on (with 2 slightly-different proposals), and energy is running
low on pushing subsequent refinement patchsets.

* Declare plainly the current state of PostgreSQL in OpenStack [9]
* Document lack of postgresql support [10]

[9] https://review.openstack.org/#/c/427880/
[10] https://review.openstack.org/465589

A specific meeting might be necessary to converge on a way forward, see
below. Feel free to jump in there (or on the ML discussion at
http://lists.openstack.org/pipermail/openstack-dev/2017-May/116642.html)


== TC member actions for the coming week(s) ==

johnthetubaguy, cdent, dtroyer to continue distill TC vision feedback
into actionable points (and split between cosmetic and significant
changes) [https://review.openstack.org/453262]

johnthetubaguy to update "Describe what upstream support means" with a
new revision [https://review.openstack.org/440601]

johnthetubaguy to update "Decisions should be globally inclusive" with a
new revision [https://review.openstack.org/460946]

flaper87 to update "Drop Technical Committee meetings" with a new
revision [https://review.openstack.org/459848]

dhellmann to post proposal for a policy on binary images publication,
following the thread at
http://lists.openstack.org/pipermail/openstack-dev/2017-May/116677.html

ttx to refresh 2017 contributor attrition stats and report back.


== Need for a TC meeting next Tuesday ==

Based on the current status on the postgresql discussion, a meeting
(with a narrow agenda) could be useful to unblock the situation. A
thread on the openstack-tc list will be posted to see if it's a good
idea, and final decision on the meeting will be taken on Monday.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron]VM packet lost on qbr bridge

2017-05-26 Thread Kevin Benton
Is it being dropped by iptables? Check the following:

* Your VM is using the correct IPv6 address assigned to it by Neutron
* The security group associated with the port allows outbound IPv6 traffic
* bridge-nf-call-ip6tables is set to 1 in the kernel of the compute node
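
A quick way to check that last item from the compute node is sketched below;
it just reads the standard procfs path for the br_netfilter knob.

    # Minimal check of bridge-nf-call-ip6tables on the compute node: '1' means
    # bridged IPv6 traffic is passed through ip6tables (so security group rules
    # on the qbr bridge actually apply), '0' means it is skipped.
    PATH = '/proc/sys/net/bridge/bridge-nf-call-ip6tables'

    try:
        with open(PATH) as f:
            print('%s = %s' % (PATH, f.read().strip()))
    except IOError:
        print('%s not present -- is the br_netfilter module loaded?' % PATH)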

On Thu, May 25, 2017 at 9:28 PM, zhouzhih...@cmss.chinamobile.com <
zhouzhih...@cmss.chinamobile.com> wrote:

> Hi All,
>
>
> I have hit a problem of DNS packet loss in an OpenStack environment, and I need help.
>
> The problem desc:
>
> In the VM, I use "curl http://www.baidu.com" to do the
> test, and the command takes almost 5 seconds to return.
> So I used tcpdump to capture the packets, and found that a DNS packet is
> lost, which is the AAAA (IPv6) query.
>
> The packet-lost-on-qbr.zip attachment contains the packets captured from qbr
> to qvb; it clearly shows the lost packet.
>
> In another OpenStack environment of mine, the problem does not occur. The
> qbr-no-packet-lost.zip attachment contains the packets captured there.
>
> Thanks
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] AttributeError: 'StackManager' object has no attribute 'event'

2017-05-26 Thread Heenashree Khandelwal
Hi Team,

I am trying the code below, but the moment I run it I get an error.

try:
    evntsdata = hc.stacks.event.list(stack_name=stack_name)

    # evntsdata = hc.stacks.events.list(stack_name)

    if evntsdata[-1] == 'UPDATE_IN_PROGRESS':
        loopcontinue = True
        while loopcontinue:
            evntsdata = (hc.stacks.event.list(stack_name)[0]).split(" ")
            if evntsdata[-1] == 'UPDATE_COMPLETE':
                loopcontinue = False
                print(str(timestamp()) + " " + "Stack Update is Completed!" +
                      ' - ' + evntsdata[-3] + ' = ' + evntsdata[-1])
            else:
                print(str(timestamp()) + " " + "Stack Update in Progress!" +
                      ' - ' + evntsdata[-3] + ' = ' + evntsdata[-1])
                time.sleep(10)
    elif error_state == 'No updates are to be performed':
        exit(0)

except AttributeError as e:
    print(str(timestamp()) + " " + "ERROR: Stack Update Failure")
    raise


ERROR: Traceback (most recent call last):
  File "C:/Users/hekh0517/PycharmProjects/untitled1/testingBotoStackUpdate.py", line 86, in 
    evntsdata = hc.stacks.event.list(stack_name=stack_name)
AttributeError: 'StackManager' object has no attribute 'event'

I don't know why this error occurs.
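
For reference, a minimal sketch of how this looks with python-heatclient's v1
client is below; the events manager lives at hc.events (there is no
hc.stacks.event, which is exactly what the AttributeError is pointing at), and
the attribute names on the returned objects should be double-checked against
the heatclient docs for your version.

    # Sketch assuming `hc` is a heatclient.v1.client.Client instance.
    # Events are exposed on hc.events, not on hc.stacks:
    for event in hc.events.list(stack_id=stack_name):
        print(event.event_time, event.resource_name, event.resource_status)

    # To wait for an update to finish, polling the stack's own status is
    # usually simpler than walking the event list:
    stack = hc.stacks.get(stack_name)
    print(stack.stack_status)  # e.g. 'UPDATE_IN_PROGRESS' or 'UPDATE_COMPLETE'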
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-26 Thread John Garbutt
On Mon, 22 May 2017 at 10:43, Alexandra Settle  wrote:
> We could also view option 1 as the longer-term goal,
> and option 2 as an incremental step toward it

+1 doing option 2 then option 1.
It just seems a good way to split up the work.

Thanks,
johnthetubaguy
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-05-26 Thread John Garbutt
+1 on not forcing Operators to transition to something new twice, even if
we did go for option 3.

Do we have an agreed non-disruptive upgrade path mapped out yet (for any
of the options)? We spoke about fallback rules you pass, but with a warning,
to give us a smoother transition. I think that's my main objection to the
existing patches: having to tell all admins to get their token for a
different project, and give them roles in that project, all before being
able to upgrade.

Thanks,
johnthetubaguy

On Fri, 26 May 2017 at 08:09, Belmiro Moreira <
moreira.belmiro.email.li...@gmail.com> wrote:

> Hi,
> thanks for bringing this into discussion in the Operators list.
>
> Option 1 and 2 and not complementary but complety different.
> So, considering "Option 2" and the goal to target it for Queens I would
> prefer not going into a migration path in
> Pike and then again in Queens.
>
> Belmiro
>
> On Fri, May 26, 2017 at 2:52 AM, joehuang  wrote:
>
>> I think a option 2 is better.
>>
>> Best Regards
>> Chaoyi Huang (joehuang)
>> --
>> *From:* Lance Bragstad [lbrags...@gmail.com]
>> *Sent:* 25 May 2017 3:47
>> *To:* OpenStack Development Mailing List (not for usage questions);
>> openstack-operat...@lists.openstack.org
>> *Subject:* Re: [openstack-dev]
>> [keystone][nova][cinder][glance][neutron][horizon][policy] defining
>> admin-ness
>>
>> I'd like to fill in a little more context here. I see three options with
>> the current two proposals.
>>
>> *Option 1*
>>
>> Use a special admin project to denote elevated privileges. For those
>> unfamiliar with the approach, it would rely on every deployment having an
>> "admin" project defined in configuration [0].
>>
>> *How it works:*
>>
>> Role assignments on this project represent global scope which is denoted
>> by a boolean attribute in the token response. A user with an 'admin' role
>> assignment on this project is equivalent to the global or cloud
>> administrator. Ideally, if a user has a 'reader' role assignment on the
>> admin project, they could have access to list everything within the
>> deployment, pending all the proper changes are made across the various
>> services. The workflow requires a special project for any sort of elevated
>> privilege.
>>
>> Pros:
>> - Almost all the work is done to make keystone understand the admin
>> project, there are already several patches in review to other projects to
>> consume this
>> - Operators can create roles and assign them to the admin_project as
>> needed after the upgrade to give proper global scope to their users
>>
>> Cons:
>> - All global assignments are linked back to a single project
>> - Describing the flow is confusing because in order to give someone
>> global access you have to give them a role assignment on a very specific
>> project, which seems like an anti-pattern
>> - We currently don't allow some things to exist in the global sense (i.e.
>> I can't launch instances without tenancy), the admin project could own
>> resources
>> - What happens if the admin project disappears?
>> - Tooling or scripts will be written around the admin project, instead of
>> treating all projects equally
>>
>> *Option 2*
>>
>> Implement global role assignments in keystone.
>>
>> *How it works:*
>>
>> Role assignments in keystone can be scoped to global context. Users can
>> then ask for a globally scoped token
>>
>> Pros:
>> - This approach represents a more accurate long term vision for role
>> assignments (at least how we understand it today)
>> - Operators can create global roles and assign them as needed after the
>> upgrade to give proper global scope to their users
>> - It's easier to explain global scope using global role assignments
>> instead of a special project
>> - token.is_global = True and token.role = 'reader' is easier to
>> understand than token.is_admin_project = True and token.role = 'reader'
>> - A global token can't be associated to a project, making it harder for
>> operations that require a project to consume a global token (i.e. I
>> shouldn't be able to launch an instance with a globally scoped token)
>>
>> Cons:
>> - We need to start from scratch implementing global scope in keystone,
>> steps for this are detailed in the spec
>>
>> *Option 3*
>>
>> We do option one and then follow it up with option two.
>>
>> *How it works:*
>>
>> We implement option one and continue solving the admin-ness issues in
>> Pike by helping projects consume and enforce it. We then target the
>> implementation of global roles for Queens.
>>
>> Pros:
>> - If we make the interface in oslo.context for global roles consistent,
>> then consuming projects shouldn't know the difference between using the
>> admin_project or a global role assignment
>>
>> Cons:
>> - It's more work and we're already strapped for resources
>> - We've told operators that the admin_project is a thing but after Queens
>> they will be able to do real global role assignments, so they should now
>> 

Re: [openstack-dev] [TripleO] Undercloud backup and restore

2017-05-26 Thread Carlos Camacho Gonzalez
On Wed, May 24, 2017 at 3:17 PM, Jiří Stránský  wrote:

> On 24.5.2017 15:02, Marios Andreou wrote:
>
>> On Wed, May 24, 2017 at 10:26 AM, Carlos Camacho Gonzalez <
>> ccama...@redhat.com> wrote:
>>
>> Hey folks,
>>>
>>> Based on what we discussed yesterday in the TripleO weekly team meeting,
>>> I'll like to propose a blueprint to create 2 features, basically to
>>> backup
>>> and restore the Undercloud.
>>>
>>> I'll like to follow in the first iteration the available docs for this
>>> purpose [1][2].
>>>
>>> With the addition of backing up the config files on /etc/ specifically to
>>> be able to recover from a failed Undercloud upgrade, i.e. recover the
>>> repos
>>> info removed in [3].
>>>
>>> I'll like to target this for P as I think I have enough time for
>>> coding/testing these features.
>>>
>>> I already have created a blueprint to track this effort
>>> https://blueprints.launchpad.net/tripleo/+spec/undercloud-backup-restore
>>>
>>> What do you think about it?
>>>
>>>
>> +1 from me as you know but adding my support on the list too. I think it
>> is
>> a great idea - there are cases especially around changing network config
>> during an upgrade for example where the best remedy is to restore the
>> undercloud for the network definitions (both neutron config and heat db).
>>
>
> +1 i think there's not really an easy way out of these issues other than a
> restore. We already recommend doing a backup before upgrading [1], so
> having something that can further help operators in this regard would be
> good.
>
>
Hey Jiri, Marios!!!

Sorry for the delayed reply here.

Yeah, that's what I'd like to achieve; more than just recovering the UC, this
is about restoring it to a point we know it was working correctly.

Now, I'm following Ben's advice for this new iteration, splitting the big
method which currently makes the backup apart into smaller ones so I can
finish the unit tests.

Cheers,
Carlos.

Jirka
>
> [1] http://tripleo.org/post_deployment/upgrade.html
>
>
>
>> thanks,
>>
>>
>>
>>> Thanks,
>>> Carlos.
>>>
>>> [1]: https://access.redhat.com/documentation/en-us/red_hat_
>>> enterprise_linux_openstack_platform/7/html/back_up_and_
>>> restore_red_hat_enterprise_linux_openstack_platform/restore
>>>
>>> [2]: https://docs.openstack.org/developer/tripleo-docs/post_
>>> deployment/backup_restore_undercloud.html
>>>
>>> [3]: https://docs.openstack.org/developer/tripleo-docs/
>>> installation/updating.html
>>>
>>>
>>> 
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.op
>>> enstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Undercloud backup and restore

2017-05-26 Thread Carlos Camacho Gonzalez
Hey Yolanda!

Thanks for your feedback; that's partially the idea of these 2 features.
Right now they are targeting the Undercloud (backup and restore), but yes, we
should aim for something similar for the OC.

Cheers,
Carlos.

On Wed, May 24, 2017 at 10:10 AM, Yolanda Robla Mota 
wrote:

> Hi Carlos
> A common request that i hear, is that customers need a way to rollback or
> downgrade a system after an upgrade. So that will be useful of course. What
> about the overcloud, are you considering that possibility? If they find
> that an upgrade on a controller node breaks something for example.
>
> On Wed, May 24, 2017 at 9:26 AM, Carlos Camacho Gonzalez <
> ccama...@redhat.com> wrote:
>
>> Hey folks,
>>
>> Based on what we discussed yesterday in the TripleO weekly team meeting,
>> I'll like to propose a blueprint to create 2 features, basically to backup
>> and restore the Undercloud.
>>
>> I'll like to follow in the first iteration the available docs for this
>> purpose [1][2].
>>
>> With the addition of backing up the config files on /etc/ specifically to
>> be able to recover from a failed Undercloud upgrade, i.e. recover the repos
>> info removed in [3].
>>
>> I'll like to target this for P as I think I have enough time for
>> coding/testing these features.
>>
>> I already have created a blueprint to track this effort
>> https://blueprints.launchpad.net/tripleo/+spec/undercloud-backup-restore
>>
>> What do you think about it?
>>
>> Thanks,
>> Carlos.
>>
>> [1]: https://access.redhat.com/documentation/en-us/red_hat_enterp
>> rise_linux_openstack_platform/7/html/back_up_and_restore_
>> red_hat_enterprise_linux_openstack_platform/restore
>>
>> [2]: https://docs.openstack.org/developer/tripleo-docs/post_deplo
>> yment/backup_restore_undercloud.html
>>
>> [3]: https://docs.openstack.org/developer/tripleo-docs/installati
>> on/updating.html
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
>
> Yolanda Robla Mota
>
> Principal Software Engineer, RHCE
>
> Red Hat
>
> 
>
> C/Avellana 213
>
> Urb Portugal
>
> yrobl...@redhat.comM: +34605641639
> 
> 
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-05-26 Thread Belmiro Moreira
Hi,
thanks for bringing this into discussion in the Operators list.

Options 1 and 2 are not complementary but completely different.
So, considering "Option 2" and the goal to target it for Queens, I would
prefer not to go through a migration path in Pike and then again in Queens.

Belmiro


On Fri, May 26, 2017 at 2:52 AM, joehuang  wrote:

> I think a option 2 is better.
>
> Best Regards
> Chaoyi Huang (joehuang)
> --
> *From:* Lance Bragstad [lbrags...@gmail.com]
> *Sent:* 25 May 2017 3:47
> *To:* OpenStack Development Mailing List (not for usage questions);
> openstack-operat...@lists.openstack.org
> *Subject:* Re: [openstack-dev] [keystone][nova][cinder][
> glance][neutron][horizon][policy] defining admin-ness
>
> I'd like to fill in a little more context here. I see three options with
> the current two proposals.
>
> *Option 1*
>
> Use a special admin project to denote elevated privileges. For those
> unfamiliar with the approach, it would rely on every deployment having an
> "admin" project defined in configuration [0].
>
> *How it works:*
>
> Role assignments on this project represent global scope which is denoted
> by a boolean attribute in the token response. A user with an 'admin' role
> assignment on this project is equivalent to the global or cloud
> administrator. Ideally, if a user has a 'reader' role assignment on the
> admin project, they could have access to list everything within the
> deployment, pending all the proper changes are made across the various
> services. The workflow requires a special project for any sort of elevated
> privilege.
>
> Pros:
> - Almost all the work is done to make keystone understand the admin
> project, there are already several patches in review to other projects to
> consume this
> - Operators can create roles and assign them to the admin_project as
> needed after the upgrade to give proper global scope to their users
>
> Cons:
> - All global assignments are linked back to a single project
> - Describing the flow is confusing because in order to give someone global
> access you have to give them a role assignment on a very specific project,
> which seems like an anti-pattern
> - We currently don't allow some things to exist in the global sense (i.e.
> I can't launch instances without tenancy), the admin project could own
> resources
> - What happens if the admin project disappears?
> - Tooling or scripts will be written around the admin project, instead of
> treating all projects equally
>
> *Option 2*
>
> Implement global role assignments in keystone.
>
> *How it works:*
>
> Role assignments in keystone can be scoped to global context. Users can
> then ask for a globally scoped token
>
> Pros:
> - This approach represents a more accurate long term vision for role
> assignments (at least how we understand it today)
> - Operators can create global roles and assign them as needed after the
> upgrade to give proper global scope to their users
> - It's easier to explain global scope using global role assignments
> instead of a special project
> - token.is_global = True and token.role = 'reader' is easier to understand
> than token.is_admin_project = True and token.role = 'reader'
> - A global token can't be associated to a project, making it harder for
> operations that require a project to consume a global token (i.e. I
> shouldn't be able to launch an instance with a globally scoped token)
>
> Cons:
> - We need to start from scratch implementing global scope in keystone,
> steps for this are detailed in the spec
>
> *Option 3*
>
> We do option one and then follow it up with option two.
>
> *How it works:*
>
> We implement option one and continue solving the admin-ness issues in Pike
> by helping projects consume and enforce it. We then target the
> implementation of global roles for Queens.
>
> Pros:
> - If we make the interface in oslo.context for global roles consistent,
> then consuming projects shouldn't know the difference between using the
> admin_project or a global role assignment
>
> Cons:
> - It's more work and we're already strapped for resources
> - We've told operators that the admin_project is a thing but after Queens
> they will be able to do real global role assignments, so they should now
> migrate *again*
> - We have to support two paths for solving the same problem in keystone,
> more maintenance and more testing to ensure they both behave exactly the
> same way
>   - This can get more complicated for projects dedicated to testing policy
> and RBAC, like Patrole
>
>
> Looking for feedback here as to which one is preferred given timing and
> payoff, specifically from operators who would be doing the migrations to
> implement and maintain proper scope in their deployments.
>
> Thanks for reading!
>
>
> [0] https://github.com/openstack/keystone/blob/
> 3d033df1c0fdc6cc9d2b02a702efca286371f2bd/etc/keystone.conf.
> sample#L2334-L2342
>
> On Wed, May 24, 2017 at 10:35 AM, Lance Bragstad 

Re: [openstack-dev] Is the pendulum swinging on PaaS layers?

2017-05-26 Thread Chris Friesen

On 05/19/2017 04:06 PM, Dean Troyer wrote:

On Fri, May 19, 2017 at 4:01 PM, Matt Riedemann  wrote:

I'm confused by this. Creating a server takes a volume ID if you're booting
from volume, and that's actually preferred (by nova devs) since then Nova
doesn't have to orchestrate the creation of the volume in the compute
service and then poll until it's available.

Same for ports - nova can create the port (default action) or get a port at
server creation time, which is required if you're doing trunk ports or
sr-iov / fancy pants ports.

Am I misunderstanding what you're saying is missing?


It turns out those are bad examples, they do accept IDs.


I was actually suggesting that maybe these commands in nova should *only* take 
IDs, and that nova itself should not set up either block storage or networking 
for you.


It seems non-intuitive to me that nova will do some basic stuff for you, but if 
you want something more complicated then you need to go do it a totally 
different way.


It seems to me that it'd be more logical if we always set up volumes/ports 
first, then passed the resulting UUIDs to nova.  This could maybe be hidden from 
the end-user by doing it in the client or some intermediate layer, but arguably 
nova proper shouldn't be in the proxying business.
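
For what it's worth, the "create the pieces first, hand nova only IDs" flow 
already looks roughly like the sketch below with today's clients (client 
construction omitted, and the exact block_device_mapping_v2 keys should be 
checked against the novaclient docs).

    # Sketch of orchestrating outside nova: create the volume and port up
    # front, then boot the server referencing only their IDs. `cinder`,
    # `neutron` and `nova` are assumed to be authenticated python-*client
    # instances, and image_id/net_id/flavor_id to be existing UUIDs.
    volume = cinder.volumes.create(size=20, name='boot-vol', imageRef=image_id)

    port = neutron.create_port({'port': {'network_id': net_id,
                                         'binding:vnic_type': 'direct'}})

    server = nova.servers.create(
        name='example',
        image=None,  # booting from the volume below, not an image
        flavor=flavor_id,
        block_device_mapping_v2=[{'boot_index': 0,
                                  'uuid': volume.id,
                                  'source_type': 'volume',
                                  'destination_type': 'volume'}],
        nics=[{'port-id': port['port']['id']}])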


Lastly, the existence of a partial proxy means that people ask for a more 
complete proxy--for example, specifying the vnic_type (for a port) or volume 
type (for a volume) when booting an instance.


Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev