Re: [openstack-dev] [nova] nova cellsv2 and DBs / down cells / quotas

2018-10-24 Thread melanie witt

On Thu, 25 Oct 2018 10:55:15 +1100, Sam Morrison wrote:




On 24 Oct 2018, at 4:01 pm, melanie witt  wrote:

On Wed, 24 Oct 2018 10:54:31 +1100, Sam Morrison wrote:

Hi nova devs,
Have been having a good look into cellsv2 and how we migrate to them (we're 
still on cells v1 and about to upgrade to Queens, still running cells v1 for now).
One of the problems I have is that now all our nova cell database servers need 
to respond to API requests.
With cellsv1 our architecture was to have a big powerful DB cluster (3 physical 
servers) at the API level to handle the API cell and then a smallish non HA DB 
server (usually just a VM) for each of the compute cells.
This architecture won’t work with cells V2 and we’ll now need to have a lot of 
highly available and responsive DB servers for all the cells.
It will also mean that our nova-apis which reside in Melbourne, Australia will 
now need to talk to database servers in Auckland, New Zealand.
The biggest issue we have is when a cell is down. We sometimes have cells go 
down for an hour or so planned or unplanned and with cellsv1 this does not 
affect other cells.
Looks like some good work going on here 
https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/handling-down-cell
But what about quota? If a cell goes down then it would seem that a user all of 
a sudden would regain some quota from the instances that are in the down cell?
Just wondering if anyone has thought about this?


Yes, we've discussed it quite a bit. The current plan is to offer a policy-driven 
behavior as part of the "down" cell handling which will control whether nova 
will:

a) Reject a server create request if the user owns instances in "down" cells

b) Go ahead and count quota usage "as-is" if the user owns instances in "down" 
cells and allow quota limit to be potentially exceeded

We would like to know if you think this plan will work for you.

Further down the road, if we're able to come to an agreement on a consumer 
type/owner or partitioning concept in placement (to be certain we are counting 
usage our instance of nova owns, as placement is a shared service), we could 
count quota usage from placement instead of querying cells.
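
To make the two options concrete, here is a rough sketch of the kind of check 
we have in mind. It's a minimal sketch only: the option name, helper, and 
exception are made up for illustration and are not actual nova code.

    # Minimal sketch of options (a) and (b) above. The option name,
    # helper and exception are illustrative, not nova's real code.

    class OverQuota(Exception):
        pass

    # Pretend policy knob: True selects option (a), False option (b).
    REJECT_CREATE_WITH_DOWN_CELLS = True
    INSTANCE_LIMIT = 10

    def check_quota(usage_in_up_cells, has_down_cells, requested):
        if has_down_cells and REJECT_CREATE_WITH_DOWN_CELLS:
            # Option (a): reject, because usage in the down cell(s)
            # cannot be counted, so the limit cannot be trusted.
            raise OverQuota('cannot count usage: one or more cells down')
        # Option (b): count usage "as-is". Instances in down cells are
        # invisible here, so the limit may end up exceeded once those
        # cells come back up.
        if usage_in_up_cells + requested > INSTANCE_LIMIT:
            raise OverQuota('quota exceeded')

    check_quota(usage_in_up_cells=8, has_down_cells=False, requested=1)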


OK great, always good to know other people are thinking for you :-). I don't 
really like (a) or (b), but the idea about using placement sounds like a good 
one to me.


Your honesty is appreciated. :) We do want to get to where we can use 
placement for quota usage. There is a significant amount of higher 
priority placement-related work in flight right now (getting nested 
resource providers working end-to-end, for one), so it can't receive 
adequate attention at this moment. We've been discussing it on the spec 
[1] the past few days, if you're interested.



I guess our architecture is pretty unique in a way, but I wonder if other people 
are also a little scared that all DB servers need to be up to serve API 
requests?


You are not alone. At CERN, they are experiencing the same challenges. 
They too have an architecture where they had deployed less powerful 
database servers in cells and also have cell sites that are located 
geographically far away. They have been driving the "handling of a down 
cell" work.



I’ve been thinking of some hybrid cellsv1/v2 thing where we’d still have the 
top level api cell DB but the API would only ever read from it. Nova-api would 
only write to the compute cell DBs.
Then keep the nova-cells processes just doing instance_update_at_top to keep 
the nova-cell-api db up to date.

We’d still have syncing issues but we have that with placement now and that is 
more frequent than nova-cells-v1 is for us.


I have had similar thoughts, but keep ending up at the syncing/racing 
issues, like you said. I think it's something we'll need to discuss and 
explore more, to see if we can come up with a reasonable way to address 
the increased demand on cell databases as it's been a considerable pain 
point for deployments like yours and CERN's.


Cheers,
-melanie

[1] https://review.openstack.org/509042




Re: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?

2018-10-24 Thread melanie witt

On Thu, 25 Oct 2018 14:12:51 +0900, Bhor Dinesh wrote:
We were having a similar use case like *Preemptible Instances*, called 
*Rich-VMs*, which are high in resources and are deployed one per hypervisor. 
We have custom code in production which tracks the quota for such instances 
separately, and for the same reason we have a *rich_instances* custom quota 
class just like the *instances* quota class.


Please see the last reply I recently sent on this thread. I have been 
thinking the same as you about how we could use quota classes to 
implement the quota piece of preemptible instances. I think we can 
achieve the same thing using unified limits, specifically registered 
limits [1], which span across all projects. So, I think we are covered 
moving forward with migrating to unified limits and deprecation of quota 
classes. Let me know if you spot any issues with this idea.
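
As a concrete illustration of [1], creating a registered limit for a 
'preemptible' resource is a single POST to keystone. Here is a rough sketch 
using plain requests; the endpoint, token, and service id are placeholders, 
and whether -1 means unlimited under unified limits (as it does for nova 
quotas today) is my assumption.

    # Sketch: create a keystone registered limit named 'preemptible'.
    import requests

    KEYSTONE = 'http://keystone.example.com/identity/v3'  # placeholder
    TOKEN = '<admin-token>'                               # placeholder
    NOVA_SERVICE_ID = '<nova-service-uuid>'               # placeholder

    body = {'registered_limits': [{
        'service_id': NOVA_SERVICE_ID,
        'resource_name': 'preemptible',
        # Assumption: -1 means unlimited, as in nova quotas today.
        'default_limit': -1,
    }]}
    resp = requests.post(KEYSTONE + '/registered_limits',
                         headers={'X-Auth-Token': TOKEN}, json=body)
    resp.raise_for_status()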


Cheers,
-melanie

[1] 
https://developer.openstack.org/api-ref/identity/v3/?expanded=create-registered-limits-detail,create-limits-detail#create-registered-limits







Re: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?

2018-10-24 Thread melanie witt

On Wed, 24 Oct 2018 12:54:00 -0700, Melanie Witt wrote:

On Wed, 24 Oct 2018 13:57:05 -0500, Matt Riedemann wrote:

On 10/24/2018 10:10 AM, Jay Pipes wrote:

I'd like to propose deprecating this API and getting rid of this
functionality since it conflicts with the new Keystone /limits endpoint,
is highly coupled with RAX's turnstile middleware and I can't seem to
find anyone who has ever used it. Deprecating this API and functionality
would make the transition to a saner quota management system much easier
and straightforward.

I was trying to do this before it was cool:

https://review.openstack.org/#/c/411035/

I think it was the Pike PTG in ATL where people said, "meh, let's just
wait for unified limits from keystone and let this rot on the vine".

I'd be happy to restore and update that spec.


Yeah, we were thinking the presence of the API and code isn't harming
anything and sometimes we talk about situations where we could use them.

Quota classes come up occasionally whenever we talk about preemptible
instances. Example: we could create and use a quota class "preemptible"
and decorate preemptible flavors with that quota_class in order to give
them unlimited quota. There's also talk of quota classes in the "Count
quota based on resource class" spec [1] where we could have leveraged
quota classes to create and enforce quota limits per custom resource
class. But I think the consensus there was to hold off on quota by
custom resource class until we migrate to unified limits and oslo.limit.

So, I think my concern in removing the internal code that is capable of
enforcing quota limit per quota class is the preemptible instance use
case. I don't have my mind wrapped around if/how we could solve it using
unified limits yet.

And I was just thinking, if we added a project_id column to the
quota_classes table and correspondingly added it to the
os-quota-class-sets API, we could pretty simply implement quota by
flavor, which is a feature operators like Oath need. An operator could
create a quota class limit per project_id and then decorate flavors with
quota_class to enforce them per flavor.

I recognize that maybe it would be too confusing to solve use cases with
quota classes given that we're going to migrate to unified limits. At the 
same time, I'm hesitant to close the door on a possibility before we
have some idea about how we'll solve them without quota classes. Has
anyone thought about how we can solve the use cases with unified limits
for things like preemptible instances and quota by flavor?

[1] https://review.openstack.org/569011


After I sent this, I realized that I _have_ thought about how to solve 
these use cases with unified limits before and commented about it on the 
"Count quota based on resource class" spec some months ago.


For preemptible instances, we could leverage registered limits in 
keystone [2] (registered limits span across all projects) by creating a 
limit with resource_name='preemptible', for example. Then we could 
decorate a flavor with quota_resource_name='preemptible' which would 
designate a preemptible instance type. Then we use the 
quota_resource_name from the flavor to check the quota for the 
corresponding registered limit in keystone. This way, preemptible 
instances can be assigned their own special quota (probably unlimited).


And for quota by flavor, same concept. I think we could use registered 
limits and project limits [3] by creating limits with 
resource_name='flavorX', for example. We could decorate flavors with 
quota_resource_name='flavorX' and check quota against the corresponding 
limit for flavorX.
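
To sketch how the flavor decoration could tie together (keeping in mind that 
the quota_resource_name extra spec and the helpers below are hypothetical; 
none of this exists in nova today):

    # Hypothetical sketch of quota-by-flavor as described above.

    class OverQuota(Exception):
        pass

    # Stand-in for limits fetched from keystone; -1 means unlimited.
    REGISTERED_LIMITS = {'preemptible': -1, 'flavorX': 5}

    def resource_name_for(extra_specs):
        # Fall back to the normal 'instances' quota when a flavor is
        # not decorated with the hypothetical extra spec.
        return extra_specs.get('quota_resource_name', 'instances')

    def check_limit(resource_name, current_usage, requested=1):
        limit = REGISTERED_LIMITS.get(resource_name)
        if limit is None or limit == -1:
            return  # unlimited
        if current_usage + requested > limit:
            raise OverQuota('quota exceeded for %s' % resource_name)

    check_limit(resource_name_for({'quota_resource_name': 'flavorX'}),
                current_usage=4)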


Unified limits provide all of the same ability as quota classes, as far 
as I can tell. Given that, I think we are OK to deprecate quota classes.


Cheers,
-melanie

[2] 
https://developer.openstack.org/api-ref/identity/v3/?expanded=create-registered-limits-detail,create-limits-detail#create-registered-limits
[3] 
https://developer.openstack.org/api-ref/identity/v3/?expanded=create-registered-limits-detail,create-limits-detail#create-limits







Re: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?

2018-10-24 Thread Bhor Dinesh
Hi All,

We were having a similar use case like *Preemptible Instances*, called 
*Rich-VMs*, which are high in resources and are deployed one per hypervisor. 
We have custom code in production which tracks the quota for such instances 
separately, and for the same reason we have a *rich_instances* custom quota 
class just like the *instances* quota class.

I discussed this pretty recently with sean-k-mooney; I hope he remembers it.


Bhor Dinesh
Verda2 Team

JR Shinjuku Miraina Tower 23F, 4-1-6 Shinjuku, Shinjuku-ku, Tokyo 160-0022
Mobile 08041289520 Fax 03-4316-2116
Email dinesh.b...@linecorp.com



Re: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?

2018-10-24 Thread Kevin L. Mitchell
> On 10/24/18 10:10, Jay Pipes wrote:
> > Nova's API has the ability to create "quota classes", which are
> > basically limits for a set of resource types. There is something called
> > the "default quota class" which corresponds to the limits in the
> > CONF.quota section. Quota classes are basically templates of limits to
> > be applied if the calling project doesn't have any stored
> > project-specific limits.

For the record, my original concept in creating quota classes is that
you'd be able to set quotas per tier of user and easily be able to move
users from one tier to another.  This was just a neat idea I had, and
AFAIK, Rackspace never used it, so you can call it YAGNI as far as I'm
concerned :)
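
For anyone who never saw it in action, the tier idea boils down to something 
like the following sketch with python-novaclient. The 'gold' class name is 
made up, and note that nova never grew a supported way to point a project at 
a class other than "default".

    # Sketch: defining limits for a 'gold' tier via quota classes.
    from keystoneauth1 import loading, session
    from novaclient import client

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://keystone.example.com/identity/v3',
        username='admin', password='secret', project_name='admin',
        user_domain_id='default', project_domain_id='default')
    nova = client.Client('2.1', session=session.Session(auth=auth))

    # Define the limits for a 'gold' tier of users...
    nova.quota_classes.update('gold', instances=50, cores=200)
    # ...and read them back.
    print(nova.quota_classes.get('gold'))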

> > Has anyone ever created a quota class that is different from "default"?
> > 
> > I'd like to propose deprecating this API and getting rid of this
> > functionality since it conflicts with the new Keystone /limits endpoint,
> > is highly coupled with RAX's turnstile middleware 

I didn't intend it to be highly coupled, but it's been a while since I
wrote it, and of course I've matured as a developer since then, so
*shrug*.  I also don't think Rackspace has ever used turnstile.

> > and I can't seem to
> > find anyone who has ever used it. Deprecating this API and functionality
> > would make the transition to a saner quota management system much easier
> > and straightforward.

I'm fine with that plan, speaking as the original developer; as I say,
I don't think Rackspace ever utilized the functionality anyway, and if
no one else pipes up saying that they're using it, I'd be all over
deprecating the quota classes in favor of the new hotness.
-- 
Kevin L. Mitchell 




Re: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?

2018-10-24 Thread Alex Xu
So FYI, in case people missed this spec, there is a spec from John:
https://review.openstack.org/#/c/602201/3/specs/stein/approved/unified-limits-stein.rst@170

The roadmap in this spec also calls for deprecating the quota-class API.

melanie witt wrote on Thu, Oct 25, 2018 at 3:54 AM:

> On Wed, 24 Oct 2018 13:57:05 -0500, Matt Riedemann wrote:
> > On 10/24/2018 10:10 AM, Jay Pipes wrote:
> >> I'd like to propose deprecating this API and getting rid of this
> >> functionality since it conflicts with the new Keystone /limits endpoint,
> >> is highly coupled with RAX's turnstile middleware and I can't seem to
> >> find anyone who has ever used it. Deprecating this API and functionality
> >> would make the transition to a saner quota management system much easier
> >> and straightforward.
> > I was trying to do this before it was cool:
> >
> > https://review.openstack.org/#/c/411035/
> >
> > I think it was the Pike PTG in ATL where people said, "meh, let's just
> > wait for unified limits from keystone and let this rot on the vine".
> >
> > I'd be happy to restore and update that spec.
>
> Yeah, we were thinking the presence of the API and code isn't harming
> anything and sometimes we talk about situations where we could use them.
>
> Quota classes come up occasionally whenever we talk about preemptible
> instances. Example: we could create and use a quota class "preemptible"
> and decorate preemptible flavors with that quota_class in order to give
> them unlimited quota. There's also talk of quota classes in the "Count
> quota based on resource class" spec [1] where we could have leveraged
> quota classes to create and enforce quota limits per custom resource
> class. But I think the consensus there was to hold off on quota by
> custom resource class until we migrate to unified limits and oslo.limit.
>
> So, I think my concern in removing the internal code that is capable of
> enforcing quota limit per quota class is the preemptible instance use
> case. I don't have my mind wrapped around if/how we could solve it using
> unified limits yet.
>
> And I was just thinking, if we added a project_id column to the
> quota_classes table and correspondingly added it to the
> os-quota-class-sets API, we could pretty simply implement quota by
> flavor, which is a feature operators like Oath need. An operator could
> create a quota class limit per project_id and then decorate flavors with
> quota_class to enforce them per flavor.
>
> I recognize that maybe it would be too confusing to solve use cases with
> quota classes given that we're going to migrate to unified limits. At the
> same time, I'm hesitant to close the door on a possibility before we
> have some idea about how we'll solve them without quota classes. Has
> anyone thought about how we can solve the use cases with unified limits
> for things like preemptible instances and quota by flavor?
>
> Cheers,
> -melanie
>
> [1] https://review.openstack.org/569011


Re: [openstack-dev] [nova] nova cellsv2 and DBs / down cells / quotas

2018-10-24 Thread Sam Morrison


> On 24 Oct 2018, at 4:01 pm, melanie witt  wrote:
> 
> On Wed, 24 Oct 2018 10:54:31 +1100, Sam Morrison wrote:
>> Hi nova devs,
>> Have been having a good look into cellsv2 and how we migrate to them (we're 
>> still on cells v1 and about to upgrade to Queens, still running cells v1 for 
>> now).
>> One of the problems I have is that now all our nova cell database servers 
>> need to respond to API requests.
>> With cellsv1 our architecture was to have a big powerful DB cluster (3 
>> physical servers) at the API level to handle the API cell and then a 
>> smallish non HA DB server (usually just a VM) for each of the compute cells.
>> This architecture won’t work with cells V2 and we’ll now need to have a lot 
>> of highly available and responsive DB servers for all the cells.
>> It will also mean that our nova-apis which reside in Melbourne, Australia 
>> will now need to talk to database servers in Auckland, New Zealand.
>> The biggest issue we have is when a cell is down. We sometimes have cells go 
>> down for an hour or so planned or unplanned and with cellsv1 this does not 
>> affect other cells.
>> Looks like some good work going on here 
>> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/handling-down-cell
>> But what about quota? If a cell goes down then it would seem that a user all 
>> of a sudden would regain some quota from the instances that are in the down 
>> cell?
>> Just wondering if anyone has thought about this?
> 
> Yes, we've discussed it quite a bit. The current plan is to offer a 
> policy-driven behavior as part of the "down" cell handling which will control 
> whether nova will:
> 
> a) Reject a server create request if the user owns instances in "down" cells
> 
> b) Go ahead and count quota usage "as-is" if the user owns instances in 
> "down" cells and allow quota limit to be potentially exceeded
> 
> We would like to know if you think this plan will work for you.
> 
> Further down the road, if we're able to come to an agreement on a consumer 
> type/owner or partitioning concept in placement (to be certain we are 
> counting usage our instance of nova owns, as placement is a shared service), 
> we could count quota usage from placement instead of querying cells.

OK great, always good to know other people are thinking for you :-). I don't 
really like (a) or (b), but the idea about using placement sounds like a good 
one to me.

I guess our architecture is pretty unique in a way, but I wonder if other people 
are also a little scared that all DB servers need to be up to serve API 
requests?

I’ve been thinking of some hybrid cellsv1/v2 thing where we’d still have the 
top level api cell DB but the API would only ever read from it. Nova-api would 
only write to the compute cell DBs.
Then keep the nova-cells processes just doing instance_update_at_top to keep 
the nova-cell-api db up to date.

We’d still have syncing issues but we have that with placement now and that is 
more frequent than nova-cells-v1 is for us.

Cheers,
Sam



> 
> Cheers,
> -melanie
> 


Re: [openstack-dev] [tripleo][ui][tempest][oooq] Refreshing plugins from git

2018-10-24 Thread Honza Pokorny
Here is an etherpad with all of the open patches that Chandan and I have
been working on.

https://etherpad.openstack.org/p/selenium-testing-ci

On 2018-10-22 18:25, Chandan kumar wrote:
> Hello Honza,
> 
> On Thu, Oct 18, 2018 at 6:15 PM Bogdan Dobrelya  wrote:
> >
> > On 10/18/18 2:17 AM, Honza Pokorny wrote:
> > > Hello folks,
> > >
> > > I'm working on the automated ui testing blueprint[1], and I think we
> > > need to change the way we ship our tempest tests.
> > >
> > > Here is where things stand at the moment:
> > >
> > > * We have a kolla image for tempest
> > > * This image contains the tempest rpm, and the openstack-tempest-all rpm
> > > * The openstack-tempest-all package in turn contains all of the
> > >openstack tempest plugins
> > > * Each of the plugins is shipped as an rpm
> > >
> > > So, in order for a new test in tempest-tripleo-ui to appear in CI we
> > > have to go through at least the following steps:
> > >
> > > * New tempest-tripleo-ui rpm
> > > * New openstack-tempest-all rpm
> > > * New tempest kolla image
> > >
> > > This could easily take a week, if not more.
> > >
> > > What I would like to build is something like the following:
> > >
> > > * Add an option to the tempest-setup.sh script in tripleo-quickstart to
> > >refresh all tempest plugins from git before running any tests
> > > * Optionally specify a zuul change for any of the plugins being
> > >refreshed
> > > * Hook up the test job to patches in tripleo-ui (which tests in
> > >tempest-tripleo-ui are testing) so that I can run a fix and its test
> > >in a single CI job
> 
> I have added a patch to the TripleO Quickstart extras validate-tempest
> role (https://review.openstack.org/#/c/612377/) to install any tempest
> plugin from git; zuul will pick up the specific change in the gates.
> Here is the patch showing how to test it with a featureset:
> https://review.openstack.org/612386
> Basically, in any featureset we can add the following lines:
> 
>     tempest_format: venv
>     tempest_plugins_git:
>       - 'https://git.openstack.org/openstack/tempest-tripleo-ui.git'
> 
> The respective featureset job will install the tempest plugin, and we
> can also use test_white_regex:  to trigger the tempest tests.
> 
> I think it will solve the problem.
> 
> Thanks
> 
> Chandan Kumar
> 


Re: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?

2018-10-24 Thread Lance Bragstad
On Wed, Oct 24, 2018 at 2:49 PM Jay Pipes  wrote:

> On 10/24/2018 02:57 PM, Matt Riedemann wrote:
> > On 10/24/2018 10:10 AM, Jay Pipes wrote:
> >> I'd like to propose deprecating this API and getting rid of this
> >> functionality since it conflicts with the new Keystone /limits
> >> endpoint, is highly coupled with RAX's turnstile middleware and I
> >> can't seem to find anyone who has ever used it. Deprecating this API
> >> and functionality would make the transition to a saner quota
> >> management system much easier and straightforward.
> >
> > I was trying to do this before it was cool:
> >
> > https://review.openstack.org/#/c/411035/
> >
> > I think it was the Pike PTG in ATL where people said, "meh, let's just
> > wait for unified limits from keystone and let this rot on the vine".
> >
> > I'd be happy to restore and update that spec.
>
> ++
>
> I think partly things have stalled out because maybe each side (keystone
> + nova) thinks the other is working on something but isn't?
>

I have a Post-it on my monitor to follow up with what we talked about at
the PTG.

AFAIK, the next steps were to use the examples we went through and apply
them to nova [0] using oslo.limit. We were hoping this would do two things.
First, it would expose any remaining gaps we have in oslo.limit that need
to get closed before other services start using the library. Second, we
could iterate on the example in gerrit as a nova review, making it
easier to merge when it's working.

Is that still the case and if so, how can I help?

[0] https://gist.github.com/lbragstad/69d28dca8adfa689c00b272d6db8bde7
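
For reference, the shape we have been aiming for in oslo.limit looks roughly 
like the sketch below. The API is still settling, so treat the names 
(Enforcer, the usage callback, the enforce signature) as approximate rather 
than final.

    # Rough sketch of enforcing a limit through oslo.limit.
    from oslo_limit import limit

    def count_servers(project_id, resource_names):
        # Service-provided usage callback; stubbed here. Nova would
        # count real usage from its cell databases.
        return {name: 3 for name in resource_names}

    enforcer = limit.Enforcer(count_servers)
    # Raises if creating one more server would push the project past
    # its limit as registered in keystone.
    enforcer.enforce('project-uuid', {'servers': 1})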

>
> I'm currently working on cleaning up the quota system and would be happy
> to deprecate the os-quota-classes API along with the patch series that
> does that cleanup.
>
> -jay
>


Re: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?

2018-10-24 Thread melanie witt

On Wed, 24 Oct 2018 13:57:05 -0500, Matt Riedemann wrote:

On 10/24/2018 10:10 AM, Jay Pipes wrote:

I'd like to propose deprecating this API and getting rid of this
functionality since it conflicts with the new Keystone /limits endpoint,
is highly coupled with RAX's turnstile middleware and I can't seem to
find anyone who has ever used it. Deprecating this API and functionality
would make the transition to a saner quota management system much easier
and straightforward.

I was trying to do this before it was cool:

https://review.openstack.org/#/c/411035/

I think it was the Pike PTG in ATL where people said, "meh, let's just
wait for unified limits from keystone and let this rot on the vine".

I'd be happy to restore and update that spec.


Yeah, we were thinking the presence of the API and code isn't harming 
anything and sometimes we talk about situations where we could use them.


Quota classes come up occasionally whenever we talk about preemptible 
instances. Example: we could create and use a quota class "preemptible" 
and decorate preemptible flavors with that quota_class in order to give 
them unlimited quota. There's also talk of quota classes in the "Count 
quota based on resource class" spec [1] where we could have leveraged 
quota classes to create and enforce quota limits per custom resource 
class. But I think the consensus there was to hold off on quota by 
custom resource class until we migrate to unified limits and oslo.limit.


So, I think my concern in removing the internal code that is capable of 
enforcing quota limit per quota class is the preemptible instance use 
case. I don't have my mind wrapped around if/how we could solve it using 
unified limits yet.


And I was just thinking, if we added a project_id column to the 
quota_classes table and correspondingly added it to the 
os-quota-class-sets API, we could pretty simply implement quota by 
flavor, which is a feature operators like Oath need. An operator could 
create a quota class limit per project_id and then decorate flavors with 
quota_class to enforce them per flavor.
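
To illustrate how small that change would be, here is a hypothetical 
sqlalchemy-migrate style migration; the column name and nullability are 
guesses on my part, and no such migration exists in nova.

    # Hypothetical migration adding project_id to quota_classes.
    import migrate  # noqa: patches Table with create_column
    from sqlalchemy import Column, MetaData, String, Table

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        quota_classes = Table('quota_classes', meta, autoload=True)
        # A NULL project_id keeps today's behavior (the class applies
        # everywhere); a value scopes the class to a single project.
        quota_classes.create_column(
            Column('project_id', String(255), nullable=True))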


I recognize that maybe it would be too confusing to solve use cases with 
quota classes given that we're going to migrate to unified limits. At the 
same time, I'm hesitant to close the door on a possibility before we 
have some idea about how we'll solve them without quota classes. Has 
anyone thought about how we can solve the use cases with unified limits 
for things like preemptible instances and quota by flavor?


Cheers,
-melanie

[1] https://review.openstack.org/569011






Re: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?

2018-10-24 Thread Jay Pipes

On 10/24/2018 02:57 PM, Matt Riedemann wrote:

On 10/24/2018 10:10 AM, Jay Pipes wrote:
I'd like to propose deprecating this API and getting rid of this 
functionality since it conflicts with the new Keystone /limits 
endpoint, is highly coupled with RAX's turnstile middleware and I 
can't seem to find anyone who has ever used it. Deprecating this API 
and functionality would make the transition to a saner quota 
management system much easier and straightforward.


I was trying to do this before it was cool:

https://review.openstack.org/#/c/411035/

I think it was the Pike PTG in ATL where people said, "meh, let's just 
wait for unified limits from keystone and let this rot on the vine".


I'd be happy to restore and update that spec.


++

I think partly things have stalled out because maybe each side (keystone 
+ nova) thinks the other is working on something but isn't?


I'm currently working on cleaning up the quota system and would be happy 
to deprecate the os-quota-classes API along with the patch series that 
does that cleanup.


-jay



[openstack-dev] Announcing the First Release of StarlingX, an open source edge computing platform

2018-10-24 Thread Ildiko Vancsa
Hi,

You may have heard, StarlingX[1] is a new independent, top-level, open source 
pilot project that's supported by the OpenStack Foundation. StarlingX joins 
other pilot projects hosted at OpenStack Foundation[2], including Airship, Kata 
Containers and Zuul.

Today the first release of StarlingX is here!

We invite you to participate in getting the word out that the release is ready 
and that we’re eager to welcome more contributors to this project.

Learn more about it:
• Read more about the project at starlingx.io
• Listen to a recording of the onboarding webinar[3]
• On-boarding slide deck[4]
• Overview document[5]

Some things you can share:
• A blog on starlingx.io[6]
• Social sharing: Announcements on Twitter[7]

Want to get involved in the community?
• Mailing Lists[8] 
• Weekly Calls[9]
• Freenode IRC: #starlingx channel[10]

Ready to dive into the code?
• You can download the first release at git.starlingx.io 
• StarlingX Install Guide[11] 
• StarlingX Developer Guide[12]

If you’re at the Berlin Summit November 13-15[13]:
Tuesday 11/13
• StarlingX – Project update – 6 months in the life of a new Open Source 
project with Brent Rowsell & Dean Troyer
• StarlingX CI, from zero to Zuul with Hazzim Anaya & Elio Martinez

Wednesday 11/14 
• Keynote spotlight on the main stage with Ian Jolliffe & Dean Troyer
• MVP (Minimum Viable Product) architecture for edge - Forum session
• "Ask Me Anything" about StarlingX - Forum session
• StarlingX Enhancements for Edge Networking presentation with Kailun Qin, 
Ruijing Guo & Dan Chen
• Project Onboarding session with Greg Waines
• Integrating IOT Device Management with the Edge Cloud - Forum session

Thursday 11/15
• Containerized Applications' Requirements on Kubernetes Cluster at the Edge - 
Forum session

Check out the materials to learn about the project, try out the software and 
join the community! We hope to see many of you in Berlin!

Ildikó

[1] https://www.starlingx.io/ 
[2] https://www.openstack.org/foundation/
[3] https://www.youtube.com/watch?v=G9uwGnKD6tM&t=232s 
[4] https://www.starlingx.io/collateral/StarlingX-Onboarding-Deck-Web.pdf 
[5] https://www.starlingx.io/collateral/StarlingX_OnePager_Web-102318pdf/
[6] https://www.starlingx.io/blog/starlingx-initial-release.html
[7] https://twitter.com/starlingx
[8] http://lists.starlingx.io/cgi-bin/mailman/listinfo
[9] https://wiki.openstack.org/wiki/Starlingx/Meetings
[10] https://freenode.net/
[11] https://docs.starlingx.io/installation_guide/index.html
[12] https://docs.starlingx.io/developer_guide/index.html
[13] https://www.openstack.org/summit/berlin-2018


Re: [openstack-dev] Proposal for a process to keep up with Python releases

2018-10-24 Thread Zane Bitter
There seems to be agreement that this is broadly a good direction to 
pursue, so I proposed a TC resolution. Let's shift discussion to the review:


https://review.openstack.org/613145

cheers,
Zane.

On 19/10/18 11:17 AM, Zane Bitter wrote:
There hasn't been a Python 2 release in 8 years, and during that time 
we've gotten used to the idea that that's the way things go. However, 
with the switch to Python 3 looming (we will drop support for Python 2 
in the U release[1]), history is no longer a good guide: Python 3 
releases drop as often as every year. We are already feeling the pain 
from this, as Linux distros have largely already completed the shift to 
Python 3, and those that have are on versions newer than the py35 we 
currently have in gate jobs.


We have traditionally held to the principle that we want each release to 
support the latest release of CentOS and the latest LTS release of 
Ubuntu, as they existed at the beginning of the release cycle.[2] 
Currently this means in practice one version of py2 and one of py3, but 
in the future it will mean two, usually different, versions of py3.


There are two separate issues that we need to address: unit tests (we'll 
define this as code tested in isolation, within or spawned from within 
the testing process), and integration tests (we'll define this as code 
running in its own process, tested from the outside). I have two 
separate but related proposals for how to handle those.


I'd like to avoid discussing which versions of things we think should be 
supported in Stein in this thread. Let's come up with a process that we 
think is a good one to take into T and beyond, and then retroactively 
apply it to Stein. Competing proposals are of course welcome, in 
addition to feedback on this one.


Unit Tests
--

For unit tests, the most important thing is to test on the versions of 
Python we target. It's less important to be using the exact distro that 
we want to target, because unit tests generally won't interact with 
stuff outside of Python.


I'd like to propose that we handle this by setting up a unit test 
template in openstack-zuul-jobs for each release. So for Stein we'd have 
openstack-python3-stein-jobs. This template would contain:


* A voting gate job for the highest minor version of py3 we want to 
support in that release.
* A voting gate job for the lowest minor version of py3 we want to 
support in that release.

* A periodic job for any interim minor releases.
* (Starting late in the cycle) a non-voting check job for the highest 
minor version of py3 we want to support in the *next* release (if 
different), on the master branch only.


So, for example, (and this is still under active debate) for Stein we 
might have gating jobs for py35 and py37, with a periodic job for py36. 
The T jobs might only have voting py36 and py37 jobs, but late in the T 
cycle we might add a non-voting py38 job on master so that people who 
haven't switched to the U template yet can see what, if anything, 
they'll need to fix.
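
As a sketch, such a template in openstack-zuul-jobs might look something like 
the following, reusing the existing openstack-tox-py3* job names; the exact 
layout is illustrative, not a finished proposal.

    - project-template:
        name: openstack-python3-stein-jobs
        check:
          jobs:
            - openstack-tox-py35
            - openstack-tox-py37
        gate:
          jobs:
            - openstack-tox-py35
            - openstack-tox-py37
        periodic:
          jobs:
            - openstack-tox-py36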


We'll run the unit tests on any distro we can find that supports the 
version of Python we want. It could be a non-LTS Ubuntu, Fedora, Debian 
unstable, whatever it takes. We won't wait for an LTS Ubuntu to have a 
particular Python version before trying to test it.


Before the start of each cycle, the TC would determine which range of 
versions we want to support, on the basis of the latest one we can find 
in any distro and the earliest one we're likely to need in one of the 
supported Linux distros. There will be a project-wide goal to switch the 
testing template from e.g. openstack-python3-stein-jobs to 
openstack-python3-treasure-jobs for every repo before the end of the 
cycle. We'll have goal champions as usual following up and helping teams 
with the process. We'll know where the problem areas are because we'll 
have added non-voting jobs for any new Python versions to the previous 
release's template.


Integration Tests
-

Integration tests do test, amongst other things, integration with 
non-openstack-supplied things in the distro, so it's important that we 
test on the actual distros we have identified as popular.[2] It's also 
important that every project be testing on the same distro at the end of 
a release, so we can be sure they all work together for users.


When a new release of CentOS or a new LTS release of Ubuntu comes out, 
the TC will create a project-wide goal for the *next* release cycle to 
switch all integration tests over to that distro. It's up to individual 
projects to make the switch for the tests that they own (e.g. it'd be 
the QA team for Tempest, but other individual projects for their own 
jobs). Again, there'll be a goal champion to monitor and follow up.



[1] 
https://governance.openstack.org/tc/resolutions/20180529-python2-deprecation-timeline.html 

[2] 
https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions 




_

Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-24 Thread melanie witt

On Tue, 23 Oct 2018 10:01:42 -0400, Jon Bernard wrote:

* melanie witt  wrote:

On Mon, 22 Oct 2018 11:45:55 +0800 (GMT+08:00), Boxiang Zhu wrote:

I created a new vm and a new volume with type 'ceph' [so the volume
will be created on one of the two hosts; I assume the volume was created on
host dev@rbd-1#ceph this time]. The next step is to attach the volume to the
vm. Finally, I want to migrate the volume from host dev@rbd-1#ceph to
host dev@rbd-2#ceph, but it failed with the exception
'NotImplementedError(_("Swap only supports host devices")'.

So my real problem is: is there any work to migrate a
volume (*in-use*, *ceph rbd*) from one host (pool) to another host (pool)
in the same ceph cluster?
The difference between the spec [2] and my scope is only that one is
*available* (the spec) and mine is *in-use* (my scope).


[1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
[2] https://review.openstack.org/#/c/296150


Ah, I think I understand now, thank you for providing all of those details.
And I think you explained it in your first email, that cinder supports
migration of ceph volumes if they are 'available' but not if they are
'in-use'. Apologies that I didn't get your meaning the first time.

I see now the code you were referring to is this [3]:

if volume.status not in ('available', 'retyping', 'maintenance'):
    LOG.debug('Only available volumes can be migrated using backend '
              'assisted migration. Falling back to generic migration.')
    return refuse_to_migrate

So because your volume is not 'available', 'retyping', or 'maintenance',
it's falling back to generic migration, which will end up with an error in
nova because the source_path is not set in the volume config.

Can anyone from the cinder team chime in about whether the ceph volume
migration could be expanded to allow migration of 'in-use' volumes? Is there
a reason not to allow migration of 'in-use' volumes?


Generally speaking, Nova must facilitate the migration of a live (or
in-use) volume.  A volume attached to a running instance requires code
in the I/O path to correctly route traffic to the correct location - so
Cinder must refuse (or defer) a migrate operation if the volume is
attached.  Until somewhat recently Qemu and Libvirt did not support the
migration to non-block (RBD) targets which is the reason for lack of
support.  I believe we now have all of the pieces to perform this
operation successfully, but I suspect it will require a setup with
correct versions of all the related software.  I will try to verify this
during the current release cycle and report back.


OK, thanks for this info, Jon. I'll be interested in your findings.

Cheers,
-melanie






Re: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?

2018-10-24 Thread Matt Riedemann

On 10/24/2018 10:10 AM, Jay Pipes wrote:
I'd like to propose deprecating this API and getting rid of this 
functionality since it conflicts with the new Keystone /limits endpoint, 
is highly coupled with RAX's turnstile middleware and I can't seem to 
find anyone who has ever used it. Deprecating this API and functionality 
would make the transition to a saner quota management system much easier 
and straightforward.


I was trying to do this before it was cool:

https://review.openstack.org/#/c/411035/

I think it was the Pike PTG in ATL where people said, "meh, let's just 
wait for unified limits from keystone and let this rot on the vine".


I'd be happy to restore and update that spec.

--

Thanks,

Matt



Re: [openstack-dev] [all] [tc] [api] Paste Maintenance

2018-10-24 Thread Chris Dent

On Wed, 24 Oct 2018, Jean-Philippe Evrard wrote:


On Mon, 2018-10-22 at 07:50 -0700, Morgan Fainberg wrote:

Also, doesn't bitbucket have a git interface now too (optionally)?


It does :)
But I think it requires a new repo, so it means that could as well move
to somewhere else like github or openstack infra :p


Right, so that combined with bitbucket oozing surveys and assorted
other annoyances over me has meant that I've moved paste to github:

https://github.com/cdent/paste

I merged some of the outstanding patches, forced Zane to fix up a few
more Python 3.7 related things, fixed up some of the docs and
released a new version (3.0.0) to pypi:

https://pypi.org/p/Paste

And I published the docs (linked from the new release and the repo) to
a new URL on RTD, as older versions of the docs were not something I
was able to adopt:

https://pythonpaste.readthedocs.io

And some travis-ci stuff.

I didn't bother to bring Paste into OpenDev infra because that felt
like it would indicate a longer and more engaged commitment than
responses here suggested should happen. We want to
encourage migration away. As Morgan stated elsewhere in the thread [1]
work is in progress to make using something else easier for people.

If you want to help with Paste, make some issues and pull requests
in the repo above. Thanks.

Next step? paste.deploy (which is a separate repo).

[1] http://lists.openstack.org/pipermail/openstack-dev/2018-October/135937.html

--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent tw: @anticdent


Re: [openstack-dev] [ClusterLabs Developers] [HA] future of OpenStack OCF resource agents (was: resource-agents v4.2.0)

2018-10-24 Thread Tim Bell
Adam,

Personally, I would prefer the approach where the OpenStack resource agents are 
part of the repository in which they are used. This is also the approach taken 
in other open source projects such as Kubernetes and avoids the inconsistency 
where, for example, Azure resource agents are in the Cluster Labs repository 
but OpenStack ones are not. This can mean that people miss that there is OpenStack 
integration available.

This does not reflect, in any way, the excellent efforts and results made so 
far. I don't think it would negate the possibility to include testing in the 
OpenStack gate since there are other examples where code is pulled in from 
other sources. 

Tim

-Original Message-
From: Adam Spiers 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 24 October 2018 at 14:29
To: "develop...@clusterlabs.org" , openstack-dev 
mailing list 
Subject: Re: [openstack-dev] [ClusterLabs Developers] [HA] future of OpenStack 
OCF resource agents (was: resource-agents v4.2.0)

[cross-posting to openstack-dev]

Oyvind Albrigtsen  wrote:
>ClusterLabs is happy to announce resource-agents v4.2.0.
>Source code is available at:
>https://github.com/ClusterLabs/resource-agents/releases/tag/v4.2.0
>
>The most significant enhancements in this release are:
>- new resource agents:

[snipped]

> - openstack-cinder-volume
> - openstack-floating-ip
> - openstack-info

That's an interesting development.

By popular demand from the community, in Oct 2015 the canonical
location for OpenStack-specific resource agents became:

https://git.openstack.org/cgit/openstack/openstack-resource-agents/

as announced here:


http://lists.openstack.org/pipermail/openstack-dev/2015-October/077601.html

However I have to admit I have done a terrible job of maintaining it
since then.  Since OpenStack RAs are now beginning to creep into
ClusterLabs/resource-agents, now seems a good time to revisit this and
decide a coherent strategy.  I'm not religious either way, although I
do have a fairly strong preference for picking one strategy which both
ClusterLabs and OpenStack communities can align on, so that all
OpenStack RAs are in a single place.

I'll kick the bikeshedding off:

Pros of hosting OpenStack RAs on ClusterLabs


- ClusterLabs developers get the GitHub code review and Travis CI
  experience they expect.

- Receive all the same maintenance attention as other RAs - any
  changes to coding style, utility libraries, Pacemaker APIs,
  refactorings etc. which apply to all RAs would automatically
  get applied to the OpenStack RAs too.

- Documentation gets built in the same way as other RAs.

- Unit tests get run in the same way as other RAs (although does
  ocf-tester even get run by the CI currently?)

- Doesn't get maintained by me ;-)

Pros of hosting OpenStack RAs on OpenStack infrastructure
-

- OpenStack developers get the Gerrit code review and Zuul CI
  experience they expect.

- Releases and stable/foo branches could be made to align with
  OpenStack releases (..., Queens, Rocky, Stein, T(rains?)...)

- Automated testing could in the future spin up a full cloud
  and do integration tests by simulating failure scenarios,
  as discussed here:

  https://storyboard.openstack.org/#!/story/2002129

  That said, that is still very much work in progress, so
  it remains to be seen when that could come to fruition.

No doubt I've missed some pros and cons here.  At this point
personally I'm slightly leaning towards keeping them in the
openstack-resource-agents - but that's assuming I can either hand off
maintainership to someone with more time, or somehow find the time
myself to do a better job.

What does everyone else think?  All opinions are very welcome,
obviously.



[openstack-dev] Use of capslock on the mailing lists

2018-10-24 Thread Anita Kuno

Hello Gentle Reader:

I'm writing to share my thoughts on how I feel when I open my inbox on 
my account subscribed to OpenStack mailing lists.


I've been subscribed to various lists for some time and have 
accommodated my consumption style to suit the broadcast nature of the 
specific lists; use of filters etcetera.


I have noticed a new habit on some of the mailing lists and I find the 
effect of it to feel rather aggressive to me.


I am used to copious amounts of emails and it is my responsibility as 
consumer to filter out and reply to the ones that affect me.


I'm not comfortable with the recent trend of using capslock. I'm feeling 
yelled at by my inbox. This is having the effect of me giving as little 
attention as possible to anyone using capslock.


I wanted to give capslock aficionados that feedback. If you are using 
it to aggressively distance yourself from me as a consumer, it is highly 
successful.


Thank you for reading,
Anita



[openstack-dev] Registration Prices Increase Today - OpenStack Summit Berlin

2018-10-24 Thread Kendall Waters
Hi everyone,

Friendly reminder that the ticket price for the OpenStack Summit Berlin 
increases today, October 24 at 11:59pm PDT (October 25 at 6:59 UTC). Also, ALL 
registration codes (sponsor, speaker, ATC, AUC) will expire on November 2.

Register now before the price increases!

Once you have registered, make sure to download the mobile app and plan your 
personal Summit schedule. Don’t forget to RSVP to intensive trainings as this 
is the only way you will be guaranteed a spot in the room!

If you have any Summit related questions, please email sum...@openstack.org.

Cheers,
Kendall

Kendall Waters
OpenStack Marketing & Events
kend...@openstack.org





[openstack-dev] [karbor][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Karbor team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Karbor is one of the most difficult projects when it comes to describing 
where it fits in the design goals, which may be an indication that we're 
missing something from the vision about the role OpenStack has to play 
in data protection. If that's the case, I'd be very interested in 
hearing what you think that should look like. For now perhaps the 
closest match is with the 'Basic Data Center Management' goal, since 
Karbor is an abstraction for its various plugins, some of which must 
interact with the physical data center to accomplish their work.


Of the other sections, the Interoperability one is probably worth paying 
attention to. Any project which provides access to a lot of different 
vendor plugins always have to balance the desire to expose as much 
functionaility as possible with the need to ensure that applications can 
be ported between OpenStack clouds running different sets of plugins. 
OpenStack places a high value on interoperability, so this is something 
to keep in mind when designing.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.



[openstack-dev] [monasca][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Monasca team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Monasca is a project that has both user-facing and operator-facing 
functions, so it straddles the border of the scope of the vision 
document (which, to be clear, is not the same as the scope of OpenStack 
itself). The user-facing part is covered by the vision, and would 
probably fit under the 'Customisable Integration' design goal. I think 
the design principle for Monasca to be aware of here, as I mentioned at 
the PTG, is that alarms should work in such a way that it is up to the 
user where to direct them - it could be autoscaling in Heat, 
autoscaling in Senlin, or something else that is completely 
application-specific.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [searchlight][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Searchlight team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Searchlight is one of the trickier projects to categorise. It's 
difficult to point to any of the listed 'Design Goals' in the document 
and say that Searchlight is contributing directly, although it does 
contribute a search capability to Horizon, so arguably you could say it's 
a part of the GUI goal. But I think it is definitely contributing 
indirectly by helping the projects that do fulfill those design goals to 
better meet the requirements laid out in the preceding sections - in 
particular the one about Application Control. As such, I don't think 
there's any danger of this document appearing to exclude Searchlight 
from OpenStack, but it might be the case that we can learn from 
Searchlight and document more explicitly the things that it brings to 
the table as things that OpenStack should be striving for. I'd be 
interested in your thoughts on whether anything is missing.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?

2018-10-24 Thread Eric Fried
Forwarding to openstack-operators per Jay.

On 10/24/18 10:10, Jay Pipes wrote:
> Nova's API has the ability to create "quota classes", which are
> basically limits for a set of resource types. There is something called
> the "default quota class" which corresponds to the limits in the
> CONF.quota section. Quota classes are basically templates of limits to
> be applied if the calling project doesn't have any stored
> project-specific limits.
> 
> Has anyone ever created a quota class that is different from "default"?
> 
> I'd like to propose deprecating this API and getting rid of this
> functionality since it conflicts with the new Keystone /limits endpoint,
> is highly coupled with RAX's turnstile middleware, and I can't seem to
> find anyone who has ever used it. Deprecating this API and functionality
> would make the transition to a saner quota management system much easier
> and more straightforward.
> 
> Also, I'm apparently blocked now from the operators ML so could someone
> please forward this there?
> 
> Thanks,
> -jay
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [barbican][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Barbican team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Barbican provides an abstraction over HSMs and software equivalents 
(like Vault), so the immediate design goal that it meets is the 
'Hardware Virtualisation' one. However, the most interesting part of the 
document for the Barbican team is probably the section on cross-project 
dependencies. In discussions at the PTG, the TC concluded that we 
shouldn't force projects to adopt hard dependencies on other services 
(like Barbican), but recommend that they do so when there is a benefit 
to the user. The challenge here, I think, is that security-sensitive 
code such as secret storage is exactly the kind of thing where avoiding 
duplication is of great benefit to the user, yet taking a shortcut is 
highly tempting. Your feedback on whether we have got the right 
balance is important.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [glance][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Glance team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


There's not a lot to say about Glance specifically in the document. 
Obviously a disk image management service is a fairly fundamental 
component of 'Basic Physical Data Center Management', so it certainly 
fits with the vision.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [keystone][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Keystone team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Identity management is specifically called out as a key aspect of the 
'Basic Physical Data Center Management' design goal, so obviously 
Keystone fits in there. However, there are other parts of the document 
that can also help provide guidance. One is the last paragraph of the 
'Customisable Integration' goal, which talks about which combinations of 
interactions need to be possible (needs that are currently met by a 
combination of application credentials and trusts), and the importance 
of least-privilege access and credential rotation. Another is the 
section on 'Application Control'. All of this is stuff we have talked 
about in the past so there should be no surprises, but hopefully this 
helps situate it all in the context of the bigger picture.
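
To make the application-credential part of that concrete, here is a 
minimal sketch (endpoint, IDs and secret are all placeholders) of an 
application authenticating via keystoneauth1 with an application 
credential - the mechanism behind least-privilege access and painless 
rotation:

    # Hedged sketch: authenticate with a pre-created application
    # credential via keystoneauth1. All values are placeholders.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.ApplicationCredential(
        auth_url="https://keystone.example.com/v3",     # placeholder
        application_credential_id="<credential-id>",    # placeholder
        application_credential_secret="<secret>",       # placeholder
    )
    sess = session.Session(auth=auth)
    # The resulting session carries only the roles delegated to the
    # credential (least privilege); rotation means creating a new
    # credential, switching the application over, then deleting the
    # old one.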


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cinder][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Cinder team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Clearly Cinder is an integral part of meeting the 'Basic Physical Data 
Center Management' design goal, and also contributes to the 'Hardware 
Virtualisation' goal.


The last paragraph in the 'Plays Well With Others' goal, about providing 
a standalone backend abstraction layer independently of the higher-level 
API (which might include e.g. scheduling and integration with other 
OpenStack services), was added with Cinder in mind, as I know this 
is something the Cinder community has discussed, and it might also be 
applicable to other projects. Of course this is by no means mandatory, 
but it might be an interesting area to continue exploring.


The Partitioning section highlights the known mismatch between the 
concept of Availability Zones as borrowed from other clouds and the way 
operators use OpenStack, and offers a long-term design direction that 
Cinder might want to pursue in conjunction with Nova.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [manila][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Manila team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


I think that, like Cinder, Manila would qualify as contributing to the 
'Basic Physical Data Center Management' goal, since it also allows users 
to access external storage providers through a standardised API.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Swift team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


The vision puts Swift firmly in scope as the provider of 'Infinite, 
Continuous Scaling' for data storage. And of course Swift is also part of 
the 'Built-in Reliability and Durability' goal, since it provides 
extremely durable storage and spreads the cost across multiple tenants. 
This is clearly a critical aspect of any cloud, and I'm hopeful this 
exercise will help put to rest a lot of the pointless speculation about 
whether Swift 'really' belongs in OpenStack.


I know y'all have a very data-centric viewpoint on cloud that is 
probably unique in the OpenStack community, so I'm particularly 
interested in any insights you might have to offer on the vision as a 
whole from that perspective.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [cyborg][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Cyborg team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Cyborg is very obviously a major contributor to the 'Hardware 
Virtualisation' design goal. There's no attempt to make an exhaustive 
list of the types of hardware we want to virtualise, but if anything is 
obviously missing then please suggest it.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Ironic team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


I'd say that Ironic definitely contributes to the 'Basic Physical Data 
Center Management' goal, since it manages physical resources in the data 
center and allows users to access them.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][limits] Does ANYONE at all use the quota class functionality in Nova?

2018-10-24 Thread Jay Pipes
Nova's API has the ability to create "quota classes", which are 
basically limits for a set of resource types. There is something called 
the "default quota class" which corresponds to the limits in the 
CONF.quota section. Quota classes are basically templates of limits to 
be applied if the calling project doesn't have any stored 
project-specific limits.
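
For illustration, a minimal sketch of what this looks like from a 
client's point of view, calling the os-quota-class-sets API directly 
(endpoint, token, and the 'gold' class name are placeholders):

    # Hedged sketch: read the 'default' quota class, whose values mirror
    # the CONF.quota limits, then define a non-default class - the
    # rarely-used functionality discussed here.
    import requests

    NOVA = "https://nova.example.com/v2.1"            # placeholder
    HEADERS = {
        "X-Auth-Token": "<keystone-token>",           # placeholder
        "Content-Type": "application/json",
    }

    # GET returns the template limits applied to projects that have no
    # stored project-specific limits.
    default = requests.get(
        f"{NOVA}/os-quota-class-sets/default", headers=HEADERS
    ).json()

    # PUT creates or updates a non-default class.
    updated = requests.put(
        f"{NOVA}/os-quota-class-sets/gold",           # hypothetical class
        headers=HEADERS,
        json={"quota_class_set": {"instances": 50, "cores": 200}},
    ).json()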


Has anyone ever created a quota class that is different from "default"?

I'd like to propose deprecating this API and getting rid of this 
functionality since it conflicts with the new Keystone /limits endpoint, 
is highly coupled with RAX's turnstile middleware, and I can't seem to 
find anyone who has ever used it. Deprecating this API and functionality 
would make the transition to a saner quota management system much easier 
and more straightforward.


Also, I'm apparently blocked now from the operators ML so could someone 
please forward this there?


Thanks,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [designate][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Designate team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


I wrote DNS in as a late addition to the list of systems OpenStack needs 
to interface with for the 'Basic Physical Data Center Management' goal, 
because on reflection it seems essential to any basic physical data 
center that things outside the data center have some way of addressing 
resources running within it. If there's a more generic way of expressing 
that, or if you think Designate would be a better fit with some other 
design goal (whether it's already on the list or not), please let us know.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [octavia][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Octavia team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


I think the main design goal that applies to Octavia is the 'Hardware 
Virtualisation' one, since Octavia provides an API and abstraction layer 
over hardware (and software) load balancers. The 'Customisable 
Integration' goal plays a role too though, because even when a software 
load balancer is used, one advantage of having an OpenStack API for it 
is to allow integration with other OpenStack services (like autoscaling).


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Neutron team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Neutron pretty obviously falls under the goals of 'Basic Physical Data 
Center Management' and 'Hardware Virtualisation'.


The last paragraph of the 'Plays Well With Others' design goal (about 
offering standalone layers) was prompted by discussions in Cinder. My 
sense is that this is less relevant to Neutron because of the existence 
of OpenDaylight, but it might be something to pay particular attention 
to when reviewing the document.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [qinling][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Qinling team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Qinling offers perhaps the ultimate in 'Infinite, Continuous Scaling' 
for compute resources, by providing extremely fine-grained variation in 
the capacity utilized; by not reserving any capacity at all but sharing 
it in real time across tenants; and by at least in principle not having 
an upper bound for how big an application can scale without modifying 
its architecture. It also 'Plays Well With Others' by tightly 
integrating the backend components of a FaaS into OpenStack.


Qinling also has a role to play in the 'Customisable Integration' goal, 
since it offers a way for application developers to deploy some glue 
logic in the cloud itself without needing to either pre-allocate a chunk 
of resources (i.e. a VM) to it or to host it outside of the cloud.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Nova team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


The 'Basic Physical Data Center Management' goal was written to 
acknowledge Nova's central role in OpenStack, and emphasize that 
OpenStack differs from projects like Kubernetes in that we don't expect 
something else to manage the physical data center for us; we expect 
OpenStack to be the thing that does that for other projects. Obviously 
Nova is also covered by the 'Hardware Virtualisation' design goal.


The last paragraph of the 'Plays Well With Others' design goal was 
prompted by discussions in Cinder. I don't think the topic of other 
systems using parts of Nova standalone has ever really come up, but if 
it did, this might be somewhere to look for guidance. (Note that it's 
phrased as completely optional.)


A couple of the other sections are also (I think) worthy of close 
attention. The principles in the 'Application Control' section of the 
cloud pillars remain important. Nova is a bit unusual in that there are 
a number of auxiliary services that provide functionality here (I'm 
thinking of e.g. Masakari) - which is good, but it means more things to 
think about: not only whether any given functionality is needed, but 
whether it is best provided by Nova or some other project, and, if the 
latter, how Nova can provide affordances for that project to integrate 
with it.


The Partitioning section was suggested by Jay. It highlights the known 
mismatch between the concept of Availability Zones as borrowed from 
other clouds and the way operators use OpenStack, and offers a long-term 
design direction without being prescriptive.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [zun][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Zun team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Zun seems to fit nicely with the 'Infinite, Continuous Scaling' design 
goal, since it allows users to scale their applications and share 
physical resources at a more fine-grained level than a VM. I'm not 
actually up to date with the details under the hood, but from reading 
the docs it looks like it would also be doing 'Basic Physical Data 
Center Management' - effectively doing what Nova does, except with 
containers instead of VMs. And the future plans to integrate with 
Kubernetes also fit with the 'Plays Well With Others' design goal. I'm 
looking forward 
to your feedback on all of those areas, and I hope that the rest of the 
principles articulated in the vision will prove helpful to you as you 
make design decisions.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [zaqar][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Zaqar team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


The two design goals that Zaqar contributes to are 'Infinite, Continuous 
Scaling' and 'Built-in Reliability and Durability'. It allows 
application developers to do asynchronous messaging and have the scaling 
handled by the cloud, so they can send as many or as few messages as 
they need without having to scale in VM-sized chunks. And it offers 
reliable at-least-once delivery, so application developers can rely on 
the cloud to provide that, simplifying the fault tolerance requirements 
for the rest of the application.
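
As a rough sketch of that flow against Zaqar's HTTP API (endpoint, 
token, and queue name are placeholders): a message is only removed once 
the consumer explicitly deletes it, which is where the at-least-once 
guarantee comes from.

    import uuid
    import requests

    BASE = "https://zaqar.example.com"                # placeholder
    HEADERS = {
        "X-Auth-Token": "<keystone-token>",           # placeholder
        "Client-ID": str(uuid.uuid4()),               # required by Zaqar
        "Content-Type": "application/json",
    }

    # Producer: post as many or as few messages as needed.
    requests.post(
        f"{BASE}/v2/queues/jobs/messages", headers=HEADERS,
        json={"messages": [{"body": {"event": "backup.start"},
                            "ttl": 300}]},
    )

    # Consumer: claim messages for 60 seconds; anything not deleted
    # before the claim expires is redelivered - at least once.
    resp = requests.post(
        f"{BASE}/v2/queues/jobs/claims", headers=HEADERS,
        json={"ttl": 60, "grace": 60},
    )
    for msg in (resp.json() if resp.status_code == 201 else []):
        # ... process msg["body"] here ...
        requests.delete(BASE + msg["href"], headers=HEADERS)  # ack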


Of course Zaqar can also fulfill a valuable role carrying messages from 
the OpenStack services to the application. This capability will be 
critical to achieving the ideals outlined in the 'Application Control' 
section, since delivery of event notifications from the cloud services 
to the application should be both asynchronous (the cloud can't wait for 
a user application) and reliable (so some sort of queuing is required).


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [blazar][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Blazar team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Blazar is one of the most interesting projects when it comes to defining 
a vision for OpenStack clouds, because it has a really well-defined set 
of goals around energy efficiency and capacity planning that we've so 
far failed to capture in the document. In the 'Self-Service' section we 
talk about aligning user charges with operators' opportunity costs, 
which hints at the leasing concept but seems incomplete without a 
discussion about capacity planning. Similarly, we talk in various places 
about reducing costs to users by sharing resources across tenants, but 
not about how to physically pack those resources to minimise the costs 
to operators. I would really value the Blazar team's input on where and 
how best to introduce these concepts into the vision.


As far as what we have already goes, I think the compute host 
reservation part of Blazar definitely qualifies as part of 'Basic 
Physical Data Center Management' since it's about optimally managing 
physical resources in the data center. Arguably the VM reservation part 
could too, as something that effectively augments Nova, but it's more of 
a stretch, which makes me wonder if there's something missing.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [senlin][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Senlin team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


As with Heat, I think the most important of the design goals for Senlin 
is the Customisable Integration one. Senlin is already designed around 
this concept, with Receivers that have webhook URLs allowing users to 
wire alarms from any source together with autoscaling in whatever way 
they like. However, even more important than that is the way that Senlin 
helps the other services deliver on the 'Application Control' pillar, by 
helping applications manage their own infrastructure from within the 
cloud itself.
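
As a tiny illustration of how little the calling side needs to know, here
is a sketch of firing a receiver from Python (the pre-signed URL below is
hypothetical; in practice it comes from the 'channel' section of the
created receiver):

import requests

# Hypothetical pre-signed webhook URL from a Senlin receiver's
# 'channel' output; the query string authenticates the caller.
alarm_url = ("https://senlin.example.com:8778/v1/webhooks/"
             "<receiver-id>/trigger?V=2")

# Any alarm source - Aodh, Monasca, or the application itself -
# can fire the scaling action with a bare POST.
requests.post(alarm_url).raise_for_status()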


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [heat][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Heat team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


I think the most relevant design goal here for Heat is the one on 
Customisable Integration. This definitely has implications for how Heat 
designs things - for example, Heat follows these guidelines with its 
autoscaling implementation, by providing a webhook URL that can be used 
for scaling up and down and allowing users to wire it to either Aodh, 
Monasca, or some other thing (possibly of their own design). But beyond 
that, Heat is the service that actually provides the wiring, not only 
for itself but for all of OpenStack. When users want to connect 
resources from different services together, much of the time they'll be 
doing so using the declarative model of a Heat template.
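
To make that wiring concrete, here is a minimal sketch of the relevant
declarative fragment (resource and group names are invented; the
properties follow Heat's OS::Heat::ScalingPolicy resource type),
generated from Python just to keep the example self-contained:

import yaml  # PyYAML

# A scaling policy exposes an alarm_url attribute once created; Aodh,
# Monasca, or custom application code can then POST to it to scale.
template = {
    "heat_template_version": "2018-08-31",
    "resources": {
        "scale_out_policy": {
            "type": "OS::Heat::ScalingPolicy",
            "properties": {
                "adjustment_type": "change_in_capacity",
                "scaling_adjustment": 1,
                "cooldown": 60,
                # refers to a hypothetical OS::Heat::AutoScalingGroup
                # defined elsewhere in the same template
                "auto_scaling_group_id": {"get_resource": "asg"},
            },
        },
    },
}

print(yaml.safe_dump(template, sort_keys=False))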


The sections on Interoperability and Bidirectional Compatibility should 
also be important considerations when making design decisions, since 
Heat templates should help provide interoperability across clouds. The 
Cross-Project Dependencies section is also likely of interest, since 
several projects rely on Heat, and in fact in the distant past the TC 
used to require this, but that is no longer the case either in practice 
or in the document as proposed. Finally, the section on Application 
Control mentions the importance of allowing applications to authenticate 
securely to the cloud, which is something Heat has put a lot of work 
into and run into a lot of problems with. My hope is that this document 
will help to spread that focus further in other parts of OpenStack so 
that this kind of thing gets easier over time.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [telemetry][aodh][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Telemetry team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


The scope of the document (which doesn't attempt to cover the whole 
scope of OpenStack) is user-facing services, so within the Telemetry 
stable, I think that means mostly just Aodh at this point? The most 
relevant design goal is probably 'Customisable Integration'. This 
section emphasises the importance of allowing users to connect alarms to 
whatever they wish - from other OpenStack services to something 
application-specific. With its support for arbitrary webhooks and 
optional trust-token authentication on outgoing alarms, Aodh is already 
doing a very good job with this.
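
Because an alarm action is simply an HTTP POST to a URL of the user's
choosing, the receiving end can be anything at all; a minimal
application-side sketch using only the Python standard library (host,
port and handling are arbitrary):

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class AlarmHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Aodh delivers the alarm state transition as a JSON body.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        print("alarm notification:", payload)
        self.send_response(200)
        self.end_headers()

HTTPServer(("0.0.0.0", 8080), AlarmHandler).serve_forever()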


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Mistral team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


I see Mistral contributing to two of the design goals. First, it helps 
with Customisable Integration by enabling application developers to 
incorporate glue logic between cloud services or between the application 
and cloud services, and host it in the cloud without the need to 
pre-allocate a VM for it. Secondly, it also contributes to the Built-in 
Reliability and Durability goal by providing applications with a 
highly-reliable way of maintaining workflow state without the need for 
the application itself to do it.
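
For instance, an application can start a workflow and walk away, letting
Mistral hold the state; a rough sketch against the v2 REST API (endpoint,
token and workflow name are all hypothetical):

import json
import requests

MISTRAL = "https://mistral.example.com:8989/v2"   # hypothetical endpoint
HEADERS = {"X-Auth-Token": "<keystone-token>",    # hypothetical token
           "Content-Type": "application/json"}

# Start an execution; Mistral persists the workflow state server-side,
# so no VM or long-running process is needed on the application side.
resp = requests.post(MISTRAL + "/executions", headers=HEADERS,
                     data=json.dumps({
                         "workflow_name": "my_app.cleanup",  # hypothetical
                         "input": json.dumps({"instance_id": "<uuid>"}),
                     }))
resp.raise_for_status()
execution_id = resp.json()["id"]

# Check on it later; the state moves through RUNNING to SUCCESS/ERROR.
print(requests.get(MISTRAL + "/executions/" + execution_id,
                   headers=HEADERS).json()["state"])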


The sections on Bidirectional Compatibility and Interoperability will 
probably be relevant to design decisions in Mistral, since workbooks are 
one of the artifact types that I'd expect to help with interoperability 
across clouds. The Cross-Project Dependencies section may also be of 
special interest to review, since Mistral is a service that many other 
OpenStack services could potentially rely on.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [masakari][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Masakari team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


In my view, Masakari's role in terms of the design goals is to augment 
Nova (which obviously fits in the Basic Physical Data Center Management 
and Hardware Virtualisation goals) to improve its compliance with the 
section on Application Control of the infrastructure. Without Masakari 
there's no good way for an application to be notified about events like 
failure of a VM or hypervisor, and no way to perform some of the 
recovery actions.


The section on Customisable Integration states that we place a lot of 
value on allowing users and applications to configure how they want to 
handle events (including events like failures) rather than acting 
automatically, because every application's requirements are unique. This 
is probably going to be a valuable thing to keep in mind when making 
design decisions in Masakari.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [solum][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Solum team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


As I understand it, Solum's goal is to provide native OpenStack 
integration for PaaS layers, so it would be covered by the 'Plays Well 
With Others' design goal.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [freezer][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Freezer team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


For the purposes of this document we can largely ignore the Freezer 
guest agent, because we're only looking at cloud services. (To be clear, 
this doesn't mean the guest agent is outside the scope of OpenStack, 
just that it doesn't need to be covered by the vision document.) It 
appears to me that the Freezer API is targeting the 'Built-in 
Reliability and Durability' design goal: it provides a way to e.g. 
reliably trigger and generally manage the backup process, and by making 
it a cloud service the cost of providing that can be spread across 
multiple tenants. But it may be that we should also say something more 
specific about OpenStack's role in data protection. Perhaps y'all could 
work with the Karbor team to figure out what that should be.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [murano][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Murano team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


To be honest, nothing in the document we have so far really captures the 
scope and ambition of the original vision behind Murano. You could say 
that it fulfils a similar role to Heat in meeting the Customisable 
Integration goal by being one of the components that users can use to 
wire the various services that OpenStack offers together into a coherent 
application, and functionally that would be a pretty accurate 
description. But nothing in there suggests that we want OpenStack to 
produce a standard packaging format for cloud application components or 
a marketplace where they can be published. Is that still part of the 
vision for Murano after the closure of the application catalog? Is it 
something that should be explicitly part of the vision for OpenStack 
clouds? If so, what should that look like?


The sections on Interoperability and Bidirectional Compatibility 
formalise what are already important design considerations for Murano, 
since one goal of its packaging format is obviously to provide 
interoperability across clouds.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [sahara][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Sahara team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


I wrote the 'Abstract Specialised Operations' design goal specifically 
to cover Sahara and Trove. (As you can see, I was really 
struggling to find a good, generic name for the principle; better 
suggestions are welcome.) I think this is a decent explanation for why 
Hadoop-as-a-Service should be in OpenStack, but I am by no means an 
expert so I would really like to hear the Sahara team's perspective on it.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Horizon team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


We have said that other kinds of user interface, e.g. the CLI, are out 
of scope for the document (though not for OpenStack, of course!). After 
some discussion, we decided that Horizon being a service was more 
important to its categorisation than it being a user interface, so I 
wrote the Graphical User Interface design goal to ensure that it is 
covered. However, I'm sure y'all have spent much more time thinking 
about what Horizon contributes to OpenStack than I, so your feedback and 
suggestions are needed.


That is not the only way in which I think this document is relevant to 
the Horizon team: one of my goals with the exercise is to encourage the 
service projects to make sure their APIs make all of the 
operationally-relevant information available and legible to 
applications. That would include e.g. surfacing events, which I know is 
something that Horizon has wanted for a long time, and hopefully this 
will lead to easier ways to build a GUI without as much polling.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [trove][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Trove team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


I wrote the 'Abstract Specialised Operations' design goal specifically 
to cover Trove (and Sahara). (As you can see, I was really struggling 
to find a good, generic name for the principle; better suggestions are 
welcome.) This is the best explanation I could think of for why it's 
important to have a DBaaS in OpenStack, even if it 
only scales at a coarse granularity (as opposed to a DynamoDB-style 
service like MagnetoDB was, which would be a natural fit for the 
'Infinite, Continuous Scaling' design goal). However, the Trove team 
might well have a different perspective on why Trove is important to 
OpenStack, so I would very much like to hear your feedback and suggestions.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [magnum][tc] Seeking feedback on the OpenStack cloud vision

2018-10-24 Thread Zane Bitter

Greetings, Magnum team!
As you may be aware, I've been working with other folks in the community 
on documenting a vision for OpenStack clouds (formerly known as the 
'Technical Vision') - essentially to interpret the mission statement in 
long-form, in a way that we can use to actually help guide decisions. 
You can read the latest draft here: https://review.openstack.org/592205


We're trying to get feedback from as many people as possible - in many 
ways the value is in the process of coming together to figure out what 
we're trying to achieve as a community with OpenStack and how we can 
work together to build it. The document is there to help us remember 
what we decided so we don't have to do it all again over and over.


The vision is structured with two sections that apply broadly to every 
project in OpenStack - describing the principles that we believe are 
essential to every cloud, and the ones that make OpenStack different 
from some other clouds. The third section is a list of design goals that 
we want OpenStack as a whole to be able to meet - ideally each project 
would be contributing toward one or more of these design goals.


Magnum would fall under the 'Plays Well With Others' design goal, as 
it's one way of integrating OpenStack with Kubernetes, ensuring that 
OpenStack users have access to container orchestration tools. And it's 
also an example (along with Sahara and Trove) of the 'Abstract 
Specialised Operations' goal, since it allows operators to have a 
centralised team of Kubernetes cluster operators to serve multiple tenants.


If you would like me or another TC member to join one of your team IRC 
meetings to discuss further what the vision means for your team, please 
reply to this thread to set it up. You are also welcome to bring up any 
questions in the TC IRC channel, #openstack-tc - there's more of us 
around during Office Hours 
(https://governance.openstack.org/tc/#office-hours), but you can talk to 
us at any time.


Feedback can also happen either in this thread or on the review 
https://review.openstack.org/592205


If the team is generally happy with the vision as it is and doesn't have 
any specific feedback, that's cool but I'd like to request that at least 
the PTL leave a vote on the review. It's important to know whether we 
are actually developing a consensus in the community or just talking to 
ourselves :)


many thanks,
Zane.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO][OVN] Switching the default network backend to ML2/OVN

2018-10-24 Thread Daniel Alvarez Sanchez
Hi Stackers!

The purpose of this email is to share with the community the intention
of switching the default network backend in TripleO from ML2/OVS to
ML2/OVN by changing the mechanism driver from openvswitch to ovn. This
doesn’t mean that ML2/OVS will be dropped but users deploying
OpenStack without explicitly specifying a network driver will get
ML2/OVN by default.
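
(For operators who want to confirm which backend a given deployment
ended up with, the mechanism driver is visible in the ML2 plugin
configuration; a quick sketch, assuming the conventional config path -
installers may render it elsewhere:)

import configparser

cfg = configparser.ConfigParser()
cfg.read("/etc/neutron/plugins/ml2/ml2_conf.ini")

drivers = cfg.get("ml2", "mechanism_drivers", fallback="")
print("mechanism_drivers =", drivers)
print("OVN" if "ovn" in [d.strip() for d in drivers.split(",")]
      else "not OVN")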

OVN in Short
============

Open Virtual Network is managed under the OVS project, and was created
by the original authors of OVS. It is an attempt to re-do the ML2/OVS
control plane, using lessons learned throughout the years. It is
intended to be used in projects such as OpenStack and Kubernetes. OVN
has a different architecture, moving us away from Python agents
communicating with the Neutron API service via RabbitMQ to daemons
written in C communicating via OpenFlow and OVSDB.

OVN is built with a modern architecture that offers better foundations
for a simpler and more performant solution. What does this mean? For
example, at Red Hat we executed some preliminary testing during the
Queens cycle and found significant CPU savings due to OVN not using
RabbitMQ (CPU utilization during a Rally scenario using ML2/OVS [0] or
ML2/OVN [1]). Also, we tested API performance and found out that most
of the operations are significantly faster with ML2/OVN. Please see
more details in the FAQ section.

Here’s a few useful links about OpenStack’s integration of OVN:

* OpenStack Boston Summit talk on OVN [2]
* OpenStack networking-ovn documentation [3]
* OpenStack networking-ovn code repository [4]

How?
====

The goal is to merge this patch [5] during the Stein cycle which
pursues the following actions:

1. Switch the default mechanism driver from openvswitch to ovn.
2. Adapt all jobs so that they use ML2/OVN as the network backend.
3. Create legacy environment file for ML2/OVS to allow deployments based on it.
4. Flip scenario007 job from ML2/OVN to ML2/OVS so that we continue testing it.
5. Continue using ML2/OVS in the undercloud.
6. Ensure that updates/upgrades from ML2/OVS don’t break and don’t
switch automatically to the new default. As some parity gaps exist
right now, we don’t want to change the network backend automatically.
Instead, if the user wants to migrate from ML2/OVS to ML2/OVN, we’ll
provide an Ansible-based tool that will perform the operation.
More info and code at [6].

Reviews, comments and suggestions are really appreciated :)


FAQ
===

Can you talk about the advantages of OVN over ML2/OVS?
------------------------------------------------------

If asked to describe the ML2/OVS control plane (OVS, L3, DHCP and
metadata agents using the messaging bus to sync with the Neutron API
service), one would not tend to use the term ‘simple’. There is liberal
use of a smattering of Linux networking technologies such as:
* iptables
* network namespaces
* ARP manipulation
* Different forms of NAT
* keepalived, radvd, haproxy, dnsmasq
* Source-based routing,
* … and of course OVS flows.

OVN simplifies this to a single process running on compute nodes, and
another process running on centralized nodes, communicating via OVSDB
and OpenFlow, ultimately setting OVS flows.

The simplified, new architecture allows us to re-do features like DVR
and L3 HA in more efficient and elegant ways. For example, L3 HA
failover is faster: It doesn’t use keepalived, rather OVN monitors
neighbor tunnel endpoints. OVN supports enabling both DVR and L3 HA
simultaneously, something we never supported with ML2/OVS.

We also found out that not depending on RPC messages for agent
communication brings a lot of benefits. In our experience, RabbitMQ
sometimes becomes a bottleneck and can be very resource-intensive.


What about the undercloud?
--------------------------

ML2/OVS will still be used in the undercloud, as OVN currently has
limitations mainly around baremetal provisioning (keep reading about
the parity gaps). We aim to convert the undercloud to ML2/OVN as soon
as possible, to give operators a more consistent experience.

It would be possible, however, to use the Neutron DHCP agent in the
short term to work around this limitation; in the long term we intend
to implement support for baremetal provisioning in the OVN built-in
DHCP server.


What about CI?
--------------

* networking-ovn has:
  * Devstack based Tempest (API, scenario from Tempest and the Neutron
    Tempest plugin) against the latest released OVS version, and against
    OVS master (thus also OVN master)
  * Devstack based Rally
  * Grenade
  * A multinode, container-based TripleO job that installs and runs a
    basic VM connectivity scenario test
  * Support for Python 3 and 2
* TripleO currently has OVN enabled in one quickstart featureset (fs30).

Are there any known parity issues with ML2/OVS?
-----------------------------------------------

* OVN supports VLAN provider networks, but not VLAN tenant networks.
This wil

Re: [openstack-dev] [Release-job-failures] Release of openstack-infra/shade failed

2018-10-24 Thread Sean McGinnis
On Wed, Oct 24, 2018 at 12:08 AM Tony Breeds 
wrote:

> On Wed, Oct 24, 2018 at 03:23:53AM +, z...@openstack.org wrote:
> > Build failed.
> >
> > - release-openstack-python3
> http://logs.openstack.org/ab/abac67d7bb347e1caba4d74c81712de86790316b/release/release-openstack-python3/e84da68/
> : POST_FAILURE in 2m 18s
>
> So this failed because pypi thinks there was a name collision[1]:
>  HTTPError: 400 Client Error: File already exists. See
> https://pypi.org/help/#file-name-reuse for url:
> https://upload.pypi.org/legacy/
>
> AFAICT the upload was successful:
>
> shade-1.27.2-py2-none-any.whl  :
> 2018-10-24T03:20:00
> d30a230461ba276c8bc561a27e61dcfd6769ca00bb4c652a841f7148a0d74a5a
> shade-1.27.2-py2.py3-none-any.whl  :
> 2018-10-24T03:20:11
> 8942b56d7d02740fb9c799a57f0c4ff13d300680c89e6f04dadb5eaa854e1792
> shade-1.27.2.tar.gz:
> 2018-10-24T03:20:04
> ebf40040b892f3e9bd4229fd05fff7ea24a08c51e46b7f2d8b3901ce34f51cbf
>
> The strange thing is that the tar.gz was uploaded *before* the wheel
> even though our publish jobs explicitly do it in the other order and the
> timestamp of the tar.gz doesn't match the error message.
>
> So I think we have a bug somewhere; more digging tomorrow.
>
> Yours Tony.
>

Looks like this was another case of conflicting jobs. This still has both
release-openstack-python3 and release-openstack-python jobs running, so I
think it ended up being a race between the two over which got to PyPI
first.

I think the "fix" is to get release-openstack-python out of there now that
we are able to run the Python 3 version.

On the plus side, all of the subsequent jobs passed, so the package is
published, the announcement went out, and the requirements update patch
was generated.


> [1]
> http://logs.openstack.org/ab/abac67d7bb347e1caba4d74c81712de86790316b/release/release-openstack-python3/e84da68/job-output.txt.gz#_2018-10-24_03_20_15_264676
> ___
> Release-job-failures mailing list
> release-job-failu...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Release-job-failures] Release of openstack-infra/shade failed

2018-10-24 Thread Jeremy Stanley
On 2018-10-24 16:08:26 +1100 (+1100), Tony Breeds wrote:
> On Wed, Oct 24, 2018 at 03:23:53AM +, z...@openstack.org wrote:
> > Build failed.
> > 
> > - release-openstack-python3 
> > http://logs.openstack.org/ab/abac67d7bb347e1caba4d74c81712de86790316b/release/release-openstack-python3/e84da68/
> >  : POST_FAILURE in 2m 18s
> 
> So this failed because pypi thinks there was a name collision[1]:
>  HTTPError: 400 Client Error: File already exists. See 
> https://pypi.org/help/#file-name-reuse for url: 
> https://upload.pypi.org/legacy/
> 
> AFAICT the upload was successful:
> 
> shade-1.27.2-py2-none-any.whl  : 2018-10-24T03:20:00 
> d30a230461ba276c8bc561a27e61dcfd6769ca00bb4c652a841f7148a0d74a5a
> shade-1.27.2-py2.py3-none-any.whl  : 2018-10-24T03:20:11 
> 8942b56d7d02740fb9c799a57f0c4ff13d300680c89e6f04dadb5eaa854e1792
> shade-1.27.2.tar.gz: 2018-10-24T03:20:04 
> ebf40040b892f3e9bd4229fd05fff7ea24a08c51e46b7f2d8b3901ce34f51cbf
[...]

I think PyPI is right. Note the fact that there are not two but
*three* artifacts there. We shouldn't be building both a py2 and
py2.py3 wheel. The job in the log you linked uploaded
shade-1.27.2-py2.py3-none-any.whl and tried (but failed) to upload
shade-1.27.2.tar.gz. So where did shade-1.27.2-py2-none-any.whl come
from then? Hold onto your hats, folks:

http://logs.openstack.org/ab/abac67d7bb347e1caba4d74c81712de86790316b/release/release-openstack-python/f38f2b9/job-output.txt.gz#_2018-10-24_03_20_02_134223

I suppose we don't expect a project to run both the
release-openstack-python and release-openstack-python3 jobs on the
same tags.
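
(For the record, PyPI's public JSON API lists exactly which files it
holds for a release, which makes this easy to confirm; a quick sketch:)

import json
import urllib.request

# Ask PyPI which artifacts it has for shade 1.27.2.
url = "https://pypi.org/pypi/shade/1.27.2/json"
with urllib.request.urlopen(url) as resp:
    release = json.load(resp)

for artifact in release["urls"]:
    print(artifact["filename"], artifact["upload_time"],
          artifact["digests"]["sha256"])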
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][upgrade-checkers] Week R-25 Update

2018-10-24 Thread Ben Nemec



On 10/23/18 9:55 PM, Adrian Turjak wrote:


On 24/10/18 2:09 AM, Ben Nemec wrote:



On 10/22/18 5:40 PM, Matt Riedemann wrote:

On 10/22/2018 4:35 PM, Adrian Turjak wrote:

The one other open question I have is about the Adjutant change [2]. I
know Adjutant is very new and I'm not sure what upgrades look like for
that project, so I don't really know how valuable adding the upgrade
check framework is to that project. Is it like Horizon where it's
mostly stateless and fed off plugins? Because we don't have an upgrade
check CLI for Horizon for that reason.

[1]
https://review.openstack.org/#/q/topic:upgrade-checkers+(status:open+OR+status:merged)

[2]https://review.openstack.org/#/c/611812/


Adjutant's codebase is also going to be a bit unstable for the next few
cycles while we refactor some internals (we're not marking it 1.0 yet).
Once the current set of ugly refactors planned for late Stein are
done, I
may look at building some upgrade checking, once we also work out what
our upgrade checking should look like. Probably mostly checking config
changes, database migration states, and plugin compatibility.

Adjutant already has a concept of startup checks at least, which while
not anywhere near as extensive as they should be, mostly amount to
making sure your config file looks 'mostly' sane regarding plugins
before starting up the service, and we do intend to expand on that,
plus
we can reuse a large chunk of that for upgrade checking.


OK it seems there is not really any point in trying to satisfy the
upgrade checkers goal for Adjutant in Stein then. Should we just
abandon the change?



Can't we just add a noop command like we are for the services that
don't currently need upgrade checks?



I mostly was responding to this in the review itself rather than on here.

We are probably going to have reason for an upgrade check in Adjutant.
My main gripe is that Adjutant is Django based, and there isn't much
point in adding a separate CLI when we already expose 'adjutant-api' as
a proxy to manage.py; as such, we should just register the upgrade check
as a custom Django admin command.

More so because all of the logic needed to actually run the check in
future will require Django settings to be configured. We don't actually
use any oslo libraries yet so the current code for the check doesn't
actually make sense in context.

I'm fine with a noop check, but we have to make it fit.


What I'm trying to avoid is creating any snowflake upgrade processes. It 
may not make sense for Adjutant to do this in isolation, but Adjutant 
doesn't exist in isolation. Also, if I understand correctly, you're 
proposing to add startup checks instead of upgrade checks. The downside 
I see there is that you have to have already restarted the service 
before the check runs, so if there's a problem you now have downtime. 
With a standalone upgrade check you can run the check while the old 
version of the code is still running. If problems are found you fix them 
before doing the restart.


That said, I don't particularly care how the upgrade check is 
implemented. If 'adjutant-status upgrade check' just calls 'adjutant-api 
--check' or something else that returns 0 or non-0 appropriately, that 
satisfies me. I don't want to cross the line into foolish consistency 
either. :-)
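
(For what it's worth, the Django-flavoured version of that could be
tiny; a sketch of a hypothetical management command - module path and
check contents invented for illustration:)

# e.g. adjutant/management/commands/upgrade_check.py (hypothetical)
from django.core.management.base import BaseCommand, CommandError

class Command(BaseCommand):
    help = "Run pre-upgrade sanity checks."

    def handle(self, *args, **options):
        failures = []
        # Hypothetical checks: config sanity, DB migration state,
        # plugin compatibility - whatever Adjutant decides matters.
        if failures:
            # CommandError makes manage.py exit non-zero, which is
            # all the upgrade-checkers contract really needs.
            raise CommandError("\n".join(failures))
        self.stdout.write("All upgrade checks passed.")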


-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][NFS] Inexplicable utime permission denied when launching instance

2018-10-24 Thread Neil Jerram
Thanks so much for these hints, Erlon.  I will look closer at AppArmor.

Neil
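
(For anyone else chasing this, the failing operation reduces to a single
call, so it can be reproduced in isolation as root on an affected
client; a minimal sketch - the instance path is hypothetical:)

import errno
import os

# Hypothetical instance file on the NFS-mounted directory.
path = "/var/lib/nova/instances/<instance-uuid>/disk"

try:
    os.utime(path, None)  # the same timestamp touch nova's imagebackend does
    print("utime succeeded")
except OSError as exc:
    if exc.errno == errno.EACCES:
        print("Permission denied (errno 13) - check NFS export options "
              "and AppArmor denials in dmesg")
    else:
        raise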

On Wed, Oct 24, 2018 at 1:41 PM Erlon Cruz  wrote:
>
> PS. Don't forget that if you change or disable AppArmor you will have to 
> reboot the host so the kernel gets reloaded.
>
> Em qua, 24 de out de 2018 às 09:40, Erlon Cruz  escreveu:
>>
>> I think that there's a chance that AppArmor is blocking the access. Have you 
>> checked the dmesg messages related to apparmor?
>>
>> Em sex, 19 de out de 2018 às 09:38, Neil Jerram  escreveu:
>>>
>>> Wracking my brains over this one, would appreciate any pointers...
>>>
>>> Setup: Small test deployment with just 3 compute nodes, Queens on Ubuntu 
>>> Bionic. The first compute node is an NFS server for 
>>> /var/lib/nova/instances, and the other compute nodes mount that as NFS 
>>> clients.
>>>
>>> Problem: Sometimes, when launching an instance which is scheduled to one of 
>>> the client nodes, nova-compute (in imagebackend.py) gets Permission Denied 
>>> (errno 13) when calling utime to touch the timestamp on the instance file.
>>>
>>> Through various bits of debugging and hackery, I've established that:
>>>
>>> - it looks like the problem never occurs when this is the call that 
>>> bootstraps the privsep setup; but it does occur quite frequently on later 
>>> calls
>>>
>>> - when the problem occurs, retrying doesn't help (5 times, with 0.5s in 
>>> between)
>>>
>>> - the instance file does exist, and is owned by root with read/write 
>>> permission for root
>>>
>>> - the privsep helper is running as root
>>>
>>> - the privsep helper receives and executes the request - so it's not a 
>>> problem with communication between nova-compute and the helper
>>>
>>> - root is uid 0 on both NFS server and client
>>>
>>> - NFS setup does not have the root_squash option
>>>
>>> - there is some AppArmor setup, on both client and server, and I haven't 
>>> yet worked out whether that might be relevant.
>>>
>>> Any ideas?
>>>
>>> Many thanks,
>>>   Neil
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

2018-10-24 Thread Jay S. Bryant



On 10/23/2018 9:01 AM, Jon Bernard wrote:

* melanie witt  wrote:

On Mon, 22 Oct 2018 11:45:55 +0800 (GMT+08:00), Boxiang Zhu wrote:

I created a new vm and a new volume with type 'ceph' [so that the volume
will be created on one of two hosts; I assume the volume was created on
host dev@rbd-1#ceph this time]. The next step is to attach the volume to
the vm. Finally I want to migrate the volume from host dev@rbd-1#ceph to
host dev@rbd-2#ceph, but it failed with the exception
'NotImplementedError(_("Swap only supports host devices")'.

So my real problem is: is there any existing work to migrate an *in-use*
(ceph rbd) volume from one host (pool) to another host (pool) in the
same ceph cluster?
The only difference between the spec [2] and my case is that the spec
covers *available* volumes while mine concerns *in-use* volumes.


[1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
[2] https://review.openstack.org/#/c/296150

Ah, I think I understand now, thank you for providing all of those details.
And I think you explained it in your first email, that cinder supports
migration of ceph volumes if they are 'available' but not if they are
'in-use'. Apologies that I didn't get your meaning the first time.

I see now the code you were referring to is this [3]:

if volume.status not in ('available', 'retyping', 'maintenance'):
    LOG.debug('Only available volumes can be migrated using backend '
              'assisted migration. Falling back to generic migration.')
    return refuse_to_migrate

So because your volume is not 'available', 'retyping', or 'maintenance',
it's falling back to generic migration, which will end up with an error in
nova because the source_path is not set in the volume config.

Can anyone from the cinder team chime in about whether the ceph volume
migration could be expanded to allow migration of 'in-use' volumes? Is there
a reason not to allow migration of 'in-use' volumes?

Generally speaking, Nova must facilitate the migration of a live (or
in-use) volume.  A volume attached to a running instance requires code
in the I/O path to correctly route traffic to the correct location - so
Cinder must refuse (or defer) a migrate operation if the volume is
attached.  Until somewhat recently Qemu and Libvirt did not support the
migration to non-block (RBD) targets, which is the reason for the lack of
support.  I believe we now have all of the pieces to perform this
operation successfully, but I suspect it will require a setup with
correct versions of all the related software.  I will try to verify this
during the current release cycle and report back.

Jon,

Thanks for the explanation and investigation!

Jay


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][NFS] Inexplicable utime permission denied when launching instance

2018-10-24 Thread Erlon Cruz
PS. Don't forget that if you change or disable AppArmor you will have to
reboot the host so the kernel gets reloaded.

Em qua, 24 de out de 2018 às 09:40, Erlon Cruz 
escreveu:

> I think that there's a chance that AppArmor is blocking the access. Have
> you checked the dmesg messages related to apparmor?
>
> Em sex, 19 de out de 2018 às 09:38, Neil Jerram  escreveu:
>
>> Wracking my brains over this one, would appreciate any pointers...
>>
>> Setup: Small test deployment with just 3 compute nodes, Queens on Ubuntu
>> Bionic. The first compute node is an NFS server for
>> /var/lib/nova/instances, and the other compute nodes mount that as NFS
>> clients.
>>
>> Problem: Sometimes, when launching an instance which is scheduled to one
>> of the client nodes, nova-compute (in imagebackend.py) gets Permission
>> Denied (errno 13) when calling utime to touch the timestamp on the instance
>> file.
>>
>> Through various bits of debugging and hackery, I've established that:
>>
>> - it looks like the problem never occurs when this is the call that
>> bootstraps the privsep setup; but it does occur quite frequently on later
>> calls
>>
>> - when the problem occurs, retrying doesn't help (5 times, with 0.5s in
>> between)
>>
>> - the instance file does exist, and is owned by root with read/write
>> permission for root
>>
>> - the privsep helper is running as root
>>
>> - the privsep helper receives and executes the request - so it's not a
>> problem with communication between nova-compute and the helper
>>
>> - root is uid 0 on both NFS server and client
>>
>> - NFS setup does not have the root_squash option
>>
>> - there is some AppArmor setup, on both client and server, and I haven't
>> yet worked out whether that might be relevant.
>>
>> Any ideas?
>>
>> Many thanks,
>>   Neil
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][NFS] Inexplicable utime permission denied when launching instance

2018-10-24 Thread Erlon Cruz
I think that there's a chance that AppArmor is blocking the access. Have
you checked the dmesg messages related to apparmor?

Em sex, 19 de out de 2018 às 09:38, Neil Jerram  escreveu:

> Wracking my brains over this one, would appreciate any pointers...
>
> Setup: Small test deployment with just 3 compute nodes, Queens on Ubuntu
> Bionic. The first compute node is an NFS server for
> /var/lib/nova/instances, and the other compute nodes mount that as NFS
> clients.
>
> Problem: Sometimes, when launching an instance which is scheduled to one
> of the client nodes, nova-compute (in imagebackend.py) gets Permission
> Denied (errno 13) when calling utime to touch the timestamp on the instance
> file.
>
> Through various bits of debugging and hackery, I've established that:
>
> - it looks like the problem never occurs when this is the call that
> bootstraps the privsep setup; but it does occur quite frequently on later
> calls
>
> - when the problem occurs, retrying doesn't help (5 times, with 0.5s in
> between)
>
> - the instance file does exist, and is owned by root with read/write
> permission for root
>
> - the privsep helper is running as root
>
> - the privsep helper receives and executes the request - so it's not a
> problem with communication between nova-compute and the helper
>
> - root is uid 0 on both NFS server and client
>
> - NFS setup does not have the root_squash option
>
> - there is some AppArmor setup, on both client and server, and I haven't
> yet worked out whether that might be relevant.
>
> Any ideas?
>
> Many thanks,
>   Neil
>


Re: [openstack-dev] [ClusterLabs Developers] [HA] future of OpenStack OCF resource agents (was: resource-agents v4.2.0)

2018-10-24 Thread Adam Spiers

[cross-posting to openstack-dev]

Oyvind Albrigtsen  wrote:

ClusterLabs is happy to announce resource-agents v4.2.0.
Source code is available at:
https://github.com/ClusterLabs/resource-agents/releases/tag/v4.2.0

The most significant enhancements in this release are:
- new resource agents:


[snipped]


- openstack-cinder-volume
- openstack-floating-ip
- openstack-info


That's an interesting development.

By popular demand from the community, in Oct 2015 the canonical
location for OpenStack-specific resource agents became:

   https://git.openstack.org/cgit/openstack/openstack-resource-agents/

as announced here:

   http://lists.openstack.org/pipermail/openstack-dev/2015-October/077601.html

However, I have to admit I have done a terrible job of maintaining it
since then.  Since OpenStack RAs are now beginning to creep into
ClusterLabs/resource-agents, now seems a good time to revisit this and
decide on a coherent strategy.  I'm not religious either way, although I
do have a fairly strong preference for picking one strategy which both
the ClusterLabs and OpenStack communities can align on, so that all
OpenStack RAs live in a single place.

I'll kick the bikeshedding off:

Pros of hosting OpenStack RAs on ClusterLabs
--------------------------------------------

- ClusterLabs developers get the GitHub code review and Travis CI
 experience they expect.

- Receive all the same maintenance attention as other RAs - any
 changes to coding style, utility libraries, Pacemaker APIs,
 refactorings etc. which apply to all RAs would automatically
 get applied to the OpenStack RAs too.

- Documentation gets built in the same way as other RAs.

- Unit tests get run in the same way as other RAs (although does
 ocf-tester even get run by the CI currently?)

- Doesn't get maintained by me ;-)

Pros of hosting OpenStack RAs on OpenStack infrastructure
-

- OpenStack developers get the Gerrit code review and Zuul CI
 experience they expect.

- Releases and stable/foo branches could be made to align with
 OpenStack releases (..., Queens, Rocky, Stein, T(rains?)...)

- Automated testing could in the future spin up a full cloud
 and do integration tests by simulating failure scenarios,
 as discussed here:

 https://storyboard.openstack.org/#!/story/2002129

 That said, that is still very much work in progress, so
 it remains to be seen when that could come to fruition.

No doubt I've missed some pros and cons here.  At this point I'm
personally leaning slightly towards keeping them in the
openstack-resource-agents repository - but that's assuming I can either
hand off maintainership to someone with more time, or somehow find the
time myself to do a better job.

What does everyone else think?  All opinions are very welcome,
obviously.



Re: [openstack-dev] [Release-job-failures] Release of openstack-infra/shade failed

2018-10-24 Thread Thierry Carrez

Tony Breeds wrote:

AFAICT the upload was successful:

shade-1.27.2-py2-none-any.whl     : 2018-10-24T03:20:00  d30a230461ba276c8bc561a27e61dcfd6769ca00bb4c652a841f7148a0d74a5a
shade-1.27.2-py2.py3-none-any.whl : 2018-10-24T03:20:11  8942b56d7d02740fb9c799a57f0c4ff13d300680c89e6f04dadb5eaa854e1792
shade-1.27.2.tar.gz               : 2018-10-24T03:20:04  ebf40040b892f3e9bd4229fd05fff7ea24a08c51e46b7f2d8b3901ce34f51cbf


Yes, the release is up on PyPI and on releases.o.o, so I think we are good.


The strange thing is that the tar.gz was uploaded *before* the wheel,
even though our publish jobs explicitly do it in the other order, and the
timestamp of the tar.gz doesn't match the error message.


The timestamps don't match, but in the logs the tar.gz is uploaded last,
as designed... Where did you get the timestamps from? If from PyPI,
maybe their clocks are off, or they do some kind of processing that
affects the timestamp...
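
For reference, the upload timestamps can be pulled straight from PyPI's
JSON API; a small sketch (the endpoint and fields are part of PyPI's
documented JSON API, and the release pinned here is the one from this
thread):

    import requests

    # Each entry in 'urls' is one uploaded artifact for the release.
    resp = requests.get('https://pypi.org/pypi/shade/1.27.2/json',
                        timeout=30)
    for artifact in resp.json()['urls']:
        print(artifact['upload_time'], artifact['filename'],
              artifact['digests']['sha256'])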


--
Thierry Carrez (ttx)



Re: [openstack-dev] [heat][senlin] Action Required. Idea to propose for a forum for autoscaling features integration

2018-10-24 Thread Rico Lin
Hi all, I'm glad to notify you all that our forum session has been accepted
[1] and that the forum time slot (Thursday, November 15, 9:50am-10:30am)
should be stable by now, so please save the slot in your schedule!
Any feedback is welcome!



[1]
https://www.openstack.org/summit/berlin-2018/summit-schedule/events/22753/autoscaling-integration-improvement-and-feedback

On Tue, Oct 9, 2018 at 3:07 PM Rico Lin  wrote:

> A reminder for all: please put your ideas/thoughts/suggested actions in
> our etherpad [1], which we're going to use for further discussion at the
> Forum, or at the PTG if we get no forum for it.
> That way we won't miss anything.
>
>
>
> [1] https://etherpad.openstack.org/p/autoscaling-integration-and-feedback
>
> On Tue, Oct 9, 2018 at 2:22 PM Qiming Teng  wrote:
>
>> > >One approach would be to switch the underlying Heat AutoScalingGroup
>> > >implementation to use Senlin and then deprecate the AutoScalingGroup
>> > >resource type in favor of the Senlin resource type over several
>> > >cycles.
>> >
>> > The hard part (or one hard part, at least) of that is migrating the
>> > existing data.
>>
>> Agreed. In an ideal world, we could transparently transplant the "scaling
>> group" resource implementation onto something else (e.g. a library or an
>> interface). This sounds like an option for both teams to brainstorm
>> together.
>>
>> - Qiming
>>
>>
>
>
> --
> May The Force of OpenStack Be With You,
>
> *Rico Lin*  irc: ricolin
>
>

-- 
May The Force of OpenStack Be With You,

*Rico Lin*  irc: ricolin


Re: [openstack-dev] [all] [tc] [api] Paste Maintenance

2018-10-24 Thread Jean-Philippe Evrard
On Mon, 2018-10-22 at 07:50 -0700, Morgan Fainberg wrote:
> Also, doesn't bitbucket have a git interface now too (optionally)?
> 
It does :)
But I think it requires a new repo, so it means the project could just as
well move somewhere else, like GitHub or OpenStack infra :p




Re: [openstack-dev] [goals][upgrade-checkers] Week R-25 Update

2018-10-24 Thread Jean-Philippe Evrard
On Tue, 2018-10-23 at 16:40 -0500, Matt Riedemann wrote:
> On 10/23/2018 1:41 PM, Sean McGinnis wrote:
> > > Yeah, but part of the reason for placeholders was consistency across
> > > all of the services. I guess if there are never going to be upgrade
> > > checks in adjutant then I could see skipping it, but otherwise I would
> > > prefer to at least get the framework in place.
> > > 
> > +1
> > 
> > Even if there is nothing to check at this point, I think having the
> > facility there is a benefit for projects and scripts that are going to
> > be consuming these checks. Having nothing to check, but having the
> > status check there, is going to be better than every consumer needing
> > to keep a list of which projects to run the checks on and which not.
> > 
> 
> Sure, that works for me as well. I'm not against adding placeholder/noop
> checks, knowing that nothing immediately obvious will replace them in
> Stein but that it could be done later when the opportunity arises. If it's
> debatable on a per-project basis, then I'd defer to the core team for
> the project.
> 

+1 on what Ben, Matt, and Sean said there.
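
For anyone wiring up one of these placeholders, a minimal sketch using
the oslo.upgradecheck library the goal is built on (the "myservice"
names are hypothetical):

    import sys

    from oslo_config import cfg
    from oslo_upgradecheck import upgradecheck

    class Checks(upgradecheck.UpgradeCommands):
        """Placeholder so 'myservice-status upgrade check' exists now."""

        def _check_placeholder(self):
            # Nothing to verify yet; succeed so tooling gets exit code 0.
            return upgradecheck.Result(upgradecheck.Code.SUCCESS)

        _upgrade_checks = (('placeholder', _check_placeholder),)

    def main():
        return upgradecheck.main(cfg.ConfigOpts(), 'myservice', Checks())

    if __name__ == '__main__':
        sys.exit(main())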

