Re: [openstack-dev] [nova] Can we deprecate the server backup API please?

2018-11-17 Thread Tim Bell
Mistral can schedule the executions and then run a workflow to do the server image 
create. 

The CERN implementation of this is described at 
http://openstack-in-production.blogspot.com/2017/08/scheduled-snapshots.html 
with the implementation at 
https://gitlab.cern.ch/cloud-infrastructure/mistral-workflows. It is pretty 
generic but I don't know if anyone has tried to run it elsewhere.

A few features:

- Schedule can be chosen
- Logs visible in Horizon
- Option to shutdown instances before and restart after
- Mails can be sent on success and/or failure
- Rotation of backups to keep a maximum number of copies

There are equivalent restore and clone functions in the workflow also.
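
For anyone wanting to wire up something similar, a minimal sketch of attaching a 
workflow to a Mistral cron trigger is below. The workflow name and input keys are 
hypothetical (check the GitLab repository above for the real ones), and the 
cron-trigger options are from memory of the python-mistralclient OSC plugin, so 
verify them against your installed client.

```python
#!/usr/bin/env python3
"""Sketch only: attach a snapshot workflow to a Mistral cron trigger.

Assumptions to verify: the python-mistralclient OSC plugin is installed,
and the workflow is registered under a name such as 'instance_snapshot'
taking 'instance' and 'max_snapshots' inputs -- these names are
illustrative, not the actual names used in the CERN repository.
"""
import json
import subprocess


def schedule_nightly_snapshot(server_id: str, keep: int = 7) -> None:
    workflow_input = json.dumps({"instance": server_id, "max_snapshots": keep})
    subprocess.run(
        [
            "openstack", "cron", "trigger", "create",
            "--pattern", "0 2 * * *",       # every night at 02:00
            f"snapshot-{server_id}",        # cron trigger name
            "instance_snapshot",            # hypothetical workflow identifier
            workflow_input,                 # workflow input as JSON
        ],
        check=True,
    )


if __name__ == "__main__":
    schedule_nightly_snapshot("my-server-uuid")
```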

Tim
-Original Message-
From: Jay Pipes 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, 16 November 2018 at 20:58
To: "openstack-dev@lists.openstack.org" 
Subject: [openstack-dev] [nova] Can we deprecate the server backup API  please?

The server backup API was added 8 years ago. It has Nova basically 
implementing a poor-man's cron for some unknown reason (probably because 
the original RAX Cloud Servers API had some similar or identical 
functionality, who knows...).

Can we deprecate this functionality please? It's confusing for end users 
to have both an `openstack server image create` and an `openstack server backup 
create` command, where the latter does virtually the same thing as the 
former except that it also sets up some whacky cron-like thing and deletes images 
after some number of rotations.

If a cloud provider wants to offer some backup thing as a service, they 
could implement this functionality separately IMHO, store the user's 
requested cronjob state in their own system (or in Glance, which is kind 
of how the existing Nova createBackup functionality works), and run a 
simple cronjob executor that runs `openstack server image create` and 
`openstack image delete` as needed.
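
As a rough sketch of that "simple cronjob executor" idea (not a drop-in tool; the 
image-name prefix and rotation count are arbitrary choices), something run from cron 
could look like:

```python
#!/usr/bin/env python3
"""Sketch of such an out-of-band backup cronjob (not a drop-in tool).

Keeps the newest ROTATION images whose names start with "backup-<server>-";
the naming scheme is an arbitrary choice and assumes image names are unique.
"""
import subprocess
import time

ROTATION = 7


def run(*cmd: str) -> str:
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout


def backup(server: str) -> None:
    # Snapshot the server with a timestamped name (sorts chronologically).
    name = f"backup-{server}-{time.strftime('%Y%m%d%H%M%S')}"
    run("openstack", "server", "image", "create", "--name", name, server)

    # Prune: list image names, keep only our backups for this server,
    # newest first, and delete anything beyond the rotation count.
    names = run("openstack", "image", "list", "-f", "value", "-c", "Name").splitlines()
    ours = sorted((n for n in names if n.startswith(f"backup-{server}-")), reverse=True)
    for old in ours[ROTATION:]:
        run("openstack", "image", "delete", old)


if __name__ == "__main__":
    backup("my-server")
```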

This is a perfect example of an API that should never have been added to 
the Compute API, in my opinion, and removing it would be a step in the 
right direction if we're going to get serious about cleaning the Compute 
API up.

Thoughts?
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ClusterLabs Developers] [HA] future of OpenStack OCF resource agents (was: resource-agents v4.2.0)

2018-10-24 Thread Tim Bell
Adam,

Personally, I would prefer the approach where the OpenStack resource agents are 
part of the repository in which they are used. This is also the approach taken 
in other open source projects such as Kubernetes and avoids the inconsistency 
where, for example, Azure resource agents are in the Cluster Labs repository 
but OpenStack ones are not. This can mean that people miss there is OpenStack 
integration available.

This is not intended, in any way, to take away from the excellent efforts and results 
so far. I also don't think it would rule out including testing in the 
OpenStack gate, since there are other examples where code is pulled in from 
other sources. 

Tim

-Original Message-
From: Adam Spiers 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 24 October 2018 at 14:29
To: "develop...@clusterlabs.org" , openstack-dev 
mailing list 
Subject: Re: [openstack-dev] [ClusterLabs Developers] [HA] future of OpenStack 
OCF resource agents (was: resource-agents v4.2.0)

[cross-posting to openstack-dev]

Oyvind Albrigtsen  wrote:
>ClusterLabs is happy to announce resource-agents v4.2.0.
>Source code is available at:
>https://github.com/ClusterLabs/resource-agents/releases/tag/v4.2.0
>
>The most significant enhancements in this release are:
>- new resource agents:

[snipped]

> - openstack-cinder-volume
> - openstack-floating-ip
> - openstack-info

That's an interesting development.

By popular demand from the community, in Oct 2015 the canonical
location for OpenStack-specific resource agents became:

https://git.openstack.org/cgit/openstack/openstack-resource-agents/

as announced here:


http://lists.openstack.org/pipermail/openstack-dev/2015-October/077601.html

However I have to admit I have done a terrible job of maintaining it
since then.  Since OpenStack RAs are now beginning to creep into
ClusterLabs/resource-agents, now seems a good time to revisit this and
decide a coherent strategy.  I'm not religious either way, although I
do have a fairly strong preference for picking one strategy which both
ClusterLabs and OpenStack communities can align on, so that all
OpenStack RAs are in a single place.

I'll kick the bikeshedding off:

Pros of hosting OpenStack RAs on ClusterLabs


- ClusterLabs developers get the GitHub code review and Travis CI
  experience they expect.

- Receive all the same maintenance attention as other RAs - any
  changes to coding style, utility libraries, Pacemaker APIs,
  refactorings etc. which apply to all RAs would automatically
  get applied to the OpenStack RAs too.

- Documentation gets built in the same way as other RAs.

- Unit tests get run in the same way as other RAs (although does
  ocf-tester even get run by the CI currently?)

- Doesn't get maintained by me ;-)

Pros of hosting OpenStack RAs on OpenStack infrastructure
-

- OpenStack developers get the Gerrit code review and Zuul CI
  experience they expect.

- Releases and stable/foo branches could be made to align with
  OpenStack releases (..., Queens, Rocky, Stein, T(rains?)...)

- Automated testing could in the future spin up a full cloud
  and do integration tests by simulating failure scenarios,
  as discussed here:

  https://storyboard.openstack.org/#!/story/2002129

  That said, that is still very much work in progress, so
  it remains to be seen when that could come to fruition.

No doubt I've missed some pros and cons here.  At this point
personally I'm slightly leaning towards keeping them in the
openstack-resource-agents - but that's assuming I can either hand off
maintainership to someone with more time, or somehow find the time
myself to do a better job.

What does everyone else think?  All opinions are very welcome,
obviously.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [openstack-dev] Forum Schedule - Seeking Community Review

2018-10-16 Thread Tim Bell
Jimmy,

While it's not a clash within the forum, there are two sessions for Ironic 
scheduled at the same time on Tuesday at 14h20, each of which has Julia as a 
speaker.

Tim

-Original Message-
From: Jimmy McArthur 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, 15 October 2018 at 22:04
To: "OpenStack Development Mailing List (not for usage questions)" 
, "OpenStack-operators@lists.openstack.org" 
, "commun...@lists.openstack.org" 

Subject: [openstack-dev] Forum Schedule - Seeking Community Review

Hi -

The Forum schedule is now up 
(https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262).  
If you see a glaring content conflict within the Forum itself, please 
let me know.

You can also view the Full Schedule in the attached PDF if that makes 
life easier...

NOTE: BoFs and WGs are still not all up on the schedule.  No need to let 
us know :)

Cheers,
Jimmy


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] Forum Schedule - Seeking Community Review

2018-10-16 Thread Tim Bell
Jimmy,

While it's not a clash within the forum, there are two sessions for Ironic 
scheduled at the same time on Tuesday at 14h20, each of which has Julia as a 
speaker.

Tim

-Original Message-
From: Jimmy McArthur 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, 15 October 2018 at 22:04
To: "OpenStack Development Mailing List (not for usage questions)" 
, "openstack-operat...@lists.openstack.org" 
, "commun...@lists.openstack.org" 

Subject: [openstack-dev] Forum Schedule - Seeking Community Review

Hi -

The Forum schedule is now up 
(https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262).  
If you see a glaring content conflict within the Forum itself, please 
let me know.

You can also view the Full Schedule in the attached PDF if that makes 
life easier...

NOTE: BoFs and WGs are still not all up on the schedule.  No need to let 
us know :)

Cheers,
Jimmy


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-27 Thread Tim Bell

Lance,

The comment regarding ‘readers’ is more to explain that the distinction between 
‘admin’ and ‘user’ commands is gradually diminishing, whereas OSC has been 
prioritising ‘user’ commands.

As an example, we give the CERN security team view-only access to many parts of 
the cloud. This allows them to perform their investigations independently.  
Thus, many commands which would be, by default, admin only are also available 
to roles such as ‘readers’ (e.g. list, show, … of internals or of projects 
in which they are not members).
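
For illustration (the account and project names here are made up), granting that kind 
of view-only access with the keystone default roles is just a role assignment:

```python
import subprocess

# Hypothetical example: give a security-team account the keystone default
# 'reader' role on one project, so list/show calls work without any ability
# to change things. The user and project names are made up.
subprocess.run(
    ["openstack", "role", "add",
     "--user", "security-audit",
     "--project", "some-project",
     "reader"],
    check=True,
)
```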

I don’t think there are any implications for Keystone (and the readers role is a 
nice improvement that replaces the previous manual policy definitions); it is more 
a question of which subcommands we should aim to support in OSC.

I would consider the *-manage commands, such as nova-manage, out of scope for 
OSC, since only admins would be migrating between versions or DB schemas.

Tim

From: Lance Bragstad 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 27 September 2018 at 15:30
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T 
series


On Wed, Sep 26, 2018 at 1:56 PM Tim Bell <tim.b...@cern.ch> wrote:

Doug,

Thanks for raising this. I'd like to highlight the goal "Finish moving legacy 
python-*client CLIs to python-openstackclient" from the etherpad and propose 
this for a T/U series goal.

To give it some context and the motivation:

At CERN, we have more than 3000 users of the OpenStack cloud. We write 
extensive end-user-facing documentation which explains how to use OpenStack 
along with CERN-specific features (such as workflows for requesting 
projects/quotas/etc.).

One regular problem we come across is that the end user experience is 
inconsistent. In some cases, we find projects which are not covered by the 
unified OpenStack client (e.g. Manila). In other cases, there are subsets of 
the functionality which require the native project client.
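
As a rough sketch of what that means for anyone scripting against the cloud today 
(the share example follows on from Manila above; the unified `openstack share list` 
form is shown as the target state, not as a command guaranteed to exist in current 
clients, and the helper is a placeholder):

```python
import subprocess


def unified_client_covers_shares() -> bool:
    # Placeholder: at the time of writing the answer is effectively "no",
    # which is exactly the gap the goal is meant to close.
    return False


def list_shares() -> None:
    """Scripts today must special-case projects the unified client misses."""
    if unified_client_covers_shares():
        # The target state: one client, one openrc/clouds.yaml, one syntax.
        subprocess.run(["openstack", "share", "list"], check=True)
    else:
        # Today: fall back to the native client, with its own conventions
        # and (at some sites) a different authentication method.
        subprocess.run(["manila", "list"], check=True)


if __name__ == "__main__":
    list_shares()
```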

I would strongly support a goal which targets

- All new projects should have the end user facing functionality fully exposed 
via the unified client
- Existing projects should aim to close the gap within 'N' cycles (N to be 
defined)
- Many administrator actions would also benefit from integration (reader roles 
are end users too so list and show need to be covered too)
- Users should be able to use a single openrc for all interactions with the 
cloud (e.g. not switch between password for some CLIs and Kerberos for OSC)

Sorry to back up the conversation a bit, but does the reader role require work in 
the clients? Last release we incorporated three roles by default during 
keystone's installation process [0]. Is the definition in the specification 
what you mean by reader role, or am I on a different page?

[0] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html#default-roles

The end user perception of a solution will be greatly enhanced by a single 
command line tool with consistent syntax and authentication framework.

It may be a multi-release goal but it would really benefit the cloud consumers 
and I feel that goals should include this audience also.

Tim

-Original Message-
From: Doug Hellmann <d...@doughellmann.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Wednesday, 26 September 2018 at 18:00
To: openstack-dev <openstack-dev@lists.openstack.org>, 
openstack-operators <openstack-operat...@lists.openstack.org>, 
openstack-sigs <openstack-s...@lists.openstack.org>
Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T 
series

It's time to start thinking about community-wide goals for the T series.

We use community-wide goals to achieve visible common changes, push for
basic levels of consistency and user experience, and efficiently improve
certain areas where technical debt payments have become too high -
across all OpenStack projects. Community input is important to ensure
that the TC makes good decisions about the goals. We need to consider
the timing, cycle length, priority, and feasibility of the suggested
goals.

If you are interested in proposing a goal, please make sure that before
the summit it is described in the tracking etherpad [1] and that you
have started a mailing list thread on the openstack-dev list about the
proposal so that everyone in the forum session [2] has an opportunity to
consider the details.  The forum session is only one step in the
selection process. See [3] for more details.

Doug

[1] https://etherpad.openstack.org/p/community-goals
[2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814

Re: [Openstack-operators] [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-26 Thread Tim Bell

Doug,

Thanks for raising this. I'd like to highlight the goal "Finish moving legacy 
python-*client CLIs to python-openstackclient" from the etherpad and propose 
this for a T/U series goal.

To give it some context and the motivation:

At CERN, we have more than 3000 users of the OpenStack cloud. We write 
extensive end-user-facing documentation which explains how to use OpenStack 
along with CERN-specific features (such as workflows for requesting 
projects/quotas/etc.). 

One regular problem we come across is that the end user experience is 
inconsistent. In some cases, we find projects which are not covered by the 
unified OpenStack client (e.g. Manila). In other cases, there are subsets of 
the functionality which require the native project client.

I would strongly support a goal which targets

- All new projects should have the end user facing functionality fully exposed 
via the unified client
- Existing projects should aim to close the gap within 'N' cycles (N to be 
defined)
- Many administrator actions would also benefit from integration (reader roles 
are end users too so list and show need to be covered too)
- Users should be able to use a single openrc for all interactions with the 
cloud (e.g. not switch between password for some CLIs and Kerberos for OSC)

The end user perception of a solution will be greatly enhanced by a single 
command line tool with consistent syntax and authentication framework.

It may be a multi-release goal but it would really benefit the cloud consumers 
and I feel that goals should include this audience also.

Tim

-Original Message-
From: Doug Hellmann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 26 September 2018 at 18:00
To: openstack-dev , openstack-operators 
, openstack-sigs 

Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T 
series

It's time to start thinking about community-wide goals for the T series.

We use community-wide goals to achieve visible common changes, push for
basic levels of consistency and user experience, and efficiently improve
certain areas where technical debt payments have become too high -
across all OpenStack projects. Community input is important to ensure
that the TC makes good decisions about the goals. We need to consider
the timing, cycle length, priority, and feasibility of the suggested
goals.

If you are interested in proposing a goal, please make sure that before
the summit it is described in the tracking etherpad [1] and that you
have started a mailing list thread on the openstack-dev list about the
proposal so that everyone in the forum session [2] has an opportunity to
consider the details.  The forum session is only one step in the
selection process. See [3] for more details.

Doug

[1] https://etherpad.openstack.org/p/community-goals
[2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814
[3] https://governance.openstack.org/tc/goals/index.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-26 Thread Tim Bell

Doug,

Thanks for raising this. I'd like to highlight the goal "Finish moving legacy 
python-*client CLIs to python-openstackclient" from the etherpad and propose 
this for a T/U series goal.

To give it some context and the motivation:

At CERN, we have more than 3000 users of the OpenStack cloud. We write 
extensive end-user-facing documentation which explains how to use OpenStack 
along with CERN-specific features (such as workflows for requesting 
projects/quotas/etc.). 

One regular problem we come across is that the end user experience is 
inconsistent. In some cases, we find projects which are not covered by the 
unified OpenStack client (e.g. Manila). In other cases, there are subsets of 
the functionality which require the native project client.

I would strongly support a goal which targets

- All new projects should have the end user facing functionality fully exposed 
via the unified client
- Existing projects should aim to close the gap within 'N' cycles (N to be 
defined)
- Many administrator actions would also benefit from integration (reader roles 
are end users too so list and show need to be covered too)
- Users should be able to use a single openrc for all interactions with the 
cloud (e.g. not switch between password for some CLIs and Kerberos for OSC)

The end user perception of a solution will be greatly enhanced by a single 
command line tool with consistent syntax and authentication framework.

It may be a multi-release goal but it would really benefit the cloud consumers 
and I feel that goals should include this audience also.

Tim

-Original Message-
From: Doug Hellmann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 26 September 2018 at 18:00
To: openstack-dev , openstack-operators 
, openstack-sigs 

Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T 
series

It's time to start thinking about community-wide goals for the T series.

We use community-wide goals to achieve visible common changes, push for
basic levels of consistency and user experience, and efficiently improve
certain areas where technical debt payments have become too high -
across all OpenStack projects. Community input is important to ensure
that the TC makes good decisions about the goals. We need to consider
the timing, cycle length, priority, and feasibility of the suggested
goals.

If you are interested in proposing a goal, please make sure that before
the summit it is described in the tracking etherpad [1] and that you
have started a mailing list thread on the openstack-dev list about the
proposal so that everyone in the forum session [2] has an opportunity to
consider the details.  The forum session is only one step in the
selection process. See [3] for more details.

Doug

[1] https://etherpad.openstack.org/p/community-goals
[2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814
[3] https://governance.openstack.org/tc/goals/index.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on stop/suspend

2018-09-15 Thread Tim Bell
Found the previous discussion at 
http://lists.openstack.org/pipermail/openstack-operators/2016-August/011321.html
 from 2016.

Tim

-Original Message-
From: Tim Bell 
Date: Saturday, 15 September 2018 at 14:38
To: "OpenStack Development Mailing List (not for usage questions)" 
, "openstack-operators@lists.openstack.org" 
, "openstack-s...@lists.openstack.org" 

Subject: Re: [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on 
stop/suspend

One extra user motivation that came up during past forums was to have a 
different quota for shelved instances (or remove them from the project quota 
altogether). Currently, I believe that a shelved instance still counts 
towards the instances/cores quota, so the reduction of usage by the user is 
not reflected in the quotas.

One discussion at the time was that the user is still reserving IPs so it 
is not zero resource usage and the instances still occupy storage.

(We disabled shelving for other reasons so I'm not able to check easily)

Tim

-Original Message-
From: Matt Riedemann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Saturday, 15 September 2018 at 01:27
To: "OpenStack Development Mailing List (not for usage questions)" 
, "openstack-operators@lists.openstack.org" 
, "openstack-s...@lists.openstack.org" 

Subject: [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on   
stop/suspend

tl;dr: I'm proposing a new parameter to the server stop (and suspend?) 
APIs to control if nova shelve offloads the server.

Long form: This came up during the public cloud WG session this week 
based on a couple of feature requests [1][2]. When a user 
stops/suspends 
a server, the hypervisor frees up resources on the host but nova 
continues to track those resources as being used on the host so the 
scheduler can't put more servers there. What operators would like to do 
is that when a user stops a server, nova actually shelve offloads the 
server from the host so they can schedule new servers on that host. On 
start/resume of the server, nova would find a new host for the server. 
This also came up in Vancouver where operators would like to free up 
limited expensive resources like GPUs when the server is stopped. This 
is also the behavior in AWS.

The problem with shelve is that it's great for operators but users just 
don't use it, maybe because they don't know what it is and stop works 
just fine. So how do you get users to opt into shelving their server?

I've proposed a high-level blueprint [3] where we'd add a new 
(microversioned) parameter to the stop API with three options:

* auto
* offload
* retain

Naming is obviously up for debate. The point is we would default to 
auto 
and if auto is used, the API checks a config option to determine the 
behavior - offload or retain. By default we would retain for backward 
compatibility. For users that don't care, they get auto and it's fine. 
For users that do care, they either (1) don't opt into the microversion 
or (2) specify the specific behavior they want. I don't think we need 
to 
expose what the cloud's configuration for auto is because again, if you 
don't care then it doesn't matter and if you do care, you can opt out 
of 
this.

"How do we get users to use the new microversion?" I'm glad you asked.

Well, nova CLI defaults to using the latest available microversion 
negotiated between the client and the server, so by default, anyone 
using "nova stop" would get the 'auto' behavior (assuming the client 
and 
server are new enough to support it). Long-term, openstack client plans 
on doing the same version negotiation.

As for the server status changes, if the server is stopped and shelved, 
the status would be 'SHELVED_OFFLOADED' rather than 'SHUTDOWN'. I 
believe this is fine especially if a user is not being specific and 
doesn't care about the actual backend behavior. On start, the API would 
allow starting (unshelving) shelved offloaded (rather than just 
stopped) 
instances. Trying to hide shelved servers as stopped in the API would 
be 
overly complex IMO so I don't want to try and mask that.

It is possible that a user that stopped and shelved their server could 
hit a NoValidHost when starting (unshelving) the server, but that 
really 
shouldn't happen in a cloud that's configuring nova to shelve by 
default 
because if they are doing this, 

Re: [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on stop/suspend

2018-09-15 Thread Tim Bell
Found the previous discussion at 
http://lists.openstack.org/pipermail/openstack-operators/2016-August/011321.html
 from 2016.

Tim

-Original Message-
From: Tim Bell 
Date: Saturday, 15 September 2018 at 14:38
To: "OpenStack Development Mailing List (not for usage questions)" 
, "openstack-operat...@lists.openstack.org" 
, "openstack-s...@lists.openstack.org" 

Subject: Re: [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on 
stop/suspend

One extra user motivation that came up during past forums was to have a 
different quota for shelved instances (or remove them from the project quota 
altogether). Currently, I believe that a shelved instance still counts 
towards the instances/cores quota, so the reduction of usage by the user is 
not reflected in the quotas.

One discussion at the time was that the user is still reserving IPs so it 
is not zero resource usage and the instances still occupy storage.

(We disabled shelving for other reasons so I'm not able to check easily)

Tim

-Original Message-
From: Matt Riedemann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Saturday, 15 September 2018 at 01:27
To: "OpenStack Development Mailing List (not for usage questions)" 
, "openstack-operat...@lists.openstack.org" 
, "openstack-s...@lists.openstack.org" 

Subject: [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on   
stop/suspend

tl;dr: I'm proposing a new parameter to the server stop (and suspend?) 
APIs to control if nova shelve offloads the server.

Long form: This came up during the public cloud WG session this week 
based on a couple of feature requests [1][2]. When a user 
stops/suspends 
a server, the hypervisor frees up resources on the host but nova 
continues to track those resources as being used on the host so the 
scheduler can't put more servers there. What operators would like to do 
is that when a user stops a server, nova actually shelve offloads the 
server from the host so they can schedule new servers on that host. On 
start/resume of the server, nova would find a new host for the server. 
This also came up in Vancouver where operators would like to free up 
limited expensive resources like GPUs when the server is stopped. This 
is also the behavior in AWS.

The problem with shelve is that it's great for operators but users just 
don't use it, maybe because they don't know what it is and stop works 
just fine. So how do you get users to opt into shelving their server?

I've proposed a high-level blueprint [3] where we'd add a new 
(microversioned) parameter to the stop API with three options:

* auto
* offload
* retain

Naming is obviously up for debate. The point is we would default to 
auto 
and if auto is used, the API checks a config option to determine the 
behavior - offload or retain. By default we would retain for backward 
compatibility. For users that don't care, they get auto and it's fine. 
For users that do care, they either (1) don't opt into the microversion 
or (2) specify the specific behavior they want. I don't think we need 
to 
expose what the cloud's configuration for auto is because again, if you 
don't care then it doesn't matter and if you do care, you can opt out 
of 
this.

"How do we get users to use the new microversion?" I'm glad you asked.

Well, nova CLI defaults to using the latest available microversion 
negotiated between the client and the server, so by default, anyone 
using "nova stop" would get the 'auto' behavior (assuming the client 
and 
server are new enough to support it). Long-term, openstack client plans 
on doing the same version negotiation.

As for the server status changes, if the server is stopped and shelved, 
the status would be 'SHELVED_OFFLOADED' rather than 'SHUTDOWN'. I 
believe this is fine especially if a user is not being specific and 
doesn't care about the actual backend behavior. On start, the API would 
allow starting (unshelving) shelved offloaded (rather than just 
stopped) 
instances. Trying to hide shelved servers as stopped in the API would 
be 
overly complex IMO so I don't want to try and mask that.

It is possible that a user that stopped and shelved their server could 
hit a NoValidHost when starting (unshelving) the server, but that 
really 
shouldn't happen in a cloud that's configuring nova to shelve by 
default 
because if they are doing this, 

Re: [Openstack-operators] [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on stop/suspend

2018-09-15 Thread Tim Bell
One extra user motivation that came up during past forums was to have a 
different quota for shelved instances (or remove them from the project quota 
altogether). Currently, I believe that a shelved instance still counts 
towards the instances/cores quota, so the reduction of usage by the user is 
not reflected in the quotas.

One discussion at the time was that the user is still reserving IPs so it is 
not zero resource usage and the instances still occupy storage.

(We disabled shelving for other reasons so I'm not able to check easily)

Tim

-Original Message-
From: Matt Riedemann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Saturday, 15 September 2018 at 01:27
To: "OpenStack Development Mailing List (not for usage questions)" 
, "openstack-operators@lists.openstack.org" 
, "openstack-s...@lists.openstack.org" 

Subject: [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on   
stop/suspend

tl;dr: I'm proposing a new parameter to the server stop (and suspend?) 
APIs to control if nova shelve offloads the server.

Long form: This came up during the public cloud WG session this week 
based on a couple of feature requests [1][2]. When a user stops/suspends 
a server, the hypervisor frees up resources on the host but nova 
continues to track those resources as being used on the host so the 
scheduler can't put more servers there. What operators would like to do 
is that when a user stops a server, nova actually shelve offloads the 
server from the host so they can schedule new servers on that host. On 
start/resume of the server, nova would find a new host for the server. 
This also came up in Vancouver where operators would like to free up 
limited expensive resources like GPUs when the server is stopped. This 
is also the behavior in AWS.

The problem with shelve is that it's great for operators but users just 
don't use it, maybe because they don't know what it is and stop works 
just fine. So how do you get users to opt into shelving their server?

I've proposed a high-level blueprint [3] where we'd add a new 
(microversioned) parameter to the stop API with three options:

* auto
* offload
* retain

Naming is obviously up for debate. The point is we would default to auto 
and if auto is used, the API checks a config option to determine the 
behavior - offload or retain. By default we would retain for backward 
compatibility. For users that don't care, they get auto and it's fine. 
For users that do care, they either (1) don't opt into the microversion 
or (2) specify the specific behavior they want. I don't think we need to 
expose what the cloud's configuration for auto is because again, if you 
don't care then it doesn't matter and if you do care, you can opt out of 
this.

"How do we get users to use the new microversion?" I'm glad you asked.

Well, nova CLI defaults to using the latest available microversion 
negotiated between the client and the server, so by default, anyone 
using "nova stop" would get the 'auto' behavior (assuming the client and 
server are new enough to support it). Long-term, openstack client plans 
on doing the same version negotiation.

As for the server status changes, if the server is stopped and shelved, 
the status would be 'SHELVED_OFFLOADED' rather than 'SHUTDOWN'. I 
believe this is fine especially if a user is not being specific and 
doesn't care about the actual backend behavior. On start, the API would 
allow starting (unshelving) shelved offloaded (rather than just stopped) 
instances. Trying to hide shelved servers as stopped in the API would be 
overly complex IMO so I don't want to try and mask that.

It is possible that a user that stopped and shelved their server could 
hit a NoValidHost when starting (unshelving) the server, but that really 
shouldn't happen in a cloud that's configuring nova to shelve by default 
because if they are doing this, their SLA needs to reflect they have the 
capacity to unshelve the server. If you can't honor that SLA, don't 
shelve by default.

So, what are the general feelings on this before I go off and start 
writing up a spec?

[1] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791681
[2] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791679
[3] https://blueprints.launchpad.net/nova/+spec/shelve-on-stop

-- 

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___

Re: [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on stop/suspend

2018-09-15 Thread Tim Bell
One extra user motivation that came up during past forums was to have a 
different quota for shelved instances (or remove them from the project quota 
altogether). Currently, I believe that a shelved instance still counts 
towards the instances/cores quota, so the reduction of usage by the user is 
not reflected in the quotas.

One discussion at the time was that the user is still reserving IPs so it is 
not zero resource usage and the instances still occupy storage.

(We disabled shelving for other reasons so I'm not able to check easily)

Tim

-Original Message-
From: Matt Riedemann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Saturday, 15 September 2018 at 01:27
To: "OpenStack Development Mailing List (not for usage questions)" 
, "openstack-operat...@lists.openstack.org" 
, "openstack-s...@lists.openstack.org" 

Subject: [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on   
stop/suspend

tl;dr: I'm proposing a new parameter to the server stop (and suspend?) 
APIs to control if nova shelve offloads the server.

Long form: This came up during the public cloud WG session this week 
based on a couple of feature requests [1][2]. When a user stops/suspends 
a server, the hypervisor frees up resources on the host but nova 
continues to track those resources as being used on the host so the 
scheduler can't put more servers there. What operators would like to do 
is that when a user stops a server, nova actually shelve offloads the 
server from the host so they can schedule new servers on that host. On 
start/resume of the server, nova would find a new host for the server. 
This also came up in Vancouver where operators would like to free up 
limited expensive resources like GPUs when the server is stopped. This 
is also the behavior in AWS.

The problem with shelve is that it's great for operators but users just 
don't use it, maybe because they don't know what it is and stop works 
just fine. So how do you get users to opt into shelving their server?

I've proposed a high-level blueprint [3] where we'd add a new 
(microversioned) parameter to the stop API with three options:

* auto
* offload
* retain

Naming is obviously up for debate. The point is we would default to auto 
and if auto is used, the API checks a config option to determine the 
behavior - offload or retain. By default we would retain for backward 
compatibility. For users that don't care, they get auto and it's fine. 
For users that do care, they either (1) don't opt into the microversion 
or (2) specify the specific behavior they want. I don't think we need to 
expose what the cloud's configuration for auto is because again, if you 
don't care then it doesn't matter and if you do care, you can opt out of 
this.
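
Purely to make the auto/offload/retain semantics concrete, a sketch of the decision 
the API would make is below; the enum values come from the proposal above, while the 
config option and its name are placeholders, since the blueprint is still high level.

```python
from enum import Enum


class StopBehavior(str, Enum):
    AUTO = "auto"        # defer to the operator's configured default
    OFFLOAD = "offload"  # stop, then shelve offload to free the host
    RETAIN = "retain"    # today's behavior: keep claiming host resources


# Placeholder for a nova.conf option (name made up); retain by default
# for backward compatibility, exactly as described above.
CONFIGURED_DEFAULT = StopBehavior.RETAIN


def resolve_stop_behavior(requested: StopBehavior) -> StopBehavior:
    """What the stop API would do with the new microversioned parameter."""
    if requested is StopBehavior.AUTO:
        return CONFIGURED_DEFAULT
    return requested  # the user explicitly asked for offload or retain


# A user on the new microversion who doesn't care gets 'auto', which
# resolves to whatever the cloud configured.
assert resolve_stop_behavior(StopBehavior.AUTO) is StopBehavior.RETAIN
```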

"How do we get users to use the new microversion?" I'm glad you asked.

Well, nova CLI defaults to using the latest available microversion 
negotiated between the client and the server, so by default, anyone 
using "nova stop" would get the 'auto' behavior (assuming the client and 
server are new enough to support it). Long-term, openstack client plans 
on doing the same version negotiation.

As for the server status changes, if the server is stopped and shelved, 
the status would be 'SHELVED_OFFLOADED' rather than 'SHUTDOWN'. I 
believe this is fine especially if a user is not being specific and 
doesn't care about the actual backend behavior. On start, the API would 
allow starting (unshelving) shelved offloaded (rather than just stopped) 
instances. Trying to hide shelved servers as stopped in the API would be 
overly complex IMO so I don't want to try and mask that.

It is possible that a user that stopped and shelved their server could 
hit a NoValidHost when starting (unshelving) the server, but that really 
shouldn't happen in a cloud that's configuring nova to shelve by default 
because if they are doing this, their SLA needs to reflect they have the 
capacity to unshelve the server. If you can't honor that SLA, don't 
shelve by default.

So, what are the general feelings on this before I go off and start 
writing up a spec?

[1] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791681
[2] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791679
[3] https://blueprints.launchpad.net/nova/+spec/shelve-on-stop

-- 

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [Openstack-operators] [openstack-dev] [all] Consistent policy names

2018-09-12 Thread Tim Bell
So +1

Tim

From: Lance Bragstad 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 12 September 2018 at 20:43
To: "OpenStack Development Mailing List (not for usage questions)" 
, OpenStack Operators 

Subject: [openstack-dev] [all] Consistent policy names

The topic of having consistent policy names has popped up a few times this 
week. Ultimately, if we are to move forward with this, we'll need a convention. 
To help with that a little bit I started an etherpad [0] that includes links to 
policy references, basic conventions *within* that service, and some examples 
of each. I got through quite a few projects this morning, but there are still a 
couple left.

The idea is to look at what we do today and see what conventions we can come up 
with to move towards, which should also help us determine how much each 
convention is going to impact services (e.g. picking a convention that will 
cause 70% of services to rename policies).
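
To make "convention" concrete, here is roughly what a single rule looks like when 
registered via oslo.policy; the `service:resource:action` style shown is just one 
hypothetical candidate, not an agreed format, and the check string and path are 
illustrative.

```python
from oslo_policy import policy

# Hypothetical example of one candidate convention
# ("<service>:<resource>:<action>") applied to a single documented rule.
# The name, check string and path are illustrative, not an agreed standard.
rules = [
    policy.DocumentedRuleDefault(
        name="compute:servers:list",
        check_str="role:reader",
        description="List servers in a project.",
        operations=[{"path": "/servers", "method": "GET"}],
    ),
]


def list_rules():
    """The usual entry point style a service uses to expose its defaults."""
    return rules
```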

Please have a look and we can discuss conventions in this thread. If we come to 
agreement, I'll start working on some documentation in oslo.policy so that it's 
somewhat official before we start renaming policies.

[0] https://etherpad.openstack.org/p/consistent-policy-names
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [all] Consistent policy names

2018-09-12 Thread Tim Bell
So +1

Tim

From: Lance Bragstad 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 12 September 2018 at 20:43
To: "OpenStack Development Mailing List (not for usage questions)" 
, OpenStack Operators 

Subject: [openstack-dev] [all] Consistent policy names

The topic of having consistent policy names has popped up a few times this 
week. Ultimately, if we are to move forward with this, we'll need a convention. 
To help with that a little bit I started an etherpad [0] that includes links to 
policy references, basic conventions *within* that service, and some examples 
of each. I got through quite a few projects this morning, but there are still a 
couple left.

The idea is to look at what we do today and see what conventions we can come up 
with to move towards, which should also help us determine how much each 
convention is going to impact services (e.g. picking a convention that will 
cause 70% of services to rename policies).

Please have a look and we can discuss conventions in this thread. If we come to 
agreement, I'll start working on some documentation in oslo.policy so that it's 
somewhat official because starting to renaming policies.

[0] https://etherpad.openstack.org/p/consistent-policy-names
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] leaving Openstack mailing lists

2018-09-06 Thread Tim Bell
Saverio,

And thanks for all your hard work with the openstack community, especially the 
Swiss OpenStack user group (https://www.meetup.com/openstack-ch/)

Hope to have a chance to work again together in the future.

Tim

From: Jimmy McArthur 
Date: Thursday, 6 September 2018 at 18:06
To: Amy 
Cc: "openstack-oper." 
Subject: Re: [Openstack-operators] leaving Openstack mailing lists

Make that a pleasure. Not a pressure. :\

Jimmy McArthur wrote:

Thank you Saverio! It was a pressure working with you, if only briefly.  Best 
of luck at your new gig and hope to see you around OpenStack land soon!

Cheers,
Jimmy

Amy wrote:

Saverio,

It was a pleasure working with you on the UC. Good luck in the new position and 
hopefully you’ll be back.

Thanks for all that you did,

Amy (spotz)
Sent from my iPhone

On Sep 6, 2018, at 6:59 AM, Blair Bethwaite <blair.bethwa...@gmail.com> wrote:
Good luck with whatever you are doing next Saverio, you've been a great asset 
to the community and will be missed!

On Thu, 6 Sep 2018 at 23:43, Saverio Proto <ziopr...@gmail.com> wrote:
Hello,

I will be leaving this mailing list in a few days.

I am going to a new job and I will not be involved with Openstack at
least in the short term future.
Still, it was great working with the Openstack community in the past few years.

If you need to reach me about any bug/patch/review that I submitted in
the past, just write directly to my email. I will try to give answers.

Cheers

Saverio

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


--
Cheers,
~Blairo
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Tim Bell

Given the partial retirement scenario (i.e. only racks A-C retired due to 
cooling constraints, racks D-F still active with the same old hardware but still 
useful for years), adding new hardware to old cells would not be optimal. 
I'm ignoring the long list of other things to worry about, such as preserving IP 
addresses etc.

Sounds like a good topic for PTG/Forum?

Tim

-Original Message-
From: Jay Pipes 
Date: Wednesday, 29 August 2018 at 22:12
To: Dan Smith , Tim Bell 
Cc: "openstack-operators@lists.openstack.org" 

Subject: Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold 
migration

On 08/29/2018 04:04 PM, Dan Smith wrote:
>> - The VMs to be migrated are generally not expensive
>> configurations, just hardware lifecycles where boxes go out of
>> warranty or computer centre rack/cooling needs re-organising. For
>> CERN, this is a 6-12 month frequency of ~10,000 VMs per year (with a
>> ~30% pet share)
>> - We make a cell from identical hardware at a single location, this
>> greatly simplifies working out hardware issues, provisioning and
>> management
>> - Some cases can be handled with the 'please delete and
>> re-create'. Many other cases need much user support/downtime (and
>> require significant effort or risk delaying retirements to get
>> agreement)
> 
> Yep, this is the "organizational use case" of cells I refer to. I assume
> that if one aisle (cell) is being replaced, it makes sense to stand up
> the new one as its own cell, migrate the pets from one to the other and
> then decommission the old one. Being only an aisle away, it's reasonable
> to think that *this* situation might not suffer from the complexity of
> needing to worry about heavyweight migrate network and storage.

For this use case, why not just add the new hardware directly into the 
existing cell and migrate the workloads onto the new hardware, then 
disable the old hardware and retire it?

I mean, there might be a short period of time where the cell's DB and MQ 
would be congested due to lots of migration operations, but it seems a 
lot simpler to me than trying to do cross-cell migrations when cells 
have been designed pretty much from the beginning of cellsv2 to not talk 
to each other or allow any upcalls.

Thoughts?
-jay


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold migration

2018-08-29 Thread Tim Bell
I've not followed all the arguments here regarding internals but CERN's 
background usage of Cells v2 (and thoughts on impact of cross cell migration) 
is below. Some background at 
https://www.openstack.org/videos/vancouver-2018/moving-from-cellsv1-to-cellsv2-at-cern.
 Some rough parameters with the team providing more concrete numbers if 
needed

- The VMs to be migrated are generally not expensive configurations, just 
hardware lifecycles where boxes go out of warranty or computer centre 
rack/cooling needs re-organising. For CERN, this is a 6-12 month frequency of 
~10,000 VMs per year (with a ~30% pet share)
- We make a cell from identical hardware at a single location, this greatly 
simplifies working out hardware issues, provisioning and management
- Some cases can be handled with the 'please delete and re-create'. Many other 
cases need much user support/downtime (and require significant effort or risk 
delaying retirements to get agreement)
- When a new hardware delivery is made, we would hope to define a new cell (as 
it is a different configuration)
- Depending on the facilities retirement plans, we would work out what needed 
to be moved to new resources
- There are many different scenarios for migration (either live or cold)
-- All instances in the old cell would be migrated to the new hardware which 
would have sufficient capacity
-- All instances in a single cell would be migrated to several different cells 
such as the new cells being smaller
-- Some instances would be migrated because those racks need to be retired but 
other servers in the cell would remain for a further year or two until 
retirement was mandatory

With many cells and multiple locations, spreading the hypervisors across the 
cells in anticipation of potential migrations is unattractive.

From my understanding, these models were feasible with Cells V1.

We can discuss further, at the PTG or Summit, on the operational flexibility 
which we have taken advantage of so far and alternative models.

Tim

-Original Message-
From: Dan Smith 
Date: Wednesday, 29 August 2018 at 18:47
To: Jay Pipes 
Cc: "openstack-operators@lists.openstack.org" 

Subject: Re: [Openstack-operators] [nova][cinder][neutron] Cross-cell cold  
migration

> A release upgrade dance involves coordination of multiple moving
> parts. It's about as similar to this scenario as I can imagine. And
> there's a reason release upgrades are not done entirely within Nova;
> clearly an external upgrade tool or script is needed to orchestrate
> the many steps and components involved in the upgrade process.

I'm lost here, and assume we must be confusing terminology or something.

> The similar dance for cross-cell migration is the coordination that
> needs to happen between Nova, Neutron and Cinder. It's called
> orchestration for a reason and is not what Nova is good at (as we've
> repeatedly seen)

Most other operations in Nova meet this criteria. Boot requires
coordination between Nova, Cinder, and Neutron. As do migrate, start,
stop, evacuate. We might decide that (for now) the volume migration
thing is beyond the line we're willing to cross, and that's cool, but I
think it's an arbitrary limitation we shouldn't assume is
impossible. Moving instances around *is* what nova is (supposed to be)
good at.

> The thing that makes *this* particular scenario problematic is that
> cells aren't user-visible things. User-visible things could much more
> easily be orchestrated via external actors, as I still firmly believe
> this kind of thing should be done.

I'm having a hard time reconciling these:

1. Cells aren't user-visible, and shouldn't be (your words and mine).
2. Cross-cell migration should be done by an external service (your
   words).
3. External services work best when things are user-visible (your words).

You say the user-invisible-ness makes orchestrating this externally
difficult and I agree, but...is your argument here just that it
shouldn't be done at all?

>> As we discussed in YVR most recently, it also may become an important
>> thing for operators and users where expensive accelerators are committed
>> to instances with part-time usage patterns.
>
> I don't think that's a valid use case in respect to this scenario of
> cross-cell migration.

You're right, it has nothing to do with cross-cell migration at all. I
was pointing to *other* legitimate use cases for shelve.

> Also, I'd love to hear from anyone in the real world who has
> successfully migrated (live or otherwise) an instance that "owns"
> expensive hardware (accelerators, SR-IOV PFs, GPUs or otherwise).

Again, the accelerator case has nothing to do with migrating across
cells, but merely demonstrates another example of where shelve may be
the thing operators 

Re: [openstack-dev] [tripleo][puppet] Hello all, puppet modules

2018-05-31 Thread Tim Bell
CERN uses these puppet modules too and contributes any missing functionality we 
need upstream.

Tim

From: Alex Schultz 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 31 May 2018 at 16:24
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [tripleo][puppet] Hello all, puppet modules



On Wed, May 30, 2018 at 3:18 PM, Remo Mattei <r...@rm.ht> wrote:
Hello all,
I have talked to several people about this and I would love to get this 
finalized once and for all. I have checked the OpenStack puppet modules, which 
are mostly developed by the Red Hat team. As of right now, TripleO is using a 
combo of Ansible and puppet to deploy, but in the next couple of releases the 
plan is to move away from the puppet option.


So the OpenStack puppet modules are maintained by others besides Red Hat; 
however, we have been a major contributor since TripleO has relied on them for 
some time.  That being said, as TripleO has migrated to containers built with 
Kolla, we've adapted our deployment mechanism to include Ansible and we really 
only use puppet for configuration generation.  Our goal for TripleO is to 
eventually be fully containerized which isn't something the puppet modules 
support today and I'm not sure is on the road map.


So consequently, what will be the plan of TripleO and the puppet modules?


As TripleO moves forward, we may continue to support deployments via puppet 
modules but the amount of testing that we'll be including upstream will mostly 
exercise external Ansible integrations (example, ceph-ansible, 
openshift-ansible, etc) and Kolla containers.  As of Queens, most of the 
services deployed via TripleO are deployed via containers and not on baremetal 
via puppet. We no longer support deploying OpenStack services on baremetal via 
the puppet modules and will likely be removing this support in the code in 
Stein.  The end goal will likely be moving away from puppet modules within 
TripleO if we can solve the backwards compatibility and configuration 
generation via other mechanisms.  We will likely recommend leveraging external 
Ansible role calls rather than including puppet modules and using those to 
deploy services that are not inherently supported by TripleO.  I can't really 
give a time frame as we are still working out the details, but it is likely 
that over the next several cycles we'll see a reduction in the dependence on 
puppet in TripleO and an increase in leveraging available Ansible roles.


From the Puppet OpenStack standpoint, others are stepping up to continue to 
ensure the modules are available and I know I'll keep an eye on them for as 
long as TripleO leverages some of the functionality.  The Puppet OpenStack 
modules are very stable but I'm not sure without additional community folks 
stepping up that there will be support for newer functionality being added by 
the various OpenStack projects.  I'm sure others can chime in here on their 
usage/plans for the Puppet OpenStack modules.


Hope that helps.


Thanks,
-Alex


Thanks

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-05-24 Thread Tim Bell
I'd like to understand the phrase "StarlingX is an OpenStack Foundation Edge 
focus area project".

My understanding of the current situation is that "StarlingX would like to be 
OpenStack Foundation Edge focus area project".

I have not been able to keep up with all of the discussions so I'd be happy for 
further URLs to help me understand the current situation and the processes 
(formal/informal) to arrive at this conclusion.

Tim

-Original Message-
From: Dean Troyer 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 23 May 2018 at 11:08
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

On Wed, May 23, 2018 at 11:49 AM, Colleen Murphy  
wrote:
> It's also important to make the distinction between hosting something on 
openstack.org infrastructure and recognizing it in an official capacity. 
StarlingX is seeking both, but in my opinion the code hosting is not the 
problem here.

StarlingX is an OpenStack Foundation Edge focus area project and is
seeking to use the CI infrastructure.  There may be a project or two
contained within that may make sense as OpenStack projects in the
not-called-big-tent-anymore sense but that is not on the table, there
is a lot of work to digest before we could even consider that.  Is
that the official capacity you are talking about?

dt

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-20

2018-05-15 Thread Tim Bell
From my memory, the LCOO was started in 2015 or 2016. The UC was started at the 
end of 2012, start of 2013 (https://www.openstack.org/blog/?p=3777) with Ryan, 
JC and I.

Tim

-Original Message-
From: Graham Hayes 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, 15 May 2018 at 18:22
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-20

..

> # LCOO
> 
> There's been some concern expressed about the The Large Contributing
> OpenStack Operators (LCOO) group and the way they operate. They use
> an [Atlassian Wiki](https://openstack-lcoo.atlassian.net/) and
> Slack, and have restricted membership. These things tend to not
> align with the norms for tool usage and collaboration in OpenStack.
> This topic came up in [late
> 
April](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-26.log.html#t2018-04-26T14:39:36)
> 
> but is worth revisiting in Vancouver.

From what I understand, this group came into being before the UC was
created - a joint UC/TC/LCOO sync up in Vancouver is probably a good
idea.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Windows images into OpenStack

2018-05-11 Thread Tim Bell


Watch out for the cores/sockets properties too. Desktop Windows can limit the 
available resources if every core is a different socket. See 
http://clouddocs.web.cern.ch/clouddocs/details/image_properties.html
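
For example, to cap a guest at two sockets with four cores each (a sketch using 
the standard hw_cpu_* Glance image properties; the right values depend on your 
licensing and flavour sizes):

    openstack image set \
        --property hw_cpu_sockets=2 \
        --property hw_cpu_cores=4 \
        <windows-image>

Without such properties, the default guest topology presents each vCPU as its 
own socket, which is exactly what trips up the desktop Windows socket limit.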

Tim

-Original Message-
From: Chris Friesen 
Date: Friday, 11 May 2018 at 19:05
To: "openstack@lists.openstack.org" 
Subject: Re: [Openstack] Windows images into OpenStack

On 05/11/2018 10:30 AM, Remo Mattei wrote:
> Hello guys, I now need to get a Windows VM into the OpenStack deployment. Can 
anyone suggest the best way to do this? I have done mostly Linux. I could use 
the ISO and build one within OpenStack, but I'm not sure I want to go that 
route. I have some Windows VMs that are coming from VMware.

Here are the instructions if you choose to go the ISO route:

https://docs.openstack.org/image-guide/windows-image.html

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack-operators] [openstack-dev] [nova] Default scheduler filters survey

2018-05-01 Thread Tim Bell
You may also need something like pre-emptible instances to arrange the clean up 
of opportunistic VMs when the owner needs his resources back. Some details on 
the early implementation at 
http://openstack-in-production.blogspot.fr/2018/02/maximizing-resource-utilization-with.html.

If you're in Vancouver, we'll be having a Forum session on this 
(https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21787/pre-emptible-instances-the-way-forward)
 and notes welcome on the etherpad 
(https://etherpad.openstack.org/p/YVR18-pre-emptible-instances)

It would be good to find common implementations since this is a common scenario 
in the academic and research communities.

Tim

-Original Message-
From: Dave Holland 
Date: Tuesday, 1 May 2018 at 10:40
To: Mathieu Gagné 
Cc: "OpenStack Development Mailing List (not for usage questions)" 
, openstack-operators 

Subject: Re: [Openstack-operators] [openstack-dev] [nova] Default scheduler 
filters survey

On Mon, Apr 30, 2018 at 12:41:21PM -0400, Mathieu Gagné wrote:
> Weighers for baremetal cells:
> * ReservedHostForTenantWeigher [7]
...
> [7] Used to favor reserved host over non-reserved ones based on project.

Hello Mathieu,

we are considering writing something like this, for virtual machines not
for baremetal. Our use case is that a project buying some compute
hardware is happy for others to use it, but when the compute "owner"
wants sole use of it, other projects' instances must be migrated off or
killed; a scheduler weigher like this might help us to minimise the
number of instances needing migration or termination at that point.
Would you be willing to share your source code please?
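
A minimal sketch of what such a weigher can look like (illustrative only -- the 
reserved_for_project aggregate metadata key is an assumed convention, not an 
existing one):

    # Illustrative sketch: prefer hosts whose aggregate metadata marks them
    # as reserved for the requesting project.
    from nova.scheduler import weights


    class ReservedHostForProjectWeigher(weights.BaseHostWeigher):

        def _weigh_object(self, host_state, request_spec):
            for aggregate in host_state.aggregates:
                reserved = aggregate.metadata.get('reserved_for_project')
                if reserved == request_spec.project_id:
                    # strongly prefer hosts reserved for this project
                    return 1.0
            # neutral weight for everything else
            return 0.0

Such a class would then be listed in [filter_scheduler]/weight_classes in 
nova.conf.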

thanks,
Dave
-- 
** Dave Holland ** Systems Support -- Informatics Systems Group **
** 01223 496923 **Wellcome Sanger Institute, Hinxton, UK**


-- 
 The Wellcome Sanger Institute is operated by Genome Research 
 Limited, a charity registered in England with number 1021457 and a 
 company registered in England with number 2742969, whose registered 
 office is 215 Euston Road, London, NW1 2BE. 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-26 Thread Tim Bell
My worry with changing the default is that it would be like adding the 
following in /etc/environment,

alias ls=' rm -rf / --no-preserve-root'

i.e. an operation which was previously read-only now becomes irreversible.

We also have current use cases with Ironic where we are moving machines between 
projects by 'disowning' them to the spare pool and then reclaiming them (by 
UUID) into new projects with the same state.

However, other operators may feel differently which is why I suggest asking 
what people feel about changing the default.

In any case, changes in default behaviour need to be highly visible.
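
For reference, the toggle being discussed is a single conductor option; an 
illustrative ironic.conf fragment (the erase priorities shown are one way to 
make cleaning cheaper, not a recommendation):

    [conductor]
    # the default being debated: clean nodes on every unprovision/enrol
    automated_clean = true

    [deploy]
    # wipe partition tables / metadata only, rather than a full disk erase
    erase_devices_priority = 0
    erase_devices_metadata_priority = 10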

Tim

-Original Message-
From: "arkady.kanev...@dell.com" <arkady.kanev...@dell.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Thursday, 26 April 2018 at 18:48
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

+1.
It would be good to also identify the use cases.
I'm surprised that a node would be cleaned up automatically.
I would expect that we want it to be a deliberate request from the 
administrator, or perhaps from the user when they "return" a node to the free 
pool after baremetal usage.
Thanks,
    Arkady

-Original Message-
From: Tim Bell [mailto:tim.b...@cern.ch] 
Sent: Thursday, April 26, 2018 11:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

How about asking the operators at the summit Forum or asking on 
openstack-operators to see what the users think?

Tim

-Original Message-
From: Ben Nemec <openst...@nemebean.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Thursday, 26 April 2018 at 17:39
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>, Dmitry Tantsur <dtant...@redhat.com>
Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default?



On 04/26/2018 09:24 AM, Dmitry Tantsur wrote:
> Answering to both James and Ben inline.
> 
> On 04/25/2018 05:47 PM, Ben Nemec wrote:
>>
>>
>> On 04/25/2018 10:28 AM, James Slagle wrote:
>>> On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur 
>>> <dtant...@redhat.com> wrote:
>>>> On 04/25/2018 04:26 PM, James Slagle wrote:
>>>>>
>>>>> On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur 
<dtant...@redhat.com>
>>>>> wrote:
>>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> I'd like to restart conversation on enabling node automated 
>>>>>> cleaning by
>>>>>> default for the undercloud. This process wipes partitioning 
tables
>>>>>> (optionally, all the data) from overcloud nodes each time they 
>>>>>> move to
>>>>>> "available" state (i.e. on initial enrolling and after each tear 
>>>>>> down).
>>>>>>
>>>>>> We have had it disabled for a few reasons:
>>>>>> - it was not possible to skip time-consuming wiping if data from 
>>>>>> disks
>>>>>> - the way our workflows used to work required going between 
>>>>>> manageable
>>>>>> and
>>>>>> available steps several times
>>>>>>
>>>>>> However, having cleaning disabled has several issues:
>>>>>> - a configdrive left from a previous deployment may confuse 
>>>>>> cloud-init
>>>>>> - a bootable partition left from a previous deployment may take
>>>>>> precedence
>>>>>> in some BIOS
>>>>>> - an UEFI boot partition left from a previous deployment is 
likely to
>>>>>> confuse UEFI firmware
>>>>>> - apparently ceph does not work correctly without cleaning (I'll 
>>>>>> defer to
>>>>>> the storage team to comment)
>>>>>>
>>>>>> For these reasons we don't recommend ha

Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-26 Thread Tim Bell
How about asking the operators at the summit Forum or asking on 
openstack-operators to see what the users think?

Tim

-Original Message-
From: Ben Nemec 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 26 April 2018 at 17:39
To: "OpenStack Development Mailing List (not for usage questions)" 
, Dmitry Tantsur 
Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default?



On 04/26/2018 09:24 AM, Dmitry Tantsur wrote:
> Answering to both James and Ben inline.
> 
> On 04/25/2018 05:47 PM, Ben Nemec wrote:
>>
>>
>> On 04/25/2018 10:28 AM, James Slagle wrote:
>>> On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur 
>>>  wrote:
 On 04/25/2018 04:26 PM, James Slagle wrote:
>
> On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur 
> wrote:
>>
>> Hi all,
>>
>> I'd like to restart conversation on enabling node automated 
>> cleaning by
>> default for the undercloud. This process wipes partitioning tables
>> (optionally, all the data) from overcloud nodes each time they 
>> move to
>> "available" state (i.e. on initial enrolling and after each tear 
>> down).
>>
>> We have had it disabled for a few reasons:
>> - it was not possible to skip time-consuming wiping if data from 
>> disks
>> - the way our workflows used to work required going between 
>> manageable
>> and
>> available steps several times
>>
>> However, having cleaning disabled has several issues:
>> - a configdrive left from a previous deployment may confuse 
>> cloud-init
>> - a bootable partition left from a previous deployment may take
>> precedence
>> in some BIOS
>> - an UEFI boot partition left from a previous deployment is likely to
>> confuse UEFI firmware
>> - apparently ceph does not work correctly without cleaning (I'll 
>> defer to
>> the storage team to comment)
>>
>> For these reasons we don't recommend having cleaning disabled, and I
>> propose
>> to re-enable it.
>>
>> It has the following drawbacks:
>> - The default workflow will require another node boot, thus becoming
>> several
>> minutes longer (incl. the CI)
>> - It will no longer be possible to easily restore a deleted overcloud
>> node.
>
>
> I'm trending towards -1, for these exact reasons you list as
> drawbacks. There has been no shortage of occurrences of users who have
> ended up with accidentally deleted overclouds. These are usually
> caused by user error or unintended/unpredictable Heat operations.
> Until we have a way to guarantee that Heat will never delete a node,
> or Heat is entirely out of the picture for Ironic provisioning, then
> I'd prefer that we didn't enable automated cleaning by default.
>
> I believe we had done something with policy.json at one time to
> prevent node delete, but I don't recall if that protected from both
> user initiated actions and Heat actions. And even that was not enabled
> by default.
>
> IMO, we need to keep "safe" defaults. Even if it means manually
> documenting that you should clean to prevent the issues you point out
> above. The alternative is to have no way to recover deleted nodes by
> default.


 Well, it's not clear what is "safe" here: protect people who explicitly
 delete their stacks or protect people who don't realize that a previous
 deployment may screw up their new one in a subtle way.
>>>
>>> The latter you can recover from, the former you can't if automated
>>> cleaning is true.
> 
> Nor can we recover from 'rm -rf / --no-preserve-root', but it's not a 
> reason to disable the 'rm' command :)
> 
>>>
>>> It's not just about people who explicitly delete their stacks (whether
>>> intentional or not). There could be user error (non-explicit) or
>>> side-effects triggered by Heat that could cause nodes to get deleted.
> 
> If we have problems with Heat, we should fix Heat or stop using it. What 
> you're saying is essentially "we prevent ironic from doing the right 
> thing because we're using a tool that can invoke 'rm -rf /' at a wrong 
> moment."
> 
>>>
>>> You couldn't recover from those scenarios if automated cleaning were
>>> true. Whereas you could always fix a deployment error by opting in to
>>> do an automated clean. Does Ironic 

[Openstack-operators] 4K block size

2018-04-23 Thread Tim Bell

Has anyone experience of working with local disks or volumes with 
physical/logical block sizes of 4K rather than 512?

There seems to be KVM support for this 
(http://fibrevillage.com/sysadmin/216-how-to-make-qemu-kvm-accept-4k-sector-sized-disks)
 but I could not see how to expose this via the appropriate flavors/volumes in 
an OpenStack environment.

Is there any performance improvement from moving to 4K rather than 512 byte 
sectors?
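
At the libvirt level the mechanism appears to be the <blockio> element on the 
disk definition (illustrative guest XML fragment; how to drive this from a 
flavor or image property is exactly the open question):

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/sdb'/>
      <target dev='vdb' bus='virtio'/>
      <blockio logical_block_size='4096' physical_block_size='4096'/>
    </disk>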

Tim

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier?

2018-04-23 Thread Tim Bell

One of the challenges in the academic sector is the time from lightbulb moment 
to code commit. Many of the academic resource opportunities are short term 
(e.g. PhDs, student projects, government-funded projects) and there is a 
latency in the current system to onboard, get the appropriate recognition in 
the community (such as by reviewing other changes) and then get the code 
committed.  This is a particular problem for the larger projects where the 
patch is not in one of the project goal areas for that release.

Not sure what the solution is but I would agree that there is a significant 
opportunity.

Tim

-Original Message-
From: Thierry Carrez 
Organization: OpenStack
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, 23 April 2018 at 18:11
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [tc] campaign question: How can we make 
contributing to OpenStack easier?

> Where else should we be looking for contributors?

Like other large open source projects, OpenStack has a lot of visibility
in the academic sector. I feel like we are less successful than others
in attracting contributions from there, and we could do a lot better by
engaging with them more directly.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Default scheduler filters survey

2018-04-18 Thread Tim Bell
I'd suggest asking on the openstack-operators list since there is only a subset 
of operators who follow openstack-dev.

Tim

-Original Message-
From: Chris Friesen 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 18 April 2018 at 18:34
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [nova] Default scheduler filters survey

On 04/18/2018 09:17 AM, Artom Lifshitz wrote:

> To that end, we'd like to know what filters operators are enabling in
> their deployment. If you can, please reply to this email with your
> [filter_scheduler]/enabled_filters (or
> [DEFAULT]/scheduler_default_filters if you're using an older version)
> option from nova.conf. Any other comments are welcome as well :)

RetryFilter
ComputeFilter
AvailabilityZoneFilter
AggregateInstanceExtraSpecsFilter
ComputeCapabilitiesFilter
ImagePropertiesFilter
NUMATopologyFilter
ServerGroupAffinityFilter
ServerGroupAntiAffinityFilter
PciPassthroughFilter


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap

2018-04-04 Thread Tim Bell
How about


  *   As an operator, I'd like to spin up the latest release to check whether 
a problem is fixed before reporting it upstream

We use this approach frequently with packstack. Ideally (as today with 
packstack), we’d do this inside a VM on a running OpenStack cloud… inception… ☺

Tim

From: Emilien Macchi 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 29 March 2018 at 23:35
To: OpenStack Development Mailing List 
Subject: [openstack-dev] [tripleo] PTG session about All-In-One installer: 
recap & roadmap

Greeting folks,

During the last PTG we spent time discussing some ideas around an All-In-One 
installer, using 100% of the TripleO bits to deploy a single node OpenStack 
very similar to what we have today with the containerized undercloud and what 
we also have with other tools like Packstack or Devstack.

https://etherpad.openstack.org/p/tripleo-rocky-all-in-one

One of the problems that we're trying to solve here is to give a simple tool 
for developers so they can both easily and quickly deploy an OpenStack for 
their needs.

"As a developer, I need to deploy OpenStack in a VM on my laptop, quickly and 
without complexity, reproducing the same exact same tooling as TripleO is 
using."
"As a Neutron developer, I need to develop a feature in Neutron and test it 
with TripleO in my local env."
"As a TripleO dev, I need to implement a new service and test its deployment in 
my local env."
"As a developer, I need to reproduce a bug in TripleO CI that blocks the 
production chain, quickly and simply."

Probably more use cases, but to me that's what came into my mind now.

Dan kicked-off a doc patch a month ago: https://review.openstack.org/#/c/547038/
And I just went ahead and proposed a blueprint: 
https://blueprints.launchpad.net/tripleo/+spec/all-in-one
So hopefully we can start prototyping something during Rocky.

Before talking about the actual implementation, I would like to gather feedback 
from people interested in the use-cases. If you recognize yourself in these 
use-cases and you're not using TripleO today to test your things because it's 
too complex to deploy, we want to hear from you.
I want to see feedback (positive or negative) about this idea. We need to 
gather ideas, use cases, needs, before we go design a prototype in Rocky.

Thanks everyone who'll be involved,
--
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] baremetal firmware lifecycle management

2018-03-30 Thread Tim Bell
We've experienced different firmware update approaches. This is a wish list 
rather than a requirement since, in the end, it can all be scripted if needed. 
Currently, these updates are manpower intensive and require a lot of 
co-ordination since the upgrade operation has to be performed by the hardware 
support team but the end user defines the intervention window.

a. Some BMC updates can be applied out of band, over the network with 
appropriate BMC rights. It would be very nice if Ironic could orchestrate these 
updates since they can be painful to organise. One aspect of this would be for 
Ironic to orchestrate the updates and keep track of success/failure along with 
the current version of the BMC firmware (maybe as a property?). Typical example 
of this is when a security flaw is found in a particular hardware model BMC and 
we want to update to the latest version given an image provided by the vendor.

b. A set of machines have been delivered but an incorrect BIOS setting is 
found. We want to reflash the BIOSes with the latest BIOS code/settings. This 
would generally be an operation requiring a reboot. We would ask our users to 
follow a procedure at their convenience to do so (within a window) and then we 
would force the change. An inventory of the current version would help to 
identify those who do not do the update and remind them.

c. A disk firmware issue is found. Similar to b) but there is also the 
possibility of partial completion, where some disks update correctly but others 
do not.

Overall, it would be great if we can find a way to allow self service hardware 
management where the end users can choose the right point to follow the 
firmware update process within a window and then we can force the upgrade if 
they do not do so.

Tim

-Original Message-
From: Julia Kreger 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, 30 March 2018 at 00:09
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [ironic] baremetal firmware lifecycle management

One of the topics that came up at during the Ironic sessions at the
Rocky PTG was firmware management.

During this discussion, we quickly reached the consensus that we
lacked the ability to discuss and reach a forward direction without:

* An understanding of capabilities and available vendor mechanisms
that can be used to consistently determine and assert desired firmware
on a baremetal node. Ideally, we could find a commonality of two or
more vendor mechanisms that can be abstracted cleanly into high level
actions. Ideally this would boil down to something as simple as
"list_firmware()" and "set_firmware()" (a rough sketch follows after
these bullets). Additionally there are surely some caveats we need to
understand, such as whether the firmware update must be done in a
particular state, and whether a particular prior condition or next
action is required for the update.

* An understanding of several use cases where a deployed node may need
to have specific firmware applied. We are presently aware of two
cases. The first being specific firmware is needed to match an
approved operational profile. The second being a desire to perform
ad-hoc changes or have new versions of firmware asserted while a node
has already been deployed.
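
A rough sketch of that list_firmware()/set_firmware() abstraction, purely to 
anchor the discussion (names and signatures are illustrative, not an agreed 
ironic design):

    # Illustrative only -- not an agreed ironic interface.
    import abc


    class FirmwareInterface(abc.ABC):
        """Hypothetical per-driver firmware management interface."""

        @abc.abstractmethod
        def list_firmware(self, task):
            """Return e.g. [{'component': 'bmc', 'version': '3.45'}, ...]."""

        @abc.abstractmethod
        def set_firmware(self, task, settings):
            """Apply the requested firmware images to the node.

            Depending on the component, this may require a particular
            provision/power state first and a reboot afterwards (the
            caveats mentioned above).
            """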

Naturally any insight that can be shared will help the community to
best model the interaction so we can determine next steps and
ultimately implementation details.

-Julia

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] OpenStack User Survey: Identity Service, Networking and Block Storage Drivers Answer Options

2018-03-29 Thread Tim Bell
Allison,

In the past, there has been some confusion on the ML2 driver since many of the 
drivers are both ML2 based and have specific drivers. Had you an approach in 
mind for this time?

It does mean that the results won’t be directly comparable but cleaning up this 
confusion would seem worth it in the longer term.

Tim

From: Allison Price 
Date: Thursday, 29 March 2018 at 19:24
To: openstack-operators 
Subject: [Openstack-operators] OpenStack User Survey: Identity Service, 
Networking and Block Storage Drivers Answer Options

Hi everyone,

We are opening the OpenStack User Survey submission process next month and 
wanted to collect operator feedback on the answer choices for three particular 
questions: Identity Service (Keystone) drivers, Network (Neutron) drivers and 
Block Storage (Cinder) drivers. We want to make sure that we have a list of the 
most commonly used drivers so that we can collect the appropriate data from 
OpenStack users. Each of the questions will have a free text “Other” option, so 
they don’t need to be comprehensive, but if you think that there is a driver 
that should be included, please reply on this email thread or contact me 
directly.

Thanks!
Allison


Allison Price
OpenStack Foundation
alli...@openstack.org


Which OpenStack Identity Service (Keystone) drivers are you using?
· Active Directory
· KVS
· LDAP
· PAM
· SQL (default)
· Templated
· Other

Which OpenStack Network (Neutron) drivers are you using?
· Cisco UCS / Nexus
· ML2 - Cisco APIC
· ML2 - Linux Bridge
· ML2 - Mellanox
· ML2 - MidoNet
· ML2 - OpenDaylight
· ML2 - Open vSwitch
· nova-network
· VMware NSX (formerly Nicira NVP)
· A10 Networks
· Arista
· Big Switch
· Brocade
· Embrane
· Extreme Networks
· Hyper-V
· IBM SDN-VE
· Linux Bridge
· Mellanox
· Meta Plugin
· MidoNet
· Modular Layer 2 Plugin (ML2)
· NEC OpenFlow
· OpenDaylight
· Nuage Networks
· One Convergence NVSD
· Tungsten Fabric (OpenContrail)
· Open vSwitch
· PLUMgrid
· Ruijie Networks
· Ryu OpenFlow Controller
· ML2 - Alcatel-Lucent Omniswitch
· ML2 - Arista
· ML2 - Big Switch
· ML2 - Brocade VDX/VCS
· ML2 - Calico
· ML2 - Cisco DFA
· ML2 - Cloudbase Hyper-V
· ML2 - Freescale SDN
· ML2 - Freescale FWaaS
· ML2 - Fujitsu Converged Fabric Switch
· ML2 - Huawei Agile Controller
· ML2 - Mellanox SR-IOV
· ML2 - Nuage Networks
· ML2 - One Convergence
· ML2 - ONOS
· ML2 - OpenFlow Agent
· ML2 - Pluribus
· ML2 - Tail-f
· ML2 - VMware DVS
· Other

Which OpenStack Block Storage (Cinder) drivers are you using?
· Ceph RBD
· Coraid
· Dell EqualLogic
· EMC
· GlusterFS
· HDS
· HP 3PAR
· HP LeftHand
· Huawei
· IBM GPFS
· IBM NAS
· IBM Storwize
· IBM XIV / DS8000
· LVM (default)
· Mellanox
· NetApp
· Nexenta
· NFS
· ProphetStor
· SAN / Solaris
· Scality
· Sheepdog
· SolidFire
· VMware VMDK
· Windows Server 2012
· Xenapi NFS
· XenAPI Storage Manager
· Zadara
· Other







___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack] Clock Drift

2018-03-25 Thread Tim Bell
Are you snapshotting the VMs? We’ve seen some delays while the VM is paused and 
being snapshotted and then there is too much time difference for NTP to catch 
up again…
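
If snapshots are the cause, one mitigation is to let the guest step the clock 
rather than only slewing it (illustrative chrony.conf fragment; ntpd has 
similar -g / "tinker panic" options):

    # /etc/chrony.conf in the guest
    # allow the clock to be stepped at any time when it is more than 1s off,
    # so a large post-snapshot offset is corrected immediately
    makestep 1.0 -1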

Tim

From: Tyler Bishop 
Date: Saturday, 24 March 2018 at 18:37
To: Pablo Iranzo Gómez 
Cc: "openstack@lists.openstack.org" 
Subject: Re: [Openstack] Clock Drift

3 Sources.  CentOS NTP pool and 2 internal.

_
Tyler Bishop
EST 2007


O: 513-299-7108 x1000
M: 513-646-5809
http://BeyondHosting.net



On Fri, Mar 23, 2018 at 3:03 AM, Pablo Iranzo Gómez 
> wrote:
+++ Chris Friesen [22/03/18 16:22 -0600]:
On 03/21/2018 08:17 PM, Tyler Bishop wrote:
We've been fighting a constant clock skew issue lately on 4 of our clusters.
 They all use NTP but seem to go into WARN every 12 hours or so.

Anyone else experiencing this?

What clock are you using in the guest?


And how many NTPD sources?




Chris


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : 
openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

--

Pablo Iranzo Gómez (pablo.ira...@redhat.com)
  GnuPG: 0x5BD8E1E4
Senior Software Maintenance Engineer - OpenStack   iranzo @ IRC
RHC{A,SS,DS,VA,E,SA,SP,AOSP}, JBCAA#110-215-852RHCA Level V

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : 
openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack-operators] Ops Meetup, Co-Location options, and User Feedback

2018-03-20 Thread Tim Bell

Interesting debate, thanks for raising it.

Would we still need the same style of summit Forum if we have the OpenStack 
Community Working Gathering? One thing I have found with the Forum running all 
week throughout the summit is that it tends to draw the audience away from 
other talks, so maybe we could reduce the Forum to only a subset of the summit 
time?

Would increasing the attendance level also lead to an increased entrance price 
compared to the PTG? I seem to remember the Ops meetup entrance price was 
nominal.

Getting the input from the OpenStack days would be very useful to get coverage. 
I've found them to be well organised community events with good balance between 
local companies and interesting talks.

Tim

-Original Message-
From: Jeremy Stanley 
Date: Tuesday, 20 March 2018 at 19:15
To: openstack-operators 
Subject: Re: [Openstack-operators] Ops Meetup, Co-Location options, and User 
Feedback

On 2018-03-20 10:37:21 -0500 (-0500), Jimmy McArthur wrote:
[...]
> We have an opportunity to co-locate the Ops Meetup at the PTG.
[...]

To echo what others have said so far, I'm wholeheartedly in favor of
this idea.

It's no secret I'm not a fan of the seemingly artificial schism in
our community between contributors who mostly write software and
contributors who mostly run software. There's not enough crossover
with the existing event silos, and I'd love to see increasing
opportunities for those of us who mostly write software to
collaborate more closely with those who mostly run software (and
vice versa). Having dedicated events and separate named identities
for these overlapping groups of people serves only to further divide
us, rather than bring us together where we can better draw on our
collective strengths to make something great.
-- 
Jeremy Stanley


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [nova] about rebuild instance booted from volume

2018-03-15 Thread Tim Bell
Deleting all snapshots would seem dangerous though...

1. I want to reset my instance to how it was before
2. I'll just do a snapshot in case I need any data in the future
3. rebuild
4. oops

Tim

-Original Message-
From: Ben Nemec 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 15 March 2018 at 20:42
To: Dan Smith 
Cc: "OpenStack Development Mailing List (not for usage questions)" 
, openstack-operators 

Subject: Re: [openstack-dev] [nova] about rebuild instance booted from volume



On 03/15/2018 09:46 AM, Dan Smith wrote:
>> Rather than overload delete_on_termination, could another flag like
>> delete_on_rebuild be added?
> 
> Isn't delete_on_termination already the field we want? To me, that field
> means "nova owns this". If that is true, then we should be able to
> re-image the volume (in-place is ideal, IMHO) and if not, we just
> fail. Is that reasonable?

If that's what the flag means then it seems reasonable.  I got the 
impression from the previous discussion that not everyone was seeing it 
that way though.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] How are you handling billing/chargeback?

2018-03-14 Thread Tim Bell
We’re using a combination of cASO (https://caso.readthedocs.io/en/stable/) and 
some low level libvirt fabric monitoring. The showback accounting reports are 
generated with merging with other compute/storage usage across various systems 
(HTCondor, SLURM, ...)

It would seem that those who needed solutions in the past found they had to do 
them themselves. It would be interesting if there are references of usage 
data/accounting/chargeback at scale with the current project set but doing the 
re-evaluation would be an effort which would need to be balanced versus just 
keeping the local solution working.

Tim

-Original Message-
From: Lars Kellogg-Stedman 
Date: Wednesday, 14 March 2018 at 17:15
To: openstack-operators 
Subject: Re: [Openstack-operators] How are you handling billing/chargeback?

On Mon, Mar 12, 2018 at 03:21:13PM -0400, Lars Kellogg-Stedman wrote:
> I'm curious what folks out there are using for chargeback/billing in
> your OpenStack environment.

So far it looks like everyone is using a homegrown solution.  Is
anyone using an existing product/project?

-- 
Lars Kellogg-Stedman  | larsks @ {irc,twitter,github}
http://blog.oddbit.com/|

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [nova] about rebuild instance booted from volume

2018-03-14 Thread Tim Bell
Matt,

To add another scenario and make things even more difficult (sorry), if the 
original volume has snapshots, I don't think you can delete it.

Tim


-Original Message-
From: Matt Riedemann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 14 March 2018 at 14:55
To: "openstack-dev@lists.openstack.org" , 
openstack-operators 
Subject: Re: [openstack-dev] [nova] about rebuild instance booted from volume

On 3/14/2018 3:42 AM, 李杰 wrote:
> 
> This is the spec about rebuilding an instance booted from volume. In the
> spec, there is a question about whether we should delete the old root
> volume. Anyone who is interested in boot from volume can help to review
> this. Any suggestion is welcome. Thank you!
> The link is here.
> Re: the rebuild spec: https://review.openstack.org/#/c/532407/

Copying the operators list and giving some more context.

This spec is proposing to add support for rebuild with a new image for 
volume-backed servers, which today is just a 400 failure in the API 
since the compute doesn't support that scenario.

With the proposed solution, the backing root volume would be deleted and 
a new volume would be created from the new image, similar to how boot 
from volume works.

The question raised in the spec is whether or not nova should delete the 
root volume even if its delete_on_termination flag is set to False. The 
semantics get a bit weird here since that flag was not meant for this 
scenario, it's meant to be used when deleting the server to which the 
volume is attached. Rebuilding a server is not deleting it, but we would 
need to replace the root volume, so what do we do with the volume we're 
replacing?
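
For context, this flag is the one users set at boot time in the block device 
mapping, e.g. with the nova CLI (illustrative; shutdown=remove corresponds to 
delete_on_termination=True, shutdown=preserve to False):

    nova boot --flavor m1.small \
      --block-device source=image,id=<image-uuid>,dest=volume,size=20,shutdown=preserve,bootindex=0 \
      my-bfv-server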

Do we say that delete_on_termination only applies to deleting a server 
and not rebuild and therefore nova can delete the root volume during a 
rebuild?

If we don't delete the volume during rebuild, we could end up leaving a 
lot of volumes lying around that the user then has to clean up, 
otherwise they'll eventually go over quota.

We need user (and operator) feedback on this issue and what they would 
expect to happen.

-- 

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] PTG Summary

2018-03-12 Thread Tim Bell
My worry with re-running the burn-in every time we do cleaning is resource 
utilisation. When the machines are running the burn-in, they're not doing 
useful physics, so I would want to minimise the number of times this is run 
over the lifetime of a machine.

It may be possible to do something like the burn-in with a dedicated set of 
steps but still use the cleaning state machine.

Having a cleaning step set (i.e. burn-in means 
cpuburn,memtest,badblocks,benchmark) would make it more friendly for the 
administrator. Similarly, retirement could be done with additional steps such 
as reset2factory.
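
For illustration, invoking such a step set through the existing manual cleaning 
API might look something like this (the burn-in step names are hypothetical and 
would have to be provided, e.g. by a hardware manager):

    openstack baremetal node clean \
        --clean-steps '[{"interface": "deploy", "step": "burnin_cpu"},
                        {"interface": "deploy", "step": "burnin_memory"},
                        {"interface": "deploy", "step": "burnin_disk"}]' \
        <node>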

Tim

-Original Message-
From: Dmitry Tantsur <dtant...@redhat.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Monday, 12 March 2018 at 12:47
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [ironic] PTG Summary

Hi Tim,

Thanks for the information.

I personally don't see problems with cleaning running weeks, when needed. 
What 
I'd avoid is replicating the same cleaning machinery but with a different 
name. 
I think we should try to make cleaning work for this case instead.

Dmitry
    
On 03/12/2018 12:33 PM, Tim Bell wrote:
> Julia,
> 
> A basic summary of how CERN does burn-in is at 
http://openstack-in-production.blogspot.ch/2018/03/hardware-burn-in-in-cern-datacenter.html
> 
> Given that the burn in takes weeks to run, we'd see it as a different 
step to cleaning (with some parts in common such as firmware upgrades to latest 
levels)
> 
> Tim
> 
> -Original Message-
> From: Julia Kreger <juliaashleykre...@gmail.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
> Date: Thursday, 8 March 2018 at 22:10
> To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [ironic] PTG Summary
> 
> ...
>  Cleaning - Burn-in
>  
>  As part of discussing cleaning changes, we discussed supporting a
>  "burn-in" mode where hardware could be left to run load, memory, or
>  other tests for a period of time. We did not have consensus on a
>  generic solution, other than that this should likely involve
>  clean-steps that we already have, and maybe another entry point into
>  cleaning. Since we didn't really have consensus on use cases, we
>  decided the logical thing was to write them down, and then go from
>  there.
>  
>  Action Items:
>  * Community members to document varying burn-in use cases for
>  hardware, as they may vary based upon industry.
>  * Community to try and come up with a couple example clean-steps.
>  
>  
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] PTG Summary

2018-03-12 Thread Tim Bell
Julia,

A basic summary of how CERN does burn-in is at 
http://openstack-in-production.blogspot.ch/2018/03/hardware-burn-in-in-cern-datacenter.html

Given that the burn in takes weeks to run, we'd see it as a different step to 
cleaning (with some parts in common such as firmware upgrades to latest levels)

Tim

-Original Message-
From: Julia Kreger 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 8 March 2018 at 22:10
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [ironic] PTG Summary

...
Cleaning - Burn-in

As part of discussing cleaning changes, we discussed supporting a
"burn-in" mode where hardware could be left to run load, memory, or
other tests for a period of time. We did not have consensus on a
generic solution, other than that this should likely involve
clean-steps that we already have, and maybe another entry point into
cleaning. Since we didn't really have consensus on use cases, we
decided the logical thing was to write them down, and then go from
there.

Action Items:
* Community members to document varying burn-in use cases for
hardware, as they may vary based upon industry.
* Community to try and come up with a couple example clean-steps.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Pros and Cons of face-to-face meetings

2018-03-08 Thread Tim Bell
Fully agree with Doug. At CERN, we use video conferencing for 100s, sometimes 
>1000 participants for the LHC experiments. The trick we've found is to fully 
embrace the chat channels (so remote non-native English speakers can provide 
input) and chairs/vectors who can summarise the remote questions 
constructively, with appropriate priority.

This is actually very close to the etherpad approach: we benefit from the local 
bandwidth if available but do not exclude those who do not have it (or the 
language skills to do it in real time).

Tim

-Original Message-
From: Doug Hellmann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 8 March 2018 at 20:00
To: openstack-dev 
Subject: Re: [openstack-dev] Pros and Cons of face-to-face meetings

Excerpts from Jeremy Stanley's message of 2018-03-08 18:34:51 +:
> On 2018-03-08 12:16:18 -0600 (-0600), Jay S Bryant wrote:
> [...]
> > Cinder has been doing this for many years and it has worked
> > relatively well. It requires a good remote speaker and it also
> > requires the people in the room to be sensitive to the needs of
> > those who are remote. I.E. planning topics at a time appropriate
> > for the remote attendees, ensuring everyone speaks up, etc. If
> > everyone, however, works to be inclusive with remote participants
> > it works well.
> > 
> > We have even managed to make this work between separate mid-cycles
> > (Cinder and Nova) in the past before we did PTGs.
> [...]
> 
> I've seen it work okay when the number of remote participants is
> small and all are relatively known to the in-person participants.
> Even so, bridging Doug into the TC discussion at the PTG was
> challenging for all participants.

I agree, and I'll point out I was just across town (snowed in at a
different hotel).

The conversation the previous day with just the 5-6 people on the
release team worked a little bit better, but was still challenging
at times because of audio quality issues.

So, yes, this can be made to work. It's not trivial, though, and
the degree to which it works depends a lot on the participants on
both sides of the connection. I would not expect us to be very
productive with a large number of people trying to be active in the
conversation remotely.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [Openstack-sigs] [openstack-dev] [keystone] [oslo] new unified limit library

2018-03-07 Thread Tim Bell
I think nested quotas would give the same thing, i.e. you have a parent project 
for the group and child projects for the users. This would not need user/group 
quotas but continue with the ‘project owns resources’ approach.

It can be generalised to other use cases like the value add partner or the 
research experiment working groups 
(http://openstack-in-production.blogspot.fr/2017/07/nested-quota-models.html)
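
As a sketch of the shape of this (illustrative project names; the quota side 
still depends on nested quota support in each service):

    # one parent project per group/experiment, one child project per user
    openstack project create --domain default atlas
    openstack project create --domain default --parent atlas atlas-alice
    openstack project create --domain default --parent atlas atlas-bob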

Tim

From: Zhipeng Huang <zhipengh...@gmail.com>
Reply-To: "openstack-s...@lists.openstack.org" 
<openstack-s...@lists.openstack.org>
Date: Wednesday, 7 March 2018 at 17:37
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-...@lists.openstack.org>, openstack-operators 
<openstack-operators@lists.openstack.org>, "openstack-s...@lists.openstack.org" 
<openstack-s...@lists.openstack.org>
Subject: Re: [Openstack-sigs] [openstack-dev] [keystone] [oslo] new unified 
limit library

This is certainly a feature will make Public Cloud providers very happy :)

On Thu, Mar 8, 2018 at 12:33 AM, Tim Bell 
<tim.b...@cern.ch<mailto:tim.b...@cern.ch>> wrote:
Sorry, I remember more detail now... it was using the 'owner' of the VM as part 
of the policy rather than quota.

Is there a per-user/per-group quota in Nova?

Tim

-Original Message-
From: Tim Bell <tim.b...@cern.ch<mailto:tim.b...@cern.ch>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-...@lists.openstack.org<mailto:openstack-...@lists.openstack.org>>
Date: Wednesday, 7 March 2018 at 17:29
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-...@lists.openstack.org<mailto:openstack-...@lists.openstack.org>>
Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library


There was discussion that Nova would deprecate the user quota feature since 
it really didn't fit well with the 'projects own resources' approach and was 
little used. At one point, some of the functionality stopped working and was 
repaired. The use case we had identified goes away if you have 2 level deep 
nested quotas (and we have now worked around it).

Tim
-Original Message-
From: Lance Bragstad <lbrags...@gmail.com<mailto:lbrags...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-...@lists.openstack.org<mailto:openstack-...@lists.openstack.org>>
Date: Wednesday, 7 March 2018 at 16:51
To: 
"openstack-...@lists.openstack.org<mailto:openstack-...@lists.openstack.org>" 
<openstack-...@lists.openstack.org<mailto:openstack-...@lists.openstack.org>>
Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library



On 03/07/2018 09:31 AM, Chris Friesen wrote:
> On 03/07/2018 08:58 AM, Lance Bragstad wrote:
>> Hi all,
>>
]
>
> 1) Nova currently supports quotas for a user/group tuple that can be
> stricter than the overall quotas for that group.  As far as I know no
> other project supports this.
...
I think the initial implementation of a unified limit pattern is
targeting limits and quotas for things associated to projects. In the
future, we can probably expand on the limit information in keystone to
include user-specific limits, which would be great if nova wants to move
away from handling that kind of stuff.
>
> Chris
>
> 
__
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com<mailto:huang

Re: [openstack-dev] [Openstack-sigs] [keystone] [oslo] new unified limit library

2018-03-07 Thread Tim Bell
I think nested quotas would give the same thing, i.e. you have a parent project 
for the group and child projects for the users. This would not need user/group 
quotas but continue with the ‘project owns resources’ approach.

It can be generalised to other use cases like the value add partner or the 
research experiment working groups 
(http://openstack-in-production.blogspot.fr/2017/07/nested-quota-models.html)
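
As a rough sketch of that layout (project names are made up, and how strictly the
child quota is enforced depends on the quota driver in use):

openstack project create expgroup
openstack project create --parent expgroup alice
openstack quota set --instances 10 --cores 40 alice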

Tim

From: Zhipeng Huang <zhipengh...@gmail.com>
Reply-To: "openstack-s...@lists.openstack.org" 
<openstack-s...@lists.openstack.org>
Date: Wednesday, 7 March 2018 at 17:37
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>, openstack-operators 
<openstack-operat...@lists.openstack.org>, "openstack-s...@lists.openstack.org" 
<openstack-s...@lists.openstack.org>
Subject: Re: [Openstack-sigs] [openstack-dev] [keystone] [oslo] new unified 
limit library

This is certainly a feature that will make Public Cloud providers very happy :)

On Thu, Mar 8, 2018 at 12:33 AM, Tim Bell 
<tim.b...@cern.ch<mailto:tim.b...@cern.ch>> wrote:
Sorry, I remember more detail now... it was using the 'owner' of the VM as part 
of the policy rather than quota.

Is there a per-user/per-group quota in Nova?

Tim

-Original Message-
From: Tim Bell <tim.b...@cern.ch<mailto:tim.b...@cern.ch>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, 7 March 2018 at 17:29
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library


There was discussion that Nova would deprecate the user quota feature since 
it really didn't fit well with the 'projects own resources' approach and was 
little used. At one point, some of the functionality stopped working and was 
repaired. The use case we had identified goes away if you have 2 level deep 
nested quotas (and we have now worked around it).

Tim
-Original Message-
From: Lance Bragstad <lbrags...@gmail.com<mailto:lbrags...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, 7 March 2018 at 16:51
To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library



On 03/07/2018 09:31 AM, Chris Friesen wrote:
> On 03/07/2018 08:58 AM, Lance Bragstad wrote:
>> Hi all,
>>
]
>
> 1) Nova currently supports quotas for a user/group tuple that can be
> stricter than the overall quotas for that group.  As far as I know no
> other project supports this.
...
I think the initial implementation of a unified limit pattern is
targeting limits and quotas for things associated to projects. In the
future, we can probably expand on the limit information in keystone to
include user-specific limits, which would be great if nova wants to move
away from handling that kind of stuff.
>
> Chris
>
> 
__
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com<mailto:huang

Re: [openstack-dev] [keystone] [oslo] new unified limit library

2018-03-07 Thread Tim Bell
Sorry, I remember more detail now... it was using the 'owner' of the VM as part 
of the policy rather than quota.

Is there a per-user/per-group quota in Nova?

Tim

-Original Message-
From: Tim Bell <tim.b...@cern.ch>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Wednesday, 7 March 2018 at 17:29
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library


There was discussion that Nova would deprecate the user quota feature since 
it really didn't fit well with the 'projects own resources' approach and was 
little used. At one point, some of the functionality stopped working and was 
repaired. The use case we had identified goes away if you have 2 level deep 
nested quotas (and we have now worked around it). 

Tim
-Original Message-
From: Lance Bragstad <lbrags...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Wednesday, 7 March 2018 at 16:51
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library



On 03/07/2018 09:31 AM, Chris Friesen wrote:
> On 03/07/2018 08:58 AM, Lance Bragstad wrote:
>> Hi all,
>>
]
>
> 1) Nova currently supports quotas for a user/group tuple that can be
> stricter than the overall quotas for that group.  As far as I know no
> other project supports this.
...
I think the initial implementation of a unified limit pattern is
targeting limits and quotas for things associated to projects. In the
future, we can probably expand on the limit information in keystone to
include user-specific limits, which would be great if nova wants to move
away from handling that kind of stuff.
>
> Chris
>
> 
__
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] new unified limit library

2018-03-07 Thread Tim Bell

There was discussion that Nova would deprecate the user quota feature since it 
really didn't fit well with the 'projects own resources' approach and was 
little used. At one point, some of the functionality stopped working and was 
repaired. The use case we had identified goes away if you have 2 level deep 
nested quotas (and we have now worked around it). 

Tim
-Original Message-
From: Lance Bragstad 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 7 March 2018 at 16:51
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library



On 03/07/2018 09:31 AM, Chris Friesen wrote:
> On 03/07/2018 08:58 AM, Lance Bragstad wrote:
>> Hi all,
>>
]
>
> 1) Nova currently supports quotas for a user/group tuple that can be
> stricter than the overall quotas for that group.  As far as I know no
> other project supports this.
...
I think the initial implementation of a unified limit pattern is
targeting limits and quotas for things associated to projects. In the
future, we can probably expand on the limit information in keystone to
include user-specific limits, which would be great if nova wants to move
away from handling that kind of stuff.
>
> Chris
>
> __
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Inverted drive letters on block devices that use virtio-scsi

2018-01-25 Thread Tim Bell
Labels can be one approach, where you mount by disk label rather than by device name.

Creating the volume with the label


# mkfs -t ext4 -L testvol /dev/vdb


/etc/fstab then contains

LABEL=testvol  /mnt  ext4  noatime,nodiratime,user_xattr  0  0

You still need to be careful not to attach data disks at install time, though, 
but it addresses boot ordering problems.
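
To double-check what a device is labelled before relying on it, something along
these lines works:

blkid /dev/vdb
mount LABEL=testvol /mnt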

Tim

From: Jean-Philippe Méthot 
Date: Friday, 26 January 2018 at 07:28
To: "Logan V." 
Cc: openstack-operators 
Subject: Re: [Openstack-operators] Inverted drive letters on block devices that 
use virtio-scsi

Yea, the configdrive is a non-issue for us since we don’t use those. The 
multi-drive issue is the only one really affecting us. While removing the 
second drive and reattaching it after boot is probably a good solution, I think 
it’s likely the issue will come back after a hard reboot or migration. Probably 
better to wait before I start converting my multi-disk instances to 
virtio-scsi. If I am not mistaken, this should also be an issue in Pike and 
master, right?

Jean-Philippe Méthot
Openstack system administrator
Administrateur système Openstack
PlanetHoster inc.





Le 26 janv. 2018 à 14:23, Logan V. 
> a écrit :

There is a small patch in the bug which resolves the config drive
ordering. Without that patch I don't know of any workaround. The
config drive will always end up first in the boot order and the
instance will always fail to boot in that situation.

For the multi-volume instances where the boot volume is out of order,
I don't know of any patch for that. One workaround is to detach any
secondary data volumes from the instance, and then reattach them after
booting from the one and only attached boot volume.
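
In CLI terms that workaround is roughly (server and volume names are made up):

openstack server remove volume my-instance data-vol
openstack server reboot --hard my-instance
openstack server add volume my-instance data-vol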

Logan

On Thu, Jan 25, 2018 at 10:21 PM, Jean-Philippe Méthot
> wrote:

Thank you, it indeed seems to be the same issue. I will be following this
bug report. A shame too, because we were waiting for the patch to allow us
to setup 2 drives on virtio-scsi before starting to make the change. In the
meantime, have you found a way to circumvent the issue? Could it be as easy
as changing the drive order in the database?


Jean-Philippe Méthot
Openstack system administrator
Administrateur système Openstack
PlanetHoster inc.




Le 26 janv. 2018 à 13:06, Logan V. 
> a écrit :

https://bugs.launchpad.net/nova/+bug/1729584


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Custom libvirt fragment for instance type?

2018-01-16 Thread Tim Bell
If you want to hide the VM signature, you can use the img_hide_hypervisor_id 
property 
(https://docs.openstack.org/python-glanceclient/latest/cli/property-keys.html)
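
For example, something like this on the image (image name is made up):

openstack image set --property img_hide_hypervisor_id=true my-image

Instances booted from that image should then have the hypervisor signature hidden
without per-flavor libvirt changes.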

Tim

-Original Message-
From: jon 
Date: Tuesday, 16 January 2018 at 21:14
To: openstack-operators 
Subject: [Openstack-operators] Custom libvirt fragment for instance type?

Hi All,

Looking for a way to inject:

 <features>
   <kvm>
     <hidden state='on'/>
   </kvm>
 </features>

into the libvirt.xml for instances of a particular flavor.

My needs could also be met by attatching it to the glance image or if
needs be per hypervisor.

My Googling is not turning up anything.  Is there any way to set
arbitray (or this particular) Libvirt/KVM freature?

Thanks,
-Jon

-- 

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [all] [tc] Community Goals for Rocky

2018-01-12 Thread Tim Bell
I was reading a tweet from Jean-Daniel and wondering if there would be an 
appropriate community goal regarding support of some of the later API versions 
or whether this would be more of a per-project goal.

https://twitter.com/pilgrimstack/status/951860289141641217

Interesting numbers about customers tools used to talk to our @OpenStack APIs 
and the Keystone v3 compatibility:
- 10% are not KeystoneV3 compatible
- 16% are compatible
- for the rest, the tools documentation has no info

I think Keystone V3 and Glance V2 are the ones with APIs which have moved on 
significantly from the initial implementations and not all projects have been 
keeping up.

Tim

-Original Message-
From: Emilien Macchi 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, 12 January 2018 at 16:51
To: OpenStack Development Mailing List 
Subject: Re: [openstack-dev] [all] [tc] Community Goals for Rocky

Here's a quick update before the weekend:

2 goals were proposed to governance:

Remove mox
https://review.openstack.org/#/c/532361/
Champion: Sean McGinnis (unless someone else steps up)

Ensure pagination links
https://review.openstack.org/#/c/532627/
Champion: Monty Taylor

2 more goals are about to be proposed:

Enable mutable configuration
Champion: ChangBo Guo

Cold upgrades capabilities
Champion: Masayuki Igawa


Thanks everyone for your participation,
We hope to make a vote within the next 2 weeks so we can prepare the
PTG accordingly.

On Tue, Jan 9, 2018 at 10:37 AM, Emilien Macchi  wrote:
> As promised, let's continue the discussion and move things forward.
>
> This morning Thierry brought the discussion during the TC office hour
> (that I couldn't attend due to timezone):
> 
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/latest.log.html#t2018-01-09T09:18:33
>
> Some outputs:
>
> - One goal has been proposed so far.
>
> Right now, we only have one goal proposal: Storyboard Migration. There
> are some concerns about the ability to achieve this goal in 6 months.
> At that point, we think it would be great to postpone the goal to S
> cycle, continue the progress (kudos to Kendall) and fine other goals
> for Rocky.
>
>
> - We still have a good backlog of goals, we're just missing champions.
>
> https://etherpad.openstack.org/p/community-goals
>
> Chris brought up "pagination links in collection resources" in api-wg
> guidelines theme. He said in the past this goal was more a "should"
> than a "must".
> Thierry mentioned privsep migration (done in Nova and Zun). (action,
> ping mikal about it).
> Thierry also brought up the version discovery (proposed by Monty).
> Flavio proposed mutable configuration, which might be very useful for 
operators.
> He also mentioned that IPv6 support goal shouldn't be that far from
> done, but we're currently lacking in CI jobs that test IPv6
> deployments (question for infra/QA, can we maybe document the gap so
> we can run some gate jobs on ipv6 ?)
> (personal note on that one, since TripleO & Puppet OpenStack CI
> already have IPv6 jobs, we can indeed be confident that it shouldn't
> be that hard to complete this goal in 6 months, I guess the work needs
> to happen in the projects layouts).
> Another interesting goal proposed by Thierry, also useful for
> operators, is to move more projects to assert:supports-upgrade tag.
> Thierry said we are probably not that far from this goal, but the
> major lack is in testing.
> Finally, another "simple" goal is to remove mox/mox3 (Flavio said most
> of projects don't use it anymore already).
>
> With that said, let's continue the discussion on these goals, see
> which ones can be actionable and find champions.
>
> - Flavio asked how would it be perceived if one cycle wouldn't have at
> least one community goal.
>
> Thierry said we could introduce multi-cycle goals (Storyboard might be
> a good candidate).
> Chris and Thierry thought that it would be a bad sign for our
> community to not have community goals during a cycle, "loss of
> momentum" eventually.
>
>
> Thanks for reading so far,
>
> On Fri, Dec 15, 2017 at 9:07 AM, Emilien Macchi  
wrote:
>> On Tue, Nov 28, 2017 at 2:22 PM, Emilien Macchi  
wrote:
>> [...]
>>> Suggestions are welcome:
>>> - on the mailing-list, in a new thread per goal [all] [tc] Proposing
>>> goal XYZ for Rocky
>>> - on Gerrit in openstack/governance like Kendall did.
>>
>> Just a fresh reminder about Rocky goals.
>> A few questions that we can ask 

Re: [Openstack-operators] Converting existing instances from virtio-blk to virtio-scsi

2018-01-11 Thread Tim Bell
BTW, this is also an end user visible change as the VMs would see the disk move 
from /dev/vda to /dev/sda. Depending on how the VMs are configured, this may 
cause issues also for the end user.
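
One way to make guests robust against that rename is to mount by UUID (or label)
inside the guest rather than by device name, e.g. an /etc/fstab entry along these
lines (values are illustrative):

UUID=0a1b2c3d-1234-5678-9abc-def012345678  /data  ext4  defaults  0 0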

Tim

From: Jean-Philippe Méthot 
Date: Thursday, 11 January 2018 at 08:37
To: openstack-operators 
Subject: [Openstack-operators] Converting existing instances from virtio-blk to 
virtio-scsi

Hi,

We currently have a private cloud running old instances using the virtio-blk 
driver and new instances using the virtio-scsi driver. We would like to convert 
all our existing instances to virtio-scsi but there doesn’t appear to be an 
official way to do this. Can I modify this in the openstack database? What 
parameters would I need to change? Is there an easier, less likely to break 
everything way?


Jean-Philippe Méthot
Openstack system administrator
Administrateur système Openstack
PlanetHoster inc.




___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [all] Switching to longer development cycles

2017-12-13 Thread Tim Bell
The forums would seem to provide a good opportunity for get-togethers during 
the release cycle. With these happening April/May and October/November, there 
could be a good chance for productive team discussions and opportunities to 
interact with the user/operator community.

There is a risk that deployment to production is delayed, and therefore 
feedback is delayed and the wait for the ‘initial bug fixes before we deploy to 
prod’ gets longer.

If there is consensus, I’d suggest to get feedback from openstack-operators on 
the idea. My initial suspicion is that it would be welcomed, especially by 
those running from distros, but there are many different perspectives.

Tim

From: Amy Marrich 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 13 December 2017 at 18:58
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [all] Switching to longer development cycles

I think Sean has made some really good points with the PTG setting things off 
in the start of the year and conversations carrying over to the Forums and 
their importance. And having a gap at the end of the year as Jay mentioned will 
give time for those still about to do finishing work if needed and if it's 
planned for in the individual projects they can have an earlier 'end' to allow 
for members not being around.

The one year release would help to get 'new' users to adopt a more recent 
release, even if it's the one from the year previously as there is the 
'confidence' that it's been around for a bit and been used by others in 
production. And if projects want to do incrementals they can, if I've read the 
thread correctly. Also those that want the latest will just use master anyways 
as some do currently.

With the move to a yearly cycle I agree with the 1 year cycle for PTLs, though 
if needed perhaps a way to have a co-PTL or a LT could be implemented to help 
with the longer duties?

My 2 cents from the peanut gallery:)

Amy (spotz)

On Wed, Dec 13, 2017 at 11:29 AM, Sean McGinnis 
> wrote:
On Wed, Dec 13, 2017 at 05:16:35PM +, Chris Jones wrote:
> Hey
>
> On 13 December 2017 at 17:12, Jimmy McArthur 
> > wrote:
>
> > Thierry Carrez wrote:
> >
> >> - It doesn't mean that teams can only meet in-person once a year.
> >> Summits would still provide a venue for team members to have an
> >> in-person meeting. I also expect a revival of the team-organized
> >> midcycles to replace the second PTG for teams that need or want to meet
> >> more often.
> >>
> > The PTG seems to allow greater coordination between groups. I worry that
> > going back to an optional mid-cycle would reduce this cross-collaboration,
> > while also reducing project face-to-face time.
>
>
> I can't speak for the Foundation, but I would think it would be good to
> have an official PTG in the middle of the cycle (perhaps neatly aligned
> with some kind of milestone/event) that lets people discuss plans for
> finishing off the release, and early work they want to get started on for
> the subsequent release). The problem with team-organised midcycles (as I'm
> sure everyone remembers), is that there's little/no opportunity for
> cross-project work.
>
> --
> Cheers,
>
> Chris
This was one of my concerns initially too. We may have to see how things go and
course correct once we have a little more data to go on. But the thought (or at
least the hope) was that we could get by with using the one PTG early in the
cycle to get alignment, then though IRC, the mailing list, and the Forums (keep
in mind there will be two Forums within the cycle) we would be able to keep
things going and discuss any cross project concerns.

This may actually get more emphasis on developers attending the Forum. I think
that is one part of our PTG/Design Summit split that has not fully settled the
way we had hoped. The Forum is still encouraged for developers to attend. But I
think the reality has been many companies now just see the Summit as a
marketing event and see no reason to send any developers.

I can say from the last couple Forum experiences, a lot of really good
discussions have happened there. It's really been unfortunate that there were a
lot of key people missing from some of those discussions though. Personally, my
hope with making this change would mean that the likelihood of devs being able
to justify going to the Forum increases.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Hello all Magnum project info

2017-12-09 Thread Tim Bell
There was an update on Magnum at the OpenStack Sydney summit 
(https://www.openstack.org/videos/sydney-2017/magnum-project-update).

We’re using it at CERN based off the RDO distribution with over 100 different 
clusters. We’re running Kubernetes 1.8.

Documentation for the CERN user is at 
http://clouddocs.web.cern.ch/clouddocs/containers/index.html

Tim

-Original Message-
From: Remo Mattei 
Date: Friday, 8 December 2017 at 18:27
To: OpenStack Mailing List 
Subject: [Openstack] Hello all Magnum project info

Hello all,
I was wondering if anyone has any updates on the Magnum project. I am
working with a team to have that implemented into the OpenStack Ocata,
maybe Pike, but looks like the version offered is 1.5 where Kube is
already at 1.7 or 1.8. The other issue is the build image which does not
seem to work, we found an older version of the image that works but it's
already old!! What's the best way to build a new one? Also looks like
Magnum did not get tested with SSL, so when I cluster is created, and it
creates the symlink the cert from Horizon does not get into the cluster,
what will be the best way to get that fixed?

Thanks, any suggestions tips are appreciated.

Remo

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack-operators] CaaS with magnum

2017-11-22 Thread Tim Bell
We use Magnum at CERN to provide Kubernetes, Mesos and Docker Swarm on demand. 
We’re running over 100 clusters currently using Atomic.

More details at 
https://cds.cern.ch/record/2258301/files/openstack-france-magnum.pdf

Tim

From: Sergio Morales Acuña 
Date: Wednesday, 22 November 2017 at 01:01
To: openstack-operators 
Subject: [Openstack-operators] CaaS with magnum

Hi.

I'm using Openstack Ocata and trying Magnum.

I encountered a lot of problems but I been able to solved many of them.

Now I'm curious about your experience with Magnum. Any success stories? What 
about more recent versions of k8s (1.7 or 1.8)? Which driver is, in your 
opinion, better: Atomic or CoreOS? Do I need to upgrade Magnum to follow K8S's 
crazy pace of change?

Any tips on the CaaS problem? Is Magnum in Ocata too old for this world?

Cheers
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] It's time...

2017-10-04 Thread Tim Bell
Tom,

All the best for the future. I will happily share a beverage or two in Sydney, 
reflect on the early days and toast the growth of the community that you have 
been a major contributor to.

Tim

-Original Message-
From: Tom Fifield 
Date: Wednesday, 4 October 2017 at 16:25
To: openstack-operators 
Subject: [Openstack-operators] It's time...

Hi all,

Tom here, on a personal note.

It's quite fitting that this November our summit is in Australia :)

I'm hoping to see you there because after being part of 15 releases, and 
travelling the equivalent of a couple of round trips to the moon to 
witness OpenStack grow around the world, the timing is right for me to 
step down as your Community Manager.

We've had an incredible journey together, culminating in the healthy 
community we have today. Across more than 160 countries, users and 
developers collaborate to make clouds better for the work that matters. 
The diversity of use is staggering, and the scale of resources being run 
is quite significant. We did that :)


Behind the scenes, I've spent the past couple of months preparing to 
transition various tasks to other members of the Foundation staff. If 
you see a new name behind an openstack.org email address, please give 
them due attention and care - they're all great people. I'll be around 
through to year end to shepherd the process, so please ping me if you 
are worried about anything.

Always remember, you are what makes OpenStack. OpenStack changes and 
thrives based on how you feel and what work you do. It's been a 
privilege to share the journey with you.



So, my plan? After a decade of diligent effort in organisations 
euphemistically described as "minimally-staffed", I'm looking forward to 
taking a decent holiday. Though, if you have a challenge interesting 
enough to wrest someone from a tropical beach or a misty mountain top ... ;)


There are a lot of you out there to whom I remain indebted. Stay in 
touch to make sure your owed drinks make it to you!

+886 988 33 1200
t...@tomfifield.net
https://www.linkedin.com/in/tomfifield
https://twitter.com/TomFifield


Regards,



Tom

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [heat] Removal of CloudWatch api

2017-10-04 Thread Tim Bell

Rabi,

I’d suggest to review the proposal with the openstack-operators list who would 
be able to advise on potential impact for their end users.

Tim

From: Rabi Mishra 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 4 October 2017 at 12:50
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [heat] Removal of CloudWatch api

Hi All,

As discussed in the last meeting, here is the ML thead to gather more feedback 
on this.

Background:

Heat support for AWS CloudWatch compatible API (a very minimalistic 
implementation, primarily used for metric data collection for autoscaling, 
before the telemetry services in OpenStack), has been deprecated since Havana 
cycle (may be before that?).  We now have a global alias[1] for 
AWS::CloudWatch::Alarm to use OS::Aodh::Alarm instead.  However, the ability to 
push metrics to ceilometer via heat, using a pre-signed url for CloudWatch api 
endpoint, is still supported for backward compatibility. 
heat-cfntools/cfn-push-stats tool is mainly used from the instances/vms for 
this.
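
For reference, the global alias in [1] amounts to a resource_registry mapping
roughly like this:

resource_registry:
  "AWS::CloudWatch::Alarm": "OS::Aodh::Alarm"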

What we plan to do?

We think that CloudWatch api  and related code base has been in heat tree 
without any change for the sole reason above and possibly it's time to remove 
them completely. However, we may not have an alternate way to continue 
providing backward compatibility to users.

What would be the impact?

- Users using AWS::CloudWatch::Alarm and pushing metric data from instances 
using cfn-push-stats would not be able to do so. Templates with these would not 
work any more.

- AWS::ElasticLoadBalancing::LoadBalancer[2] resource which uses 
AWS::CloudWatch::Alarm and cfn-push-stats would not work anymore. We probably 
have to remove this resource too?

Though it seems like a big change, the general opinion is that there would not 
be many users still using them and hence very little risk in removing 
CloudWatch support completely this cycle.

If you think otherwise please let us know:)


[1] 
https://git.openstack.org/cgit/openstack/heat/tree/etc/heat/environment.d/default.yaml#n6
[2] 
https://git.openstack.org/cgit/openstack/heat/tree/heat/engine/resources/aws/lb/loadbalancer.py#n640

Regards,
Rabi Mishra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [nova] Should we allow passing new user_data during rebuild?

2017-10-03 Thread Tim Bell
We use rebuild when reverting with snapshots. Keeping the same IP and hostname 
avoids some issues with Active Directory and Kerberos.
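
For example, roughly (names are made up):

openstack server image create --name pre-change-snap myserver
# ... later, revert in place while keeping the same IP and hostname:
openstack server rebuild --image pre-change-snap myserver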

Tim

-Original Message-
From: Clint Byrum 
Date: Tuesday, 3 October 2017 at 19:17
To: openstack-operators 
Subject: Re: [Openstack-operators] [nova] Should we allow passing new   
user_data during rebuild?


Excerpts from Matt Riedemann's message of 2017-10-03 10:53:44 -0500:
> We plan on deprecating personality files from the compute API in a new 
> microversion. The spec for that is here:
> 
> https://review.openstack.org/#/c/509013/
> 
> Today you can pass new personality files to inject during rebuild, and 
> at the PTG we said we'd allow passing new user_data to rebuild as a 
> replacement for the personality files.
> 
> However, if the only reason one would need to pass personality files 
> during rebuild is because we don't persist them during the initial 
> server create, do we really need to also allow passing user_data for 
> rebuild? The initial user_data is stored with the instance during 
> create, and re-used during rebuild, so do we need to allow updating it 
> during rebuild?
> 

My personal opinion is that rebuild is an anti-pattern for cloud, and
should be frozen and deprecated. It does nothing but complicate Nova
and present challenges for scaling.

That said, if it must stay as a feature, I don't think updating the
user_data should be a priority. At that point, you've basically created an
entirely new server, and you can already do that by creating an entirely
new server.

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [tc][docs][release] Updating the PTI for docs and tarballs

2017-09-30 Thread Tim Bell
Having a PDF (or similar offline copy) was also requested during the OpenStack UK 
Days event, during the executive Q&A with jbryce.

Tim

-Original Message-
From: Doug Hellmann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Saturday, 30 September 2017 at 17:44
To: openstack-dev 
Subject: Re: [openstack-dev] [tc][docs][release] Updating the PTI for docs  
and tarballs

Excerpts from Monty Taylor's message of 2017-09-30 10:20:08 -0500:
> Hey everybody,
> 
> Oh goodie, I can hear you say, let's definitely spend some time 
> bikeshedding about specific command invocations related to building docs 
> and tarballs!!!
> 
> tl;dr I want to change the PTI for docs and tarball building to be less 
> OpenStack-specific
> 
> The Problem
> ===
> 
> As we work on Zuul v3, there are a set of job definitions that are 
> rather fundamental that can totally be directly shared between Zuul 
> installations whether those Zuuls are working with OpenStack content or 
> not. As an example "tox -epy27" is a fairly standard thing, so a Zuul 
> job called "tox-py27" has no qualities specific to OpenStack and could 
> realistically be used by anyone who uses tox to manage their project.
> 
> Docs and Tarballs builds for us, however, are the following:
> 
> tox -evenv -- python setup.py sdist
> tox -evenv -- python setup.py build_sphinx
> 
> Neither of those are things that are likely to work outside of 
> OpenStack. (The 'venv' tox environment is not a default tox thing)
> 
> I'm going to argue neither of them are actually providing us with much 
> value.
> 
> Tarball Creation
> 
> 
> Tarball creation is super simple. setup_requires is already handled out 
> of band of everything else. Go clone nova into a completely empty system 
> and run python setup.py sdist ... and it works. (actually, nova is big. 
> use something smaller like gertty ...)
> 
> docker run -it --rm python bash -c 'git clone \
>   https://git.openstack.org/openstack/gertty && cd gertty \
>   && python setup.py sdist'
> 
> There is not much value in that tox wrapper - and it's not like it's 
> making it EASIER to run the command. In fact, it's more typing.
> 
> I propose we change the PTI from:
> 
>tox -evenv python setup.py sdist
> 
> to:
> 
>python setup.py sdist
> 
> and then change the gate jobs to use the non-tox form of the command.
> 
> I'd also like to further change it to be explicit that we also build 
> wheels. So the ACTUAL commands that the project should support are:
> 
>python setup.py sdist
>python setup.py bdist_wheel
> 
> All of our projects support this already, so this should be a no-op.
> 
> Notes:
> 
> * Python projects that need to build C extensions might need their pip 
> requirements (and bindep requirements) installed in order to run 
> bdist_wheel. We do not support that broadly at the moment ANYWAY - so 
> I'd like to leave that as an outlier and handle it when we need to 
> handle it.
> 
> * It's *possible* that somewhere we have a repo that has somehow done 
> something that would cause python setup.py sdist or python setup.py 
> bdist_wheel to not work without pip requirements installed. I believe we 
> should consider that a bug and fix it in the project if we find such a 
> thing - but since we use pbr in all of the OpenStack projects, I find it 
> extremely unlikely.
> 
> Governance patch submitted: https://review.openstack.org/508693
> 
> Sphinx Documentation
> 
> 
> Doc builds are more complex - but I think there is a high amount of 
> value in changing how we invoke them for a few reasons.
> 
> a) nobody uses 'tox -evenv -- python setup.py build_sphinx' but us
> b) we decided to use sphinx for go and javascript - but we invoke sphinx 
> differently for each of those (since they naturally don't have tox), 
> meaning we can't just have a "build-sphinx-docs" job and even share it 
> with ourselves.
> c) readthedocs.org is an excellent Open Source site that builds and 
> hosts sphinx docs for projects. They have an interface for docs 
> requirements documented and defined that we can align. By aligning, 
> projects can use migrate between docs.o.o and readthedocs.org and still 
> have a consistent experience.
> 
> The PTI I'd like to propose for this is more complex, so I'd like to 
> describe it in terms of:
> 
> - OpenStack organizational requirements
> - helper sugar for developers with per-language recommendations
> 

Re: [openstack-dev] [tc][masakari] new project teams application for Masakari

2017-09-01 Thread Tim Bell
Great to see efforts for this use case.

Is there community convergence that Masakari is the solution for VM 
high availability?

Tim

-Original Message-
From: Sam P 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, 1 September 2017 at 19:27
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [tc][masakari] new project teams application for   
Masakari

Hi All,

I have just proposed inclusion of Masakari[1] (Instances High Availability
Service) into list of official OpenStack projects in [2]. Regarding this
proposal, I would like to ask OpenStack community for what else can be 
improved
in the project to meet all the necessary requirements.

And I would like use this thread to extend the discussion about project
masakari. It would be great if you can post your comments/questions in [2] 
or in
this thread. I would be happy to discuss and answer to your questions.

I will be at PTG in Denver from 9/12 (Tuesday) to 9/14(Thursday). Other 
Masakari
team members also will be there at PTG. We are happy to discuss anything
regarding to Masakari in PTG.
Please contact us via freenode IRC @ #openstack-masakari, or openstack-dev 
ML
with prefix [masakari].

Thank you.

[1] https://wiki.openstack.org/wiki/Masakari
[2] https://review.openstack.org/#/c/500118/

--- Regards,
Sampath

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][tc][all] Trove restart - next steps

2017-08-16 Thread Tim Bell

Thanks for the info.

Can you give a summary of the reasons why this was not a viable approach?

Tim

From: Amrith Kumar <amrith.ku...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, 15 August 2017 at 23:09
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [trove][tc][all] Trove restart - next steps

Tim,
This is an idea that was discussed at a trove midcycle a long time back (Juno 
midcycle, 2014). It came up briefly in the Kilo midcycle as well but was 
quickly rejected again.
I've added it to the list of topics for discussion at the PTG. If others want 
to add topics to that list, the etherpad is at 
https://etherpad.openstack.org/p/trove-queens-ptg

Thanks!

-amrith


On Tue, Aug 15, 2017 at 12:43 PM, Tim Bell 
<tim.b...@cern.ch<mailto:tim.b...@cern.ch>> wrote:
One idea I found interesting from the past discussion was the approach that the 
user need is a database with a connection string.

How feasible is the approach where we are provisioning access to a multi-tenant 
database infrastructure rather than deploying a VM with storage and installing 
a database?

This would make the service delivery (monitoring, backup, upgrades) in the 
responsibility of the cloud provider rather than the end user. Some 
quota/telemetry would be needed to allocate costs to the project.

Tim

From: Amrith Kumar <amrith.ku...@gmail.com<mailto:amrith.ku...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, 15 August 2017 at 17:44
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [trove][tc][all] Trove restart - next steps

Now that we have successfully navigated the Pike release and branched
the tree, I would like to restart the conversation about how to revive
and restart the Trove project.

Feedback from the last go around on this subject[1] resulted in a
lively discussion which I summarized in [2]. The very quick summary is
this, there is interest in Trove, there is a strong desire to maintain
a migration path, there is much that remains to be done to get there.

What didn't come out of the email discussion was any concrete and
tangible uptick in the participation in the project, promises
notwithstanding.

There have however been some new contributors who have been submitting
patches and to help channel their efforts, and any additional
assistance that we may receive, I have created the (below) list of
priorities for the project. These will also be the subject of
discussion at the PTG in Denver.

   - Fix the gate

   - Update currently failing jobs, create xenial based images
   - Fix gate jobs that have gone stale (non-voting, no one paying
 attention)

   - Bug triage

   - Bugs in launchpad are really out of date, assignments to
 people who are no longer active, bugs that are really support
 requests, etc.,
   - Prioritize fixes for Queens and beyond

   - Get more active reviewers

   - There seems to still be a belief that 'contributing' means
 'fixing bugs'. There is much more value in actually doing
 reviews.
   - Get at least a three member active core review team by the
 end of the year.

   - Complete Python 3 support

  - Currently not complete; especially on the guest side

   - Community Goal, migrate to oslo.policy

   - Anything related to new features

This is clearly an opinionated list, and is open to change but I'd
like to do that based on the Agile 'stand up' meeting rules. You know, the 
chicken and pigs thing :)

So, if you'd like to get on board, offer suggestions to change this
list, and then go on to actually implement those changes, c'mon over.
-amrith



[1] http://openstack.markmail.org/thread/wokk73ecv44ipfjz
[2] http://markmail.org/message/gfqext34xh5y37ir

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[Openstack-operators] Pacemaker / Corosync in guests on OpenStack

2017-08-16 Thread Tim Bell

Has anyone had experience setting up a cluster of VM guests running Pacemaker / 
Corosync? Any recommendations?

Tim

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [trove][tc][all] Trove restart - next steps

2017-08-15 Thread Tim Bell
One idea I found interesting from the past discussion was the approach that the 
user need is a database with a connection string.

How feasible is the approach where we are provisioning access to a multi-tenant 
database infrastructure rather than deploying a VM with storage and installing 
a database?

This would make the service delivery (monitoring, backup, upgrades) in the 
responsibility of the cloud provider rather than the end user. Some 
quota/telemetry would be needed to allocate costs to the project.
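
In other words, the user would simply be handed something like (values are
illustrative):

mysql://appuser:secret@dbservice.example.org:3306/appdb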

Tim

From: Amrith Kumar 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, 15 August 2017 at 17:44
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [trove][tc][all] Trove restart - next steps

Now that we have successfully navigated the Pike release and branched
the tree, I would like to restart the conversation about how to revive
and restart the Trove project.

Feedback from the last go around on this subject[1] resulted in a
lively discussion which I summarized in [2]. The very quick summary is
this, there is interest in Trove, there is a strong desire to maintain
a migration path, there is much that remains to be done to get there.

What didn't come out of the email discussion was any concrete and
tangible uptick in the participation in the project, promises
notwithstanding.

There have however been some new contributors who have been submitting
patches and to help channel their efforts, and any additional
assistance that we may receive, I have created the (below) list of
priorities for the project. These will also be the subject of
discussion at the PTG in Denver.

   - Fix the gate

   - Update currently failing jobs, create xenial based images
   - Fix gate jobs that have gone stale (non-voting, no one paying
 attention)

   - Bug triage

   - Bugs in launchpad are really out of date, assignments to
 people who are no longer active, bugs that are really support
 requests, etc.,
   - Prioritize fixes for Queens and beyond

   - Get more active reviewers

   - There seems to still be a belief that 'contributing' means
 'fixing bugs'. There is much more value in actually doing
 reviews.
   - Get at least a three member active core review team by the
 end of the year.

   - Complete Python 3 support

  - Currently not complete; especially on the guest side

   - Community Goal, migrate to oslo.policy

   - Anything related to new features

This is clearly an opinionated list, and is open to change but I'd
like to do that based on the Agile 'stand up' meeting rules. You know, the 
chicken and pigs thing :)

So, if you'd like to get on board, offer suggestions to change this
list, and then go on to actually implement those changes, c'mon over.
-amrith



[1] http://openstack.markmail.org/thread/wokk73ecv44ipfjz
[2] http://markmail.org/message/gfqext34xh5y37ir

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [openstack-dev] [QA][LCOO] MEX-ops-meetup: OpenStack Extreme Testing

2017-08-14 Thread Tim Bell
+1 for Boris’ suggestion. Many of us use Rally to probe our clouds and have 
significant tooling behind it to integrate with local availability reporting 
and trouble ticketing systems. It would be much easier to deploy new 
functionality such as you propose if it was integrated into an existing project 
framework (such as Rally).

Tim

From: Boris Pavlovic 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, 14 August 2017 at 12:57
To: "OpenStack Development Mailing List (not for usage questions)" 

Cc: openstack-operators 
Subject: Re: [openstack-dev] [QA][LCOO] MEX-ops-meetup: OpenStack Extreme 
Testing

Sam,

Seems like a good plan and huge topic ;)

I would also suggest taking a look at similar efforts in OpenStack:
- Failure injection: https://github.com/openstack/os-faults
- Rally hooks mechanism (to inject failures into Rally scenarios): 
https://rally.readthedocs.io/en/latest/plugins/implementation/hook_and_trigger_plugins.html


Best regards,
Boris Pavlovic


On Mon, Aug 14, 2017 at 2:35 AM, Sam P 
> wrote:
Hi All,

This is a follow up for OpenStack Extreme Testing session[1]
we did in MEX-ops-meetup.

Quick intro for those who were not there:
In this work, we proposed adding a new testing framework for OpenStack.
This framework will provide tools for creating tests with destructive
scenarios which check the high availability, failover and
recovery of an OpenStack cloud.
Please refer the link on top of the [1] for further details.

Follow up:
We are planning a periodic IRC meeting and an IRC
channel for discussion. I will get back to you with those details soon.

At that session, we did not have time to discuss last 3 items,
Reference architectures
 We are discussing about the reference architecture in [2].

What sort of failures do you see today in your environment?
 Currently we are considering service failures, backend service (MQ,
DB, etc.) failures, network switch failures, etc. To begin the
implementation, we are considering starting with service failures.
Please let us know which failures are most frequent in your environment.

Emulation/Simulation mechanisms, etc.
 Rather than doing actual scale, load, or performance tests, we are
thinking of building an emulation/simulation mechanism to predict how
OpenStack will behave in such situations.
This interesting idea was proposed by Gautam and needs more discussion.

Please let us know you questions or comments.

Request to Mike Perez:
 We discussed synergies with OpenStack assertion tags and other
efforts to do similar testing in OpenStack.
 Could you please give some info or pointers to previous discussions?

[1] https://etherpad.openstack.org/p/MEX-ops-extreme-testing
[2] 
https://openstack-lcoo.atlassian.net/wiki/spaces/LCOO/pages/15477787/Extreme+Testing-Vision+Arch

--- Regards,
Sampath

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [QA][LCOO] MEX-ops-meetup: OpenStack Extreme Testing

2017-08-14 Thread Tim Bell
+1 for Boris’ suggestion. Many of us use Rally to probe our clouds and have 
significant tooling behind it to integrate with local availability reporting 
and trouble ticketing systems. It would be much easier to deploy new 
functionality such as you propose if it was integrated into an existing project 
framework (such as Rally).

Tim

From: Boris Pavlovic 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, 14 August 2017 at 12:57
To: "OpenStack Development Mailing List (not for usage questions)" 

Cc: openstack-operators 
Subject: Re: [openstack-dev] [QA][LCOO] MEX-ops-meetup: OpenStack Extreme 
Testing

Sam,

Seems like a good plan and huge topic ;)

I would also suggest taking a look at similar efforts in OpenStack:
- Failure injection: https://github.com/openstack/os-faults
- Rally hooks mechanism (to inject failures into Rally scenarios): 
https://rally.readthedocs.io/en/latest/plugins/implementation/hook_and_trigger_plugins.html


Best regards,
Boris Pavlovic


On Mon, Aug 14, 2017 at 2:35 AM, Sam P 
> wrote:
Hi All,

This is a follow up for OpenStack Extreme Testing session[1]
we did in MEX-ops-meetup.

Quick intro for those who were not there:
In this work, we proposed adding a new testing framework for OpenStack.
This framework will provide tools for creating tests with destructive
scenarios which check the high availability, failover and
recovery of an OpenStack cloud.
Please refer the link on top of the [1] for further details.

Follow up:
We are planning a periodic IRC meeting and an IRC
channel for discussion. I will get back to you with those details soon.

At that session, we did not have time to discuss last 3 items,
Reference architectures
 We are discussing about the reference architecture in [2].

What sort of failures do you see today in your environment?
 Currently we are considering service failures, backend service (MQ,
DB, etc.) failures, network switch failures, etc. To begin the
implementation, we are considering starting with service failures.
Please let us know which failures are most frequent in your environment.

Emulation/Simulation mechanisms, etc.
 Rather than doing actual scale, load, or performance tests, we are
thinking of building an emulation/simulation mechanism to predict how
OpenStack will behave in such situations.
This interesting idea was proposed by Gautam and needs more discussion.

Please let us know you questions or comments.

Request to Mike Perez:
 We discussed synergies with OpenStack assertion tags and other
efforts to do similar testing in OpenStack.
 Could you please give some info or pointers to previous discussions?

[1] https://etherpad.openstack.org/p/MEX-ops-extreme-testing
[2] 
https://openstack-lcoo.atlassian.net/wiki/spaces/LCOO/pages/15477787/Extreme+Testing-Vision+Arch

--- Regards,
Sampath

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] How to deal with confusion around "hosted projects"

2017-06-29 Thread Tim Bell

> On 29 Jun 2017, at 17:35, Chris Friesen  wrote:
> 
> On 06/29/2017 09:23 AM, Monty Taylor wrote:
> 
>> We are already WELL past where we can solve the problem you are describing.
>> Pandora's box has been opened - we have defined ourselves as an Open 
>> community.
>> Our only requirement to be official is that you behave as one of us. There is
>> nothing stopping those machine learning projects from becoming official. If 
>> they
>> did become official but were still bad software - what would we have solved?
>> 
>> We have a long-time official project that currently has staffing problems. If
>> someone Googles for OpenStack DBaaS and finds Trove and then looks to see 
>> that
>> the contribution rate has fallen off recently they could get the impression 
>> that
>> OpenStack is a bunch of dead crap.
>> 
>> Inclusion as an Official Project in OpenStack is not an indication that 
>> anyone
>> thinks the project is good quality. That's a decision we actively made. This 
>> is
>> the result.
> 
> I wonder if it would be useful to have a separate orthogonal status as to 
> "level of stability/usefulness/maturity/quality" to help newcomers weed out 
> projects that are under TC governance but are not ready for prime time.
> 

There is certainly a concern in the operator community as to how viable/useful 
a project is (and how to determine this). Adopting too early makes for a very 
difficult discussion with cloud users who rely on the function. 

Can an ‘official’ project be deprecated? The economics say yes. The consumer 
confidence impact would be substantial.

However, home-grown solutions where there is common interest imply technical 
debt in the long term.

Tim

> Chris
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] OpenStack User Survey Now Open

2017-06-28 Thread Tim Bell
Allison,

Great to see it in so many languages (although an incorrect flag seems to have 
been used for entering English ☺).

When I get to the deployments, I’m currently registered with 0. In the past, 
there was some ‘carry forward’ from previous surveys. I’m fine with putting the data 
in again if this was needed for data model changes, but I was not sure whether it 
was a deliberate decision or a bug. I’ve tried with Chrome and Safari and got 
the same results.

Tim

From: Allison Price 
Date: Monday, 26 June 2017 at 23:44
To: openstack-operators 
Subject: [Openstack-operators] OpenStack User Survey Now Open

Hi everyone,

If you’re running OpenStack, please participate in the OpenStack User 
Survey. If you have already completed the 
survey before, you can simply login to update your deployment details. Please 
note that if your survey response has not been updated in 12 months, it will 
expire, so we encourage you to take this time to update your existing profile 
so your deployment can be included in the upcoming analysis.

As a member of our community, please help us spread the word. We're trying to 
gather as much real-world deployment data as possible to share back with you. 
We have made it easier to complete, and the survey is now available in 7 
languages—English, German, Indonesian, Japanese, Korean, traditional Chinese 
and simplified Chinese.

The information provided is confidential and will only be presented in 
aggregate unless you consent to making it public.

The deadline to complete the survey and be part of the next report is Friday, 
August 11 at 23:59 UTC.


· You can login and complete the OpenStack User Survey here: 
http://www.openstack.org/user-survey
· If you’re interested in joining the OpenStack User Survey Working 
Group to help with the survey analysis, please complete this form: 
https://openstackfoundation.formstack.com/forms/user_survey_working_group
· Help us promote the User Survey: 
https://twitter.com/OpenStack/status/879434563134652416

Please let me know if you have any questions.

Cheers,
Allison


Allison Price
OpenStack Foundation
alli...@openstack.org

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-15 Thread Tim Bell
And since Electrons are neither waves nor particles, it is difficult to pin them 
down (https://en.wikipedia.org/wiki/Wave%E2%80%93particle_duality).

Tim

-Original Message-
From: Sean McGinnis <sean.mcgin...@gmx.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Thursday, 15 June 2017 at 18:36
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [all][tc] Moving away from "big tent"  
terminology

    On Thu, Jun 15, 2017 at 03:41:30PM +, Tim Bell wrote:
> OpenStack Nucleus and OpenStack Electrons?
> 
> Tim
> 

Hah, love it!


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-15 Thread Tim Bell
OpenStack Nucleus and OpenStack Electrons?

Tim

-Original Message-
From: Thierry Carrez 
Organization: OpenStack
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 15 June 2017 at 14:57
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [all][tc] Moving away from "big tent"  
terminology

Sean Dague wrote:
> [...]
> I think those are all fine. The other term that popped into my head was
> "Friends of OpenStack" as a way to describe the openstack-hosted efforts
> that aren't official projects. It may be too informal, but I do think
> the OpenStack-Hosted vs. OpenStack might still mix up in people's head.

My original thinking was to call them "hosted projects" or "host
projects", but then it felt a bit incomplete. I kinda like the "Friends
of OpenStack" name, although it seems to imply some kind of vetting that
we don't actually do.

An alternative would be to give "the OpenStack project infrastructure"
some kind of a brand name (say, "Opium", for OpenStack project
infrastructure ultimate madness) and then call the hosted projects
"Opium projects". Rename the Infra team to Opium team, and voilà!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] revised Postgresql deprecation patch for governance

2017-05-23 Thread Tim Bell
Thanks. It’s more of a question of not leaving people high and dry when they 
have made a reasonable choice in the past based on the choices supported at the 
time.

Tim

On 23.05.17, 21:14, "Sean Dague" <s...@dague.net> wrote:

On 05/23/2017 02:35 PM, Tim Bell wrote:
> Is there a proposal where deployments that chose Postgres in good faith 
can find a migration path to a MySQL-based solution?

Yes, a migration tool exploration is action #2 in the current proposal.

Also, to be clear, we're not at the stage of removing anything at this
point. We're mostly just signaling to people where the nice paved road
is, and where the gravel road is. It's like the signs in the spring
 on the road where frost heaves are (at least in the North East US).

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] revised Postgresql deprecation patch for governance

2017-05-23 Thread Tim Bell
Is there a proposal where deployments that chose Postgres in good faith can find 
a migration path to a MySQL-based solution?

Tim

On 23.05.17, 18:35, "Octave J. Orgeron"  wrote:

As OpenStack has evolved and grown, we are ending up with more and more 
MySQL-isms in the code. I'd love to see OpenStack support every database 
out there, but that is becoming more and more difficult. I've tried to 
get OpenStack to work with other databases like Oracle DB, MongoDB, 
TimesTen, NoSQL, and I can tell you that first hand it's not doable 
without making some significant changes. Some services would be easy to 
make more database agnostic, but most would require a lot of reworking. 
I think the pragmatic thing is to do is focus on supporting the MySQL 
dialect with the different engines and clustering technologies that have 
emerged. oslo_db is a great abstraction layer.  We should continue to 
build upon that and make sure that every OpenStack service uses it 
end-to-end. I've already seen plenty of cases where services like 
Barbican and Murano are not using it. I've also seen plenty of use cases 
where core services are using the older methods of connecting to the 
database and re-inventing the wheel to deal with things like retries. 
The more we use oslo_db and make sure that people are consistent with 
its use and best practices, the better off we'll be in the long run.

On the topic of doing live upgrades. I think it's a "nice to have" 
feature, but again we need a consistent framework that all services will 
follow. It's already complicated enough with how different services deal 
with parallelism and locking. So if we are going to go down this path 
across even the core services, we need to have a solid solution and 
framework. Otherwise, we'll end up with a hodgepodge of maturity levels 
between services. The expectation from operators is that if you say you 
can do live upgrades, they will expect that to be the case across all of 
OpenStack and not a buffet style feature. We would also have to take 
into consideration larger shops that have more distributed and 
scaled-out control planes. So we need be careful on this as it will have 
a wide impact on development, testing, and operating.

Octave


On 5/23/2017 6:00 AM, Sean Dague wrote:
> On 05/22/2017 11:26 PM, Matt Riedemann wrote:
>> On 5/22/2017 10:58 AM, Sean Dague wrote:
>>> I think these are actually compatible concerns. The current proposal to
>>> me actually tries to address A1 & B1, with a hint about why A2 is
>>> valuable and we would want to do that.
>>>
>>> It feels like there would be a valuable follow on in which A2 & B2 were
>>> addressed which is basically "progressive enhancements can be allowed to
>>> only work with MySQL based backends". Which is the bit that Monty has
>>> been pushing for in other threads.
>>>
>>> This feels like what a Tier 2 support looks like. A basic SQLA and pray
>>> so that if you live behind SQLA you are probably fine (though not
>>> tested), and then test and advanced feature roll out on a single
>>> platform. Any of that work might port to other platforms over time, but
>>> we don't want to make that table stakes for enhancements.
>> I think this is reasonable and is what I've been hoping for as a result
>> of the feedback on this.
>>
>> I think it's totally fine to say tier 1 backends get shiny new features.
>> I mean, hell, compare the libvirt driver in nova to all other virt
>> drivers in nova. New features are written for the libvirt driver and we
>> have to strong-arm them into other drivers for a compatibility story.
>>
>> I think we should turn on postgresql as a backend in one of the CI jobs,
>> as I've noted in the governance change - it could be the nova-next
>> non-voting job which only runs on nova, but we should have something
>> testing this as long as it's around, especially given how easy it is to
>> turn this on in upstream CI (it's flipping a devstack variable).
> Postgresql support shouldn't be in devstack. If we're taking a tier 2
> approach, someone needs to carve out database plugins from devstack and
> pg would be one (as could be galera, etc).
>
> This historical artifact that pg was maintained in devstack, but much
> more widely used backends were not, is part of the issue.
>
> It would also be a good unit test case as to whether there are pg
> focused folks around out there willing to do this basic devstack plugin
> / job setup work.
>
>   -Sean
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [Openstack-operators] DB deadlocks due to connection string

2017-05-23 Thread Tim Bell
One scenario would be to change the default and allow the exceptions to opt out 
(e.g. mysql -> pymysql).

Tim
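
As a rough illustration of the redirect discussed in the quoted thread below, 
here is a minimal sketch, assuming SQLAlchemy-style connection URLs (as oslo.db 
uses); the helper name is made up for the example:

    # Illustrative only: rewrite legacy mysql:// URLs onto the PyMySQL driver.
    def redirect_to_pymysql(connection):
        """Force bare mysql:// URLs to the mysql+pymysql:// dialect."""
        if connection.startswith('mysql://'):
            return 'mysql+pymysql://' + connection[len('mysql://'):]
        return connection

    print(redirect_to_pymysql('mysql://nova:secret@dbnode/nova'))
    # -> mysql+pymysql://nova:secret@dbnode/nova

An opt-out along the lines suggested above could then be a configuration flag 
that skips the rewrite for deployments that genuinely need the old MySQL-Python 
driver.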

On 23.05.17, 19:08, "Matt Riedemann"  wrote:

On 5/23/2017 11:38 AM, Sean McGinnis wrote:
>>
>> This sounds like something we could fix completely by dropping the
>> use of the offending library. I know there was a lot of work done
>> to get pymysql support in place. It seems like we can finish that by
>> removing support for the old library and redirecting mysql://
>> connections to use pymysql.
>>
>> Doug
>>
> 
> I think that may be ideal. If there are known issues with the library,
> and we have a different and well tested alternative that we know works,
> it's probably easier all around to just redirect internally to use
> pymysql.
> 
> The one thing I don't know is if there are any valid reasons for someone
> wanting to use mysql over pymysql.
> 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 

The mysql library doesn't support python 3 and doesn't support eventlet, 
as far as I remember, which is why there was the push to adopt pymysql. 
But it's been years now so I can't remember exactly. I think Rackspace 
was still using the mysql backend for public cloud because of some 
straight to sql execution stuff they were doing for costly DB APIs [1] 
but I'd think that could be ported.

Anyway, +1 to dropping the mysql library and just rely on pymysql.

[1] https://review.openstack.org/#/c/243822/

-- 

Thanks,

Matt

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [neutron] multi-site forum discussion

2017-05-14 Thread Tim Bell

On 12 May 2017, at 23:38, joehuang 
> wrote:

Hello,

Making Neutron cells-aware is not the same as multi-site. There are lots of multi-site 
deployment options, not limited to nova-cells; whether to use 
Neutron-cells/Nova-cells in multi-site deployments is up to the cloud operator's 
choice. For the bug [3], it's reasonable to make neutron support cells, but it 
doesn't imply that multi-site deployments must adopt neutron-cells.

[3] https://bugs.launchpad.net/neutron/+bug/1690425


There are also a number of site limited deployments which use nova cells to 
support scalability within the site rather than only between sites. CERN has 
around 50 cells for the 2 data centre deployment we have.

There is also no need to guarantee a 1-to-1 mapping between nova cells and 
neutron cells. It may be simpler to do it that way but something based on the 
ML2 subnet would also seem a reasonable way to organise the neutron work while 
many sites use nova cells based on one hardware type per cell, for example.

Tim
Best Regards
Chaoyi Huang (joehuang)

From: Armando M. [arma...@gmail.com]
Sent: 13 May 2017 3:13
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] multi-site forum discussion



On 12 May 2017 at 11:47, Morales, Victor 
> wrote:
Armando,

I noticed that Tricircle is mentioned there. Wouldn't it be better to extend its 
current functionality, or what are the things that are missing there?

Tricircle aims at coordinating independent neutron systems that exist in 
separate OpenStack deployments. Making Neutron cell-aware will work in the 
context of the same OpenStack deployment.


Regards,
Victor Morales

From: "Armando M." >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Friday, May 12, 2017 at 1:06 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] [neutron] multi-site forum discussion

Hi folks,

At the summit we had a discussion on how to deploy a single neutron system 
across multiple geographical sites [1]. You can find notes of the discussion on 
[2].

One key requirement that came from the discussion was to make Neutron more Nova 
cells friendly. I filed an RFE bug [3] so that we can move this forward on 
Lauchpad.

Please, do provide feedback in case I omitted some other key takeaway.

Thanks,
Armando

[1] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18757/neutron-multi-site
[2] https://etherpad.openstack.org/p/pike-neutron-multi-site
[3] https://bugs.launchpad.net/neutron/+bug/1690425

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-14 Thread Tim Bell

> On 14 May 2017, at 13:04, Sean Dague  wrote:
> 
> One of the things that came up in a logging Forum session is how much effort 
> operators are having to put into reconstructing flows for things like server 
> boot when they go wrong, as every time we jump a service barrier the 
> request-id is reset to something new. The back and forth between Nova / 
> Neutron and Nova / Glance would be definitely well served by this. Especially 
> if this is something that's easy to query in elastic search.
> 
> The last time this came up, some people were concerned about trusting a 
> request-id on the wire because it's coming from random users. We're going to 
> assume that's still a concern for some. However, since the last time that came 
> up, we've introduced the concept of "service users", which are a set of 
> higher-priv services that we are using to wrap user requests between services 
> so that long-running request chains (like image snapshot) can complete. We 
> trust these service users enough to keep on trucking even after the user token 
> has expired for these long-running operations. We could use this 
> same trust path for request-id chaining.
> 
> So, the basic idea is, services will optionally take an inbound 
> X-OpenStack-Request-ID which will be strongly validated to the format 
> (req-$uuid). They will continue to always generate one as well. When the 
> context is built (which is typically about 3 more steps down the paste 
> pipeline), we'll check that the service user was involved, and if not, reset 
> the request_id to the local generated one. We'll log both the global and 
> local request ids. All of these changes happen in oslo.middleware, 
> oslo.context, oslo.log, and most projects won't need anything to get this 
> infrastructure.
> 
> The python clients, and callers, will then need to be augmented to pass the 
> request-id in on requests. Servers will effectively decide when they want to 
> opt into calling other services this way.
> 
> This only ends up logging the top line global request id as well as the last 
> leaf for each call. This does mean that full tree construction will take more 
> work if you are bouncing through 3 or more servers, but it's a step which I 
> think can be completed this cycle.
> 
> I've got some more detailed notes, but before going through the process of 
> putting this into an oslo spec I wanted more general feedback on it so that 
> any objections we didn't think about yet can be raised before going through 
> the detailed design.

This is very consistent with what I had understood during the forum session. 
Having a single request id across multiple services as the end user operation 
is performed would be a great help in operations, where we are often using a 
solution like ElasticSearch/Kibana to show logs and interactively query the 
timing and results of a given request id. It would also improve traceability 
during investigations where we are aiming to determine who the initial 
requesting user.

Tim
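
To make the proposed flow concrete, here is a minimal sketch, not the actual 
oslo.middleware/oslo.context code: accept the inbound X-OpenStack-Request-ID 
only if it matches the req-$uuid format and the caller is a trusted service 
user, otherwise fall back to the locally generated id. The is_service_user flag 
stands in for the check done once the context is built.

    # Illustrative sketch of inbound request-id handling (not oslo code).
    import re
    import uuid

    REQUEST_ID_RE = re.compile(
        r'^req-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$')

    def generate_request_id():
        return 'req-' + str(uuid.uuid4())

    def resolve_request_ids(headers, is_service_user):
        """Return (global_request_id, local_request_id) to log."""
        local = generate_request_id()              # always generate our own
        inbound = headers.get('X-OpenStack-Request-ID', '')
        if is_service_user and REQUEST_ID_RE.match(inbound):
            return inbound, local                  # keep the chained global id
        return local, local                        # reset to the local one

    print(resolve_request_ids(
        {'X-OpenStack-Request-ID': generate_request_id()}, True))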

> 
>   -Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Deployment for production

2017-05-03 Thread Tim Bell
We use packstack when we need an all-in-one environment such as people wanting 
to try out some new ideas 
(http://clouddocs.web.cern.ch/clouddocs/advanced_topics/installing_your_own_openstack.html).

Puppet gives us lots of additional abilities to customise parts of the cloud 
for different purposes such as compute optimised. Details are at 
http://openstack-in-production.blogspot.fr

Managing 220K cores with packstack would be adventurous.

Tim

On 03.05.17, 18:25, "Satish Patel" <satish@gmail.com> wrote:

So at CERN you are not using packstack. You are using RDO and have done
your own puppet work to deploy stuff, right?

I found packstack very easy and handy to deploy stuff, so why do people hate
it? What could go wrong if we deploy using packstack?

On Wed, May 3, 2017 at 12:18 PM, Tim Bell <tim.b...@cern.ch> wrote:
> You also need to assess the skills of your administration team, will they 
need help to set things up, consulting, formal support contract etc.
>
> At CERN, we’re running packages and puppet configuration derived from RDO 
in production.
>
> Tim
>
> On 03.05.17, 17:56, "Satish Patel" <satish@gmail.com> wrote:
>
> Problem is there are many tools available but hard to pick which one
> is reliable and provide long term community support, also we are
> looking something we can easily deploy compute node, upgrade software
> time to time without breaking any code etc.
>
> On Wed, May 3, 2017 at 5:41 AM, Christian Berendt
> <bere...@betacloud-solutions.de> wrote:
> > Hello Satish.
> >
> > You have to differentiate.
> >
> > You probably used the packages provided by the RDO project for 
CentOS/RedHat to deploy an OpenStack environment by hand (maybe using the 
official OpenStack install guide). This way is not recommended for a 
production. The packages provided itself are production ready, the way you have 
deployed them is not production ready.
> >
> > For a production you want to use one of the existing deployment 
frameworks or a distribution provided by a vendor (normally based on one of the 
existing deployment frameworks). Some of the deployment frameworks are able to 
use the packages provided by the RDO project.
> >
> > As a core member of the Kolla project I recommend to use Kolla 
(https://github.com/openstack/kolla). Our product is based on Kolla.
> >
> > There are other deployment frameworks as well: Fuel, OpenStack 
Ansible, OpenStack Chef, OpenStack Puppet, TripleO.
> >
> > The “best method” depends on the person you ask for the best method.
> >
> > If you need further details about Kolla drop me line line, I am 
happy to help you with this.
> >
> > Christian.
> >
> >> On 3. May 2017, at 08:48, Satish Patel <satish@gmail.com> 
wrote:
> >>
> >> We did POC on RDO and we are happy with product but now question 
is, should we use RDO for production deployment or other open source flavor 
available to deploy on prod. Not sure what is the best method of production 
deployment?
> >
> > --
> > Christian Berendt
> > Chief Executive Officer (CEO)
> >
> > Telefon: +49 711 21957003
> > Mobil: +49 171 5542175
> > Mail: bere...@betacloud-solutions.de
> > Web: https://www.betacloud-solutions.de
> >
> > Betacloud Solutions GmbH
> > Teckstrasse 62 / 70190 Stuttgart / Deutschland
> >
> > Geschäftsführer: Christian Berendt
> > Unternehmenssitz: Stuttgart
> > Amtsgericht: Stuttgart, HRB 756139
> >
>
> ___
> Mailing list: 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to : openstack@lists.openstack.org
> Unsubscribe : 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Deployment for production

2017-05-03 Thread Tim Bell
You also need to assess the skills of your administration team, will they need 
help to set things up, consulting, formal support contract etc.

At CERN, we’re running packages and puppet configuration derived from RDO in 
production.

Tim

On 03.05.17, 17:56, "Satish Patel"  wrote:

Problem is there are many tools available but hard to pick which one
is reliable and provide long term community support, also we are
looking something we can easily deploy compute node, upgrade software
time to time without breaking any code etc.

On Wed, May 3, 2017 at 5:41 AM, Christian Berendt
 wrote:
> Hello Satish.
>
> You have to differentiate.
>
> You probably used the packages provided by the RDO project for 
CentOS/RedHat to deploy an OpenStack environment by hand (maybe using the 
official OpenStack install guide). This way is not recommended for a 
production. The packages provided itself are production ready, the way you have 
deployed them is not production ready.
>
> For a production you want to use one of the existing deployment 
frameworks or a distribution provided by a vendor (normally based on one of the 
existing deployment frameworks). Some of the deployment frameworks are able to 
use the packages provided by the RDO project.
>
> As a core member of the Kolla project I recommend to use Kolla 
(https://github.com/openstack/kolla). Our product is based on Kolla.
>
> There are other deployment frameworks as well: Fuel, OpenStack Ansible, 
OpenStack Chef, OpenStack Puppet, TripleO.
>
> The “best method” depends on the person you ask for the best method.
>
> If you need further details about Kolla drop me line line, I am happy to 
help you with this.
>
> Christian.
>
>> On 3. May 2017, at 08:48, Satish Patel  wrote:
>>
>> We did POC on RDO and we are happy with product but now question is, 
should we use RDO for production deployment or other open source flavor 
available to deploy on prod. Not sure what is the best method of production 
deployment?
>
> --
> Christian Berendt
> Chief Executive Officer (CEO)
>
> Telefon: +49 711 21957003
> Mobil: +49 171 5542175
> Mail: bere...@betacloud-solutions.de
> Web: https://www.betacloud-solutions.de
>
> Betacloud Solutions GmbH
> Teckstrasse 62 / 70190 Stuttgart / Deutschland
>
> Geschäftsführer: Christian Berendt
> Unternehmenssitz: Stuttgart
> Amtsgericht: Stuttgart, HRB 756139
>

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[Openstack-operators] [LDT] Scheduling of 'plan the week' at the summit

2017-04-27 Thread Tim Bell

The Large Deployment Team meeting for ‘Plan the Week’ 
(https://www.openstack.org/summit/boston-2017/summit-schedule/events/18404/large-deployment-team-planning-the-week)
 seems to be on Wednesday at 11h00 and the Recapping the week is the next slot 
at 11h50 
(https://www.openstack.org/summit/boston-2017/summit-schedule/events/18406/large-deployment-team-recapping-the-week)

Is it intended to have the two sessions so close together?

Tim
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-dev] [scientific][nova][cyborg] Special Hardware Forum session

2017-04-25 Thread Tim Bell
I think there will be quite a few ops folk… I can promise at least one ☺

Blair and I can also do a little publicity in 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18751/future-of-hypervisor-performance-tuning-and-benchmarking
 which is on Tuesday.

Tim

From: Rochelle Grober 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday 25 April 2017 19:11
To: Blair Bethwaite , 
"openstack-...@lists.openstack.org" , 
openstack-operators 
Cc: Matthew Riedemann , huangzhipeng 

Subject: Re: [openstack-dev] [scientific][nova][cyborg] Special Hardware Forum 
session


I know that some cyborg folks and nova folks are planning to be there. Now we 
need to drive some ops folks.


Sent from HUAWEI AnyOffice
From:Blair Bethwaite
To:openstack-...@lists.openstack.org,openstack-oper.
Date:2017-04-25 08:24:34
Subject:[openstack-dev] [scientific][nova][cyborg] Special Hardware Forum 
session

Hi all,

A quick FYI that this Forum session exists:
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18803/special-hardware
(etherpad: https://etherpad.openstack.org/p/BOS-forum-special-hardware).

It would be great to see a good representation from both the Nova and
Cyborg dev teams, and also ops ready to share their experience and
use-cases.

--
Cheers,
~Blairo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [scientific][nova][cyborg] Special Hardware Forum session

2017-04-25 Thread Tim Bell
I think there will be quite a few ops folk… I can promise at least one ☺

Blair and I can also do a little publicity in 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18751/future-of-hypervisor-performance-tuning-and-benchmarking
 which is on Tuesday.

Tim

From: Rochelle Grober 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday 25 April 2017 19:11
To: Blair Bethwaite , 
"openstack-dev@lists.openstack.org" , 
openstack-operators 
Cc: Matthew Riedemann , huangzhipeng 

Subject: Re: [openstack-dev] [scientific][nova][cyborg] Special Hardware Forum 
session


I know that some cyborg folks and nova folks are planning to be there. Now we 
need to drive some ops folks.


Sent from HUAWEI AnyOffice
From:Blair Bethwaite
To:openstack-dev@lists.openstack.org,openstack-oper.
Date:2017-04-25 08:24:34
Subject:[openstack-dev] [scientific][nova][cyborg] Special Hardware Forum 
session

Hi all,

A quick FYI that this Forum session exists:
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18803/special-hardware
(etherpad: https://etherpad.openstack.org/p/BOS-forum-special-hardware).

It would be great to see a good representation from both the Nova and
Cyborg dev teams, and also ops ready to share their experience and
use-cases.

--
Cheers,
~Blairo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Boston Forum Schedule Online

2017-04-13 Thread Tim Bell

Yes, I agree it is difficult… I was asking as there was an option ‘watch later’ 
on the summit schedule for the fishbowl sessions.

The option should probably be removed from the summit schedule web page so 
people don’t get disappointed later if that is not too complicated.

Tim

On 13.04.17, 09:48, "Thierry Carrez" <thie...@openstack.org> wrote:

Tim Bell wrote:
> Do you know if the Forum sessions will be video’d?

As far as I know they won't (same as old Design/Ops summit sessions).
It's difficult to produce, with people all over the room and not
necessarily using microphones.

The idea is to have the moderator post a follow-up thread for each
session, summarizing the outcome and opening up the discussion to
everyone who could not be present in person for one reason or another.

-- 
Thierry Carrez (ttx)

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Boston Forum Schedule Online

2017-04-13 Thread Tim Bell
Tom,

Do you know if the Forum sessions will be video’d?

Tim

On 13.04.17, 05:52, "Tom Fifield"  wrote:

Hello all,

The schedule for our the Forum is online:

https://www.openstack.org/summit/boston-2017/summit-schedule/#track=146


==> Session moderators, please start advertising your sessions & 
starting pre-discussions, to get the best, most well-informed people 
there possible!


==> Anyone else, please register your interest for sessions by 'ticking' 
them on the schedule. We use that information for room sizing.


Finally, thank you to the many who reviewed the draft. We fixed up those 
duplicates and made some slight changes to one or two sessions where 
there were conflicting talks. We're still working on contacting a couple 
of those marked as 'incomplete' in the tool, but they should be online 
shortly too.



Regards,


Doug, Emilien, Melvin, Mike, Shamail & Tom
Forum Scheduling Committee

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [neutron][sfc][fwaas][taas][horizon] where would we like to have horizon dashboard for neutron stadium projects?

2017-04-11 Thread Tim Bell
Are there any implications for the end user experience by going to different 
repos (such as requiring dedicated menu items)?

Tim

From: "Sridar Kandaswamy (skandasw)" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, 11 April 2017 at 17:01
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [neutron][sfc][fwaas][taas][horizon] where would 
we like to have horizon dashboard for neutron stadium projects?

Hi All:

From an FWaaS perspective, we also think (a) would be ideal.

Thanks

Sridar

From: Kevin Benton >
Reply-To: OpenStack List 
>
Date: Monday, April 10, 2017 at 4:20 PM
To: OpenStack List 
>
Subject: Re: [openstack-dev] [neutron][sfc][fwaas][taas][horizon] where would 
we like to have horizon dashboard for neutron stadium projects?

I think 'a' is probably the way to go since we can mainly rely on existing 
horizon guides for creating new dashboard repos.

On Apr 10, 2017 08:11, "Akihiro Motoki" 
> wrote:
Hi neutrinos (and horizoners),

As the title says, where would we like to have horizon dashboards for
neutron stadium projects?
There are several projects under the neutron stadium and they are trying
to add dashboard support.

I would like to raise this topic again. No dashboard support has landed since then.
Also, the Horizon team would like to move the in-tree neutron stadium dashboards
(VPNaaS and FWaaS v1) out of the horizon repo.

Possible approaches


Several possible options in my mind:
(a) dashboard repository per project
(b) dashboard code in individual project
(c) a single dashboard repository for all neutron stadium projects

Which one sounds better?

Pros and Cons


(a) dashboard repository per project
  example: a networking-sfc-dashboard repository for networking-sfc
  Pros:
   - Can use existing horizon-related project conventions and knowledge
     (directory structure, testing, translation support)
   - Not tied to neutron stadium inclusion. Each project can provide its
     dashboard support regardless of neutron stadium inclusion.
  Cons:
   - An additional repository is needed.

(b) dashboard code in the individual project
  example: a dashboard module inside networking-sfc
  Pros:
   - No additional repository
   - Not tied to neutron stadium inclusion. Each project can provide its
     dashboard support regardless of neutron stadium inclusion.
  Cons:
   - Requires extra effort to support neutron and horizon code in a single
     repository for testing and translation support. Each project needs to
     explore the way to do this.

(c) a single dashboard repository for all neutron stadium projects
   (something like neutron-advanced-dashboard)
  Pros:
   - No additional repository per project. Each project does not need a basic
     dashboard setup, which possibly makes things simpler.
  Cons:
   - Inclusion criteria depend on neutron stadium inclusion/exclusion
     (a similar discussion happened for the neutronclient OSC plugin).
     A project not yet in the neutron stadium may need another implementation.


My vote is (a) or (c) (to avoid mixing neutron and dashboard codes in a repo).

Note that dashboard support for features in the main neutron repository
is implemented in the horizon repository, as we discussed several months ago.
As an example, trunk support is being developed in the horizon repo.

Thanks,
Akihiro

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] openstack operators meetups team meeting 2017-4-11

2017-04-11 Thread Tim Bell
That looks great.

Do we have dates for the Ops meetup?

Tim

From: Chris Morgan 
Date: Tuesday, 11 April 2017 at 17:53
To: openstack-operators 
Subject: [Openstack-operators] openstack operators meetups team meeting 
2017-4-11

Today's meeting was very thinly attended (minutes and log below). I would like 
to encourage as many OpenStack operators as possible (particularly those on the 
meetups team) to make the next meeting (2017-4-18 at 15:00 UTC) or, failing that, 
to let it be known if this time slot is no longer working.

Next week I am going to propose we vote on accepting the proposal for the next 
mid-cycle meeting to be held in Mexico (details are here 
https://docs.google.com/document/d/1NdMCOTPP_ZmeF2Ak1mQB1bCOFHDkA5P2l6n6kdb8Kls/edit#)

Also we need to make some progress on the arrangements for the upcoming Boston 
Forum at the openstack summit, for example drumming up some more moderators. 
I'm going!

Cheers

Chris

Minutes:
Meeting ended Tue Apr 11 15:27:03 2017 UTC. Information about MeetBot at 
http://wiki.debian.org/MeetBot . (v 0.1.4)
11:27 AM Minutes: 
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2017/ops_meetup_team.2017-04-11-15.04.html
11:27 AM Minutes (text): 
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2017/ops_meetup_team.2017-04-11-15.04.txt
11:27 AM Log: 
http://eavesdrop.openstack.org/meetings/ops_meetup_team/2017/ops_meetup_team.2017-04-11-15.04.log.html


--
Chris Morgan >
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [scientific] Resource reservation requirements (Blazar) - Forum session

2017-04-04 Thread Tim Bell
Some combination of spot/OPIE and Blazar would seem doable as long as the 
resource provider reserves capacity appropriately (i.e. spot resources >> blazar 
committed, along with no non-spot requests for the same aggregate).

Is this feasible?

Tim

On 04.04.17, 19:21, "Jay Pipes"  wrote:

On 04/03/2017 06:07 PM, Blair Bethwaite wrote:
> Hi Jay,
>
> On 4 April 2017 at 00:20, Jay Pipes  wrote:
>> However, implementing the above in any useful fashion requires that 
Blazar
>> be placed *above* Nova and essentially that the cloud operator turns off
>> access to Nova's  POST /servers API call for regular users. Because if 
not,
>> the information that Blazar acts upon can be simply circumvented by any 
user
>> at any time.
>
> That's something of an oversimplification. A reservation system
> outside of Nova could manipulate Nova host-aggregates to "cordon off"
> infrastructure from on-demand access (I believe Blazar already uses
> this approach), and it's not much of a jump to imagine operators being
> able to twiddle the available reserved capacity in a finite cloud so
> that reserved capacity can be offered to the subset of users/projects
> that need (or perhaps have paid for) it.

Sure, I'm following you up until here.

> Such a reservation system would even be able to backfill capacity
> between reservations. At the end of the reservation the system
> cleans-up any remaining instances and preps for the next
> reservation.

By "backfill capacity between reservations", do you mean consume 
resources on the compute hosts that are "reserved" by this paying 
customer at some date in the future? i.e. Spot instances that can be 
killed off as necessary by the reservation system to free resources to 
meet its reservation schedule?

> The are a couple of problems with putting this outside of Nova though.
> The main issue is that pre-emptible/spot type instances can't be
> accommodated within the on-demand cloud capacity.

Correct. The reservation system needs complete control over a subset of 
resource providers to be used for these spot instances. It would be like 
a hotel reservation system being used for a motel where cars could 
simply pull up to a room with a vacant sign outside the door. The 
reservation system would never be able to work on accurate data unless 
some part of the motel's rooms were carved out for reservation system to 
use and cars to not pull up and take.

 >  You could have the
> reservation system implementing this feature, but that would then put
> other scheduling constraints on the cloud in order to be effective
> (e.g., there would need to be automation changing the size of the
> on-demand capacity so that the maximum pre-emptible capacity was
> always available). The other issue (admittedly minor, but still a
> consideration) is that it's another service - personally I'd love to
> see Nova support these advanced use-cases directly.

Welcome to the world of microservices. :)

-jay

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] FW: [quotas] Unified Limits Conceptual Spec RFC

2017-03-30 Thread Tim Bell

For those that are interested in nested quotas, there is a proposal forming in 
openstack-dev on how to address this (and any comments on the review should be 
made at https://review.openstack.org/#/c/363765).

This proposal has the following benefits (if I can summarise):

- Quota limits will be centrally managed in Keystone so the quota data will be 
close to the project for creation/deletion/admin.
- The usage data remains within each project avoiding dependencies and risks of 
usage data getting out of sync.
- With a central store for quotas, there is increased opportunity for 
consistency. Given the complexity of quotas and nested projects, this would 
improve operator and end user understanding. The exact model is still for 
confirmation though.

We’ll have a forum discussion (http://forumtopics.openstack.org/cfp/details/9) 
in Boston too but feel free to give input to 
https://review.openstack.org/#/c/363765 so we can use Boston as the opportunity 
to agree on the approach and next steps.

Tim
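
For illustration, a minimal sketch of the enforcement split summarised above: 
limits held centrally, usage counted by each service. The two lookup callables 
are hypothetical placeholders, not real Keystone or service APIs.

    # Illustrative only: central limit + service-local usage check.
    def can_allocate(project_id, resource, requested, get_limit, get_usage):
        """get_limit: central (Keystone) lookup; get_usage: local usage count."""
        limit = get_limit(project_id, resource)
        usage = get_usage(project_id, resource)
        return usage + requested <= limit

    limits = {('p1', 'cores'): 20}   # stand-in for the central registry
    usage = {('p1', 'cores'): 18}    # stand-in for the service's own usage data
    print(can_allocate('p1', 'cores', 4,
                       lambda p, r: limits[(p, r)],
                       lambda p, r: usage[(p, r)]))   # False: would exceed 20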

On 30.03.17, 19:52, "Sean Dague"  wrote:

The near final draft of the unified limits spec is up now -
https://review.openstack.org/#/c/440815/

If you have not yet wandered in, now is the time, we're going to make
the final go / no go the end of this week.

-Sean

On 03/17/2017 06:36 AM, Sean Dague wrote:
> Background:
> 
> At the Atlanta PTG there was yet another attempt to get hierarchical
> quotas more generally addressed in OpenStack. A proposal was put forward
> that considered storing the limit information in Keystone
> (https://review.openstack.org/#/c/363765/). While there were some
> concerns on details that emerged out of that spec, the concept of the
> move to Keystone was actually really well received in that room by a
> wide range of parties, and it seemed to solve some interesting questions
> around project hierarchy validation. We were perilously close to having
> a path forward for a community request that's had a hard time making
> progress over the last couple of years.
> 
> Let's keep that flame alive!
> 
> 
> Here is the proposal for the Unified Limits in Keystone approach -
> https://review.openstack.org/#/c/440815/. It is intentionally a high
> level spec that largely lays out where the conceptual levels of control
> will be. It intentionally does not talk about specific quota models
> (there is a follow on that is doing some of that, under the assumption
> that the exact model(s) supported will take a while, and that the
> keystone interfaces are probably not going to substantially change based
> on model).
> 
> We're shooting for a 2 week comment cycle here to then decide if we can
> merge and move forward during this cycle or not. So please
> comment/question now (either in the spec or here on the mailing list).
> 
> It is especially important that we get feedback from teams that have
> limits implementations internally, as well as any that have started on
> hierarchical limits/quotas (which I believe Cinder is the only one).
> 
> Thanks for your time, and look forward to seeing comments on this.
> 
>   -Sean
> 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] Project Navigator Updates - Feedback Request

2017-03-24 Thread Tim Bell
Lauren,

Can we also update the sample configurations? We should certainly have Neutron 
now in the HTC (since the nova-network deprecation).

Tim

From: Lauren Sell 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, 24 March 2017 at 17:57
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] Project Navigator Updates - Feedback Request

Hi everyone,

We’ve been talking for some time about updating the project navigator, and we 
have a draft ready to share for community feedback before we launch and 
publicize it. One of the big goals coming out of the joint TC/UC/Board meeting 
a few weeks ago[1] was to help better communicate ‘what is openstack?’ and this 
is one step in that direction.

A few goals in mind for the redesign:
- Represent all official, user-facing projects and deployment services in the 
navigator
- Better categorize the projects by function in a way that makes sense to 
prospective users (this may evolve over time as we work on mapping the 
OpenStack landscape)
- Help users understand which projects are mature and stable vs emerging
- Highlight popular project sets and sample configurations based on different 
use cases to help users get started

For a bit of context, we’re working to give each OpenStack official project a 
stronger platform as we think of OpenStack as a framework of composable 
infrastructure services that can be used individually or together as a powerful 
system. This includes the project mascots (so we in effect have logos to 
promote each component separately), updates to the project navigator, and 
bringing back the “project updates” track at the Summit to give each PTL/core 
team a chance to provide an update on their project roadmap (to be recorded and 
promoted in the project navigator among other places!).

We want your feedback on the project navigator v2 before it launches. Please 
take a look at the current version on the staging site and provide feedback on 
this thread.

http://devbranch.openstack.org/software/project-navigator/

Please review the overall concept and the data and description for your project 
specifically. The data is primarily pulled from TC tags[2] and Ops tags[3]. 
You’ll notice some projects have more information available than others for 
various reasons. That’s one reason we decided to downplay the maturity metric 
for now and the data on some pages is hidden. If you think your project is 
missing data, please check out the repositories and submit changes or again 
respond to this thread.

Also know this will continue to evolve and we are open to feedback. As I 
mentioned, a team that formed at the joint strategy session a few weeks ago is 
tackling how we map OpenStack projects, which may be reflected in the 
categories. And I suspect we’ll continue to build out additional tags and 
better data sources to be incorporated.

Thanks for your feedback and help.

Best,
Lauren

[1] 
http://superuser.openstack.org/articles/community-leadership-charts-course-openstack/
[2] https://governance.openstack.org/tc/reference/tags/
[3] https://wiki.openstack.org/wiki/Operations/Tags

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-22 Thread Tim Bell

> On 22 Mar 2017, at 00:53, Alex Schultz  wrote:
> 
> On Tue, Mar 21, 2017 at 5:35 PM, John Dickinson  wrote:
>> 
>> 
>> On 21 Mar 2017, at 15:34, Alex Schultz wrote:
>> 
>>> On Tue, Mar 21, 2017 at 3:45 PM, John Dickinson  wrote:
 I've been following this thread, but I must admit I seem to have missed 
 something.
 
 What problem is being solved by storing per-server service configuration 
 options in an external distributed CP system that is currently not 
 possible with the existing pattern of using local text files?
 
>>> 
>>> This effort is partially to help the path to containerization where we
>>> are delivering the service code via container but don't want to
>>> necessarily deliver the configuration in the same fashion.  It's about
>>> ease of configuration where moving service -> config files (on many
>>> hosts/containers) to service -> config via etcd (single source
>>> cluster).  It's also about an alternative to configuration management
>>> where today we have many tools handling the files in various ways
>>> (templates, from repo, via code providers) and trying to come to a
>>> more unified way of representing the configuration such that the end
>>> result is the same for every deployment tool.  All tools load configs
>>> into $place and services can be configured to talk to $place.  It
>>> should be noted that configuration files won't go away because many of
>>> the companion services still rely on them (rabbit/mysql/apache/etc) so
>>> we're really talking about services that currently use oslo.
>> 
>> Thanks for the explanation!
>> 
>> So in the future, you expect a node in a clustered OpenStack service to be 
>> deployed and run as a container, and then that node queries a centralized 
>> etcd (or other) k/v store to load config options. And other services running 
>> in the (container? cluster?) will load config from local text files managed 
>> in some other way.
> 
> No the goal is in the etcd mode, that it  may not be necessary to load
> the config files locally at all.  That being said there would still be
> support for having some configuration from a file and optionally
> provide a kv store as another config point.  'service --config-file
> /etc/service/service.conf --config-etcd proto://ip:port/slug'
> 
>> 
>> No wait. It's not the *services* that will load the config from a kv 
>> store--it's the config management system? So in the process of deploying a 
>> new container instance of a particular service, the deployment tool will 
>> pull the right values out of the kv system and inject those into the 
>> container, I'm guessing as a local text file that the service loads as 
>> normal?
>> 
> 
> No the thought is to have the services pull their configs from the kv
> store via oslo.config.  The point is hopefully to not require
> configuration files at all for containers.  The container would get
> where to pull it's configs from (ie. http://11.1.1.1:2730/magic/ or
> /etc/myconfigs/).  At that point it just becomes another place to load
> configurations from via oslo.config.  Configuration management comes
> in as a way to load the configs either as a file or into etcd.  Many
> operators (and deployment tools) are already using some form of
> configuration management so if we can integrate in a kv store output
> option, adoption becomes much easier than making everyone start from
> scratch.
> 
>> This means you could have some (OpenStack?) service for inventory management 
>> (like Karbor) that is seeding the kv store, the cloud infrastructure 
>> software itself is "cloud aware" and queries the central distributed kv 
>> system for the correct-right-now config options, and the cloud service 
>> itself gets all the benefits of dynamic scaling of available hardware 
>> resources. That's pretty cool. Add hardware to the inventory, the cloud 
>> infra itself expands to make it available. Hardware fails, and the cloud 
>> infra resizes to adjust. Apps running on the infra keep doing their thing 
>> consuming the resources. It's clouds all the way down :-)
>> 
>> Despite sounding pretty interesting, it also sounds like a lot of extra 
>> complexity. Maybe it's worth it. I don't know.
>> 
> 
> Yea there's extra complexity at least in the
> deployment/management/monitoring of the new service or maybe not.
> Keeping configuration files synced across 1000s of nodes (or
> containers) can be just as hard however.
> 

Would there be a mechanism to stage configuration changes (such as a 
QA/production environment) or have different configurations for different 
hypervisors?

We have some of our hypervisors set up for high performance, which needs a slightly 
different nova.conf (such as CPU passthrough).

Tim
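
To illustrate how per-environment and per-hypervisor overrides could be layered 
on top of a kv store, here is a very rough sketch; the key layout and the kv_get 
callable are assumptions for this example, not anything oslo.config provides 
today.

    # Illustrative only: host-specific key, then environment-wide key,
    # then the traditional local file.
    import configparser

    def lookup_option(kv_get, env, service, host, group, option, local_file):
        keys = ['/config/%s/%s/%s/%s/%s' % (env, service, host, group, option),
                '/config/%s/%s/%s/%s' % (env, service, group, option)]
        for key in keys:
            value = kv_get(key)       # e.g. a thin wrapper around an etcd client
            if value is not None:
                return value
        cfg = configparser.ConfigParser()   # fall back to the config file
        cfg.read(local_file)
        return cfg.get(group, option, fallback=None)

    # A QA/production split becomes env='qa' vs env='prod', and a CPU-passthrough
    # hypervisor gets its own host-level key.
    store = {'/config/prod/nova/hv-hpc-01/libvirt/cpu_mode': 'host-passthrough'}
    print(lookup_option(store.get, 'prod', 'nova', 'hv-hpc-01',
                        'libvirt', 'cpu_mode', '/etc/nova/nova.conf'))
    # -> host-passthrough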

>> Thanks again for the explanation.
>> 
>> 
>> --John
>> 
>> 
>> 
>> 
>>> 
>>> Thanks,
>>> -Alex
>>> 
 
 --John
 
 
 
 
 On 21 Mar 2017, at 14:26, Davanum Srinivas wrote:
 
> 

Re: [Openstack] nova-network -> neutron migration docs and stories?

2017-03-18 Thread Tim Bell
Ricardo from CERN gave a talk in Barcelona about our experiences. 
https://www.youtube.com/watch?v=54wp1yzC-d8

eBay was one of the first to migrate - 
http://superuser.openstack.org/articles/ebay-in-production-migration-from-nova-network-to-neutron/

Tim

From: joe 
Date: Friday, 17 March 2017 at 22:52
To: Andrew Bogott 
Cc: "openstack@lists.openstack.org" 
Subject: Re: [Openstack] nova-network -> neutron migration docs and stories?

Hi Andrew,

NeCTAR published a suite of scripts for doing a nova-network to neutron 
migration: https://github.com/NeCTAR-RC/novanet2neutron

IIRC, another organization reported success with these scripts a few months ago 
on the openstack-operators list.

I'm currently doing some trial runs and all looks good. I had to make some 
slight modifications to account for IPv6 and floating IPs, but the scripts are 
very simple and readable, so it was easy to do. I'll probably post those 
modifications to Github in the next week or two.

We'll be doing the actual migration in May.

Hope that helps,
Joe


On Fri, Mar 17, 2017 at 2:18 PM, Andrew Bogott 
> wrote:
Googling for nova-network migration advice gets me a lot of hits but many 
are fragmentary and/or incomplete [1][2]. I know that lots of people have gone 
through this process, though, and that there are probably as many different 
solutions as there are migration stories.

So:  If you have done this migration, please send me links! Blog posts, 
docpages that you found useful, whatever you have to offer.  We have lots of 
ideas about how to move forward, but it's always nice to not repeat other 
people's mistakes.  We're running Liberty with flat dhcp and floating IPs.

Thanks!

-Andrew

[1] 
https://wiki.openstack.org/wiki/Neutron/MigrationFromNovaNetwork/HowTo#How_to_test_migration_process
 ' TODO - fill in the migration process script here'

[2] 
https://www.slideshare.net/julienlim/openstack-nova-network-to-neutron-migration-survey-results
 Slide 4: 'Develop tools to facilitate migration.'  Did they?

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : 
openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [keystone][all] Reseller - do we need it?

2017-03-17 Thread Tim Bell
Lance,

I had understood that the reseller use case was about having users/groups at 
different points in the tree.

I think the basic resource management is being looked at as part of the nested 
quotas functionality. For CERN, we’d look to delegate the quota and roles 
management but not support sub-tree user/groups.

Tim

From: Lance Bragstad 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, 17 March 2017 at 00:23
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [keystone][all] Reseller - do we need it?


On Thu, Mar 16, 2017 at 5:54 PM, Fox, Kevin M 
> wrote:
At our site, we have some larger projects that would be really nice if we could 
just give a main project all the resources they need, and let them suballocate 
it as their own internal subprojects needs change. Right now, we have to deal 
with all the subprojects directly. The reseller concept may fit this use case?
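
(For reference, the project tree itself can already be built today; what is
missing is delegated quota management over it. A sketch, with hypothetical
project names:

  openstack project create --parent main-project sub-team-a
  openstack project create --parent main-project sub-team-b
)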

Sounds like this might also be solved by better RBAC that allows real project 
administrators to control their own subtrees. Is there a use case to limit 
visibility either up or down the tree? If not, would it be a nice-to-have?


Thanks,
Kevin

From: Lance Bragstad [lbrags...@gmail.com]
Sent: Thursday, March 16, 2017 2:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [keystone][all] Reseller - do we need it?
Hey folks,

The reseller use case [0] has been popping up frequently in various discussions 
[1], including unified limits.

For those who are unfamiliar with the reseller concept, it came out of early 
discussions regarding hierarchical multi-tenancy (HMT). It essentially allows a 
certain level of opaqueness within project trees. This opaqueness would make it 
easier for providers to "resell" infrastructure, without having 
customers/providers see all the way up and down the project tree, hence it was 
termed reseller. Keystone originally had some ideas of how to implement this 
after the HMT implementation laid the ground work, but it was never finished.

With it popping back up in conversations, I'm looking for folks who are willing 
to represent the idea. Participating in this thread doesn't mean you're on the 
hook for implementing it or anything like that.

Are you interested in reseller and willing to provide use-cases?



[0] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/mitaka/reseller.html#problem-description

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-12 Thread Tim Bell

> On 11 Mar 2017, at 08:19, Clint Byrum  wrote:
> 
> Excerpts from Christopher Aedo's message of 2017-03-10 19:30:18 -0800:
>> On Fri, Mar 10, 2017 at 6:20 PM, Clint Byrum  wrote:
>>> Excerpts from Fox, Kevin M's message of 2017-03-10 23:45:06 +:
 So, this is the kind of thinking I'm talking about... OpenStack today is
more than just IaaS in the tent. Trove (DBaaS), Sahara (Hadoop,Spark,etc
 aaS), Zaqar (Messaging aaS) and many more services. But they seem to be
 treated as second class citizens, as they are not "IaaS".
 
>>> 
>>> It's not that they're second class citizens. It's that their community
>>> is smaller by count of users, operators, and developers. This should not
>>> come as a surprise, because the lowest common denominator in any user
>>> base will always receive more attention.
>>> 
> Why should it strive to be anything except an excellent building block
 for other technologies?
 
 You misinterpret my statement. I'm in full agreement with you. The
 above services should be excellent building blocks too, but are suffering
 from lack of support from the IaaS layer. They deserve the ability to
 be excellent too, but need support/vision from the greater community
 that hasn't been forthcoming.
 
>>> 
>>> You say it like there's some over arching plan to suppress parts of the
>>> community and there's a pack of disgruntled developers who just can't
>>> seem to get OpenStack to work for Trove/Sahara/AppCatalog/etc.
>>> 
>>> We all have different reasons for contributing in the way we do.  Clearly,
>>> not as many people contribute to the Trove story as do the pure VM-on-nova
>>> story.
>>> 
 I agree with you, we should embrace the container folks and not treat
 them as separate. I think thats critical if we want to allow things
 like Sahara or Trove to really fulfil their potential. This is the path
 towards being an OpenSource AWS competitor, not just for being able to
 request vm's in a cloudy way.
 
 I think that looks something like:
 OpenStack Advanced Service (trove, sahara, etc) -> Kubernetes ->
 Nova VM or Ironic Bare Metal.
 
>>> 
>>> That's a great idea. However, AFAICT, Nova is _not_ standing in Trove,
>>> Sahara, or anyone else's way from doing this. Seriously, try it. I'm sure
>>> it will work.  And in so doing, you will undoubtedly run into friction
>>> from the APIs. But unless you can describe that _now_, you have to go try
>>> it and tell us what broke first. And then you can likely submit feature
>>> work to nova/neutron/cinder to make it better. I don't see anything in
>>> the current trajectory of OpenStack that makes this hard. Why not just do
>>> it? The way you ask, it's like you have a team of developers just sitting
>>> around shaving yaks waiting for an important OpenStack development task.
>>> 
>>> The real question is why aren't Murano, Trove and Sahara in most current
>>> deployments? My guess is that it's because most of our current users
>>> don't feel they need it. Until they do, Trove and Sahara will not be
>>> priorities. If you want them to be priorities _pay somebody to make them
>>> a priority_.
>> 
>> This particular point really caught my attention.  You imply that
>> these additional services are not widely deployed because _users_
>> don't want them.  The fact is most users are completely unaware of
>> them because these services require the operator of the cloud to
>> support them.  In fact they often require the operator of the cloud to
>> support them from the initial deployment, as these services (and
>> *most* OpenStack services) are frighteningly difficult to add to an
>> already deployed cloud without downtime and high risk of associated
>> issues.
>> 
>> I think it's unfair to claim these services are unpopular because
>> users aren't asking for them when it's likely users aren't even aware
>> of them (do OVH, Vexxhost, Dreamhost, Rackspace or others provide a
>> user-facing list of potential OpenStack services with a voting option?
>> Not that I've ever seen!)
>> 
>> I bring this up to point out how much more popular ALL of these
>> services would be if the _users_ were able to enable them without
>> requiring operator intervention and support.
>> 
>> Based on our current architecture, it's nearly impossible for a new
>> project to be deployed on a cloud without cloud-level admin
>> privileges.  Additionally almost none of the projects could even work
>> this way (with Rally being a notable exception).  I guess I'm kicking
>> this dead horse because for a long time I've argued we need to back
>> away from the tightly coupled nature of all the projects, but
>> (speaking of horses) it seems that horse is already out of the barn.
>> (I really wish I could work in one more proverb dealing with horses
>> but it's getting late on a Friday so I'll stop now.)
>> 
> 
> I see your point, and believe it is valid.
> 
> 

Re: [Openstack-operators] RFC - hierarchical quota models

2017-03-08 Thread Tim Bell

> On 7 Mar 2017, at 11:52, Sean Dague  wrote:
> 
> One of the things that came out of the PTG was perhaps a new path
> forward on hierarchical limits that involves storing of limits in
> keystone doing counting on the projects. Members of the developer
> community are working through all that right now, that's not really what
> this is about.
> 
> As a related issue, it seemed that every time that we talk about this
> space, people jump into describing how they think the counting /
> enforcement would work. It became clear that people were overusing the
> word "overbooking" to the point where it didn't have a lot of meaning.
> 
> https://review.openstack.org/#/c/441203/ is a reference document that I
> started in writing out every model I thought I heard people talk about,
> the rules with it, and starting to walk through the kind of algorithm
> needed to update limits, as well as check quota on ones that seem like
> we might move forward with.
> 
> It is full of blockdiag markup, which makes the rendered HTML the best
> way to view it -
> http://docs-draft.openstack.org/03/441203/11/check/gate-keystone-specs-docs-ubuntu-xenial/c3fc2b3//doc/build/html/specs/keystone/backlog/hierarchical-quota-scenarios.html
> 
> 
> There are specific question to the operator community here:
> 
> 
> Are there other models you believe are not represented that you think
> should be considered? if so, what are the rules of them so I can throw
> them into the document.
> 

Thanks.  In the interest of completeness, I’ll add one more scenario to the mix 
but I would not look for this as part of the functionality of the 1st release.

One item we have encountered in the past is how to reduce quota for projects. 
If a child project quota is to be reduced but it is running the maximum number 
of VMs, the parent project administrator has to wait for the child to do the 
deletion before they can reduce the quota. Being able to do this would mean 
that new resource creation would be blocked but that existing resources would
continue to run (until the child project admin gets round to choosing which of
the many running VMs to delete).

However, this does bring in significant additional complexity so unless there 
is an easy way of modelling it, I’d suggest this for nested quotas v2 at the
earliest.

Tim

> Would love to try to model everything under consideration here. It seems
> like the conversations go around in circles a bit because everyone is
> trying to keep everything in working memory, and paging out parts.
> Diagrams hopefully ensure we all are talking about the same things.
> 
>   -Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-20 Thread Tim Bell
Is there cloud-init support for this mode or do we still need to mount as a 
config drive?
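
For context, the metadata can be consumed today by mounting the config drive,
or, with the NVDIMM approach discussed below, by reading the exposed device
directly. A quick sketch (the /dev/pmem0 name is hypothetical and depends on
how the guest exposes the namespace):

  # config drive as used today
  mount -o ro /dev/disk/by-label/config-2 /mnt/config
  cat /mnt/config/openstack/latest/meta_data.json

  # NVDIMM-style exposure: just read the device
  hexdump -C /dev/pmem0 | head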

Tim

On 20.02.17, 17:50, "Jeremy Stanley"  wrote:

On 2017-02-20 15:46:43 + (+), Daniel P. Berrange wrote:
> The data is exposed either as a block device or as a character device
> in Linux - which one depends on how the NVDIMM is configured. Once
> opening the right device you can simply mmap() the FD and read the
> data. So exposing it as a file under sysfs doesn't really buy you
> anything better.

Oh! Fair enough, if you can already access it as a character device
then I agree that solves the use cases I was considering.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-16 Thread Tim Bell

On 16 Feb 2017, at 19:42, Fox, Kevin M 
> wrote:

+1. The assumption was market forces will cause the best OpenStack deployment 
tools to win. But the sad reality is, market forces are causing people to look 
for non OpenStack solutions instead as the pain is still too high.

While k8s has a few different deployment tools currently, they are focused on 
getting the small bit of underlying plumbing deployed. Then you use the common 
k8s itself to deploy the rest. Adding a dashboard, dns, ingress, sdn, other 
component is easy in that world.

IMO, OpenStack needs to do something similar. Standardize a small core and get 
that easily deployable, then make it easy to deploy/upgrade the rest of the big 
tent projects on top of that, not next to it as currently is being done.

Thanks,
Kevin

Unfortunately, the more operators and end users question the viability of a 
specific project, the less likely it is to be adopted.
It is a very, very difficult discussion with an end user to explain that
function X is no longer available because the latest OpenStack upgrade had to
be done for security/functional/stability reasons.
The availability of a function may also have been one of the positives for the 
OpenStack selection so finding a release or two later that it is no longer in 
the portfolio is difficult.
The deprecation policy really helps so we can give a good notice but this 
assumes an equivalent function is available. For example, the move from the
built-in Nova EC2 API to the separate EC2 API project was one where we had
enough notice to test the new solution in parallel and then migrate with
minimum disruption.  Moving an entire 
data centre from Chef to Puppet or running a parallel toolchain, for example, 
has a high cost.
Given the massive functionality increase in other clouds, it will be tough to 
limit the OpenStack offering to the small core. However, expanding with 
unsustainable projects is also not attractive.
Tim


From: Joshua Harlow [harlo...@fastmail.com]
Sent: Thursday, February 16, 2017 10:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [chef] Making the Kitchen Great Again: A 
Retrospective on OpenStack & Chef

Alex Schultz wrote:
On Thu, Feb 16, 2017 at 9:12 AM, Ed Leafe wrote:
On Feb 16, 2017, at 10:07 AM, Doug Hellmann wrote:

When we signed off on the Big Tent changes we said competition
between projects was desirable, and that deployers and contributors
would make choices based on the work being done in those competing
projects. Basically, the market would decide on the "optimal"
solution. It's a hard message to hear, but that seems to be what
is happening.
This.

We got much better at adding new things to OpenStack. We need to get better at 
letting go of old things.

-- Ed Leafe




I agree that the market will dictate what continues to survive, but if
you're not careful you may be speeding up the decline as the end user
(deployer/operator/cloud consumer) will switch completely to something
else because it becomes too difficult to continue to consume via what
used to be there and no longer is.  I thought the whole point was to
not have vendor lock-in.  Honestly I think the focus is too much on
the development and not enough on the consumption of the development
output.  What are the point of all these features if no one can
actually consume them.


+1 to that.

I've been in the boat of development and consumption of it for my
*whole* journey in openstack land and I can say the product as a whole
seems 'underbaked' with regards to the way people consume the
development output. It seems we have focused on how to do the dev. stuff
nicely and a nice process there, but sort of forgotten about all that
being quite useless if no one can consume them (without going through
much pain or paying a vendor).

This has, IMHO, been a factor in why certain companies (and the
people they support) are exiting openstack and just going elsewhere.

I personally don't believe the fix here is to 'let the market forces figure
it out for us' (what a slow & horrible way to let this play out; I'd
almost rather go pull my fingernails out). I do believe it will require
making opinionated decisions, which we have never been very good at.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] Hierarchical quotas at the PTG?

2017-02-12 Thread Tim Bell

On 12 Feb 2017, at 12:13, Boris Bobrov 
> wrote:

I would like to talk about it too.

On 02/10/2017 11:56 PM, Matt Riedemann wrote:
Operators want hierarchical quotas [1]. Nova doesn't have them yet and
we've been hesitant to invest scarce developer resources in them since
we've heard that the implementation for hierarchical quotas in Cinder
has some issues. But it's unclear to some (at least me) what those
issues are.

I don't know what the actual issue is, but from the keystone POV
the issue is that it basically replicates the project tree that is stored
in keystone. On top of the usual replication issues, there is another one --
it requires too many permissions. Basically, it requires the service user
to be cloud admin.

I have not closely followed the cinder implementation since the CERN and BARC
Mumbai focus has been more around Nova.

The various pieces of feedback I have had were regarding how to handle
overcommit in the cinder proposal. A significant share of the operator
community would like to allow:

- No overcommit for the ‘top level’ project (i.e. you can’t use more than you
are allocated)
- Sub-project overcommit (i.e. promising your sub-projects more is OK, and the
sum of the commitments to sub-projects may exceed the project’s quota, but an
error should be raised if that is actually exceeded)



Has anyone already planned on talking about hierarchical quotas at the
PTG, like the architecture work group?

I know there was a bunch of razzle dazzle before the Austin summit about
quotas, but I have no idea what any of that led to. Is there still a
group working on that and can provide some guidance here?

In my opinion, projects should not re-implement quotas every time.
I would like to have a common library for enforcing quotas (usages)
and a service for storing quotas (limits). We should also think of a
way to transfer the necessary project subtree from keystone to the quota
enforcer.

We could store quota limits in keystone and distribute them in the token
body, for example. Here is a POC that we did some time ago --
https://review.openstack.org/#/c/403588/ and
https://review.openstack.org/#/c/391072/
But it still has the issue with permissions.


There has been an extended discussion since the Boson proposal at the Hong Kong 
summit on how to handle quotas, where a full quota service was proposed.

A number of ideas have emerged since then:

- Quota limits stored in Keystone with the project data
- An oslo library to support checking that a resource request would be OK

One Forum session at the summit is due to be on this topic.

Some of the academic use cases are described in 
https://openstack-in-production.blogspot.fr/2016/04/resource-management-at-cern.html
 but commercial reseller models are valid here where

- company A has valuable resources to re-sell (e.g. flood risk and associated 
models)
- company B signs an agreement with Company A (e.g. an insurance company wants 
to use flood risk data as factor in their cost models)

The natural way of delivering this is that ‘A’ gives a pricing model based on 
‘B’’s consumption of compute and storage resources.

Tim



[1]
http://lists.openstack.org/pipermail/openstack-operators/2017-January/012450.html



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][glance][glare][all] glance/glare/artifacts/images at the PTG

2017-02-12 Thread Tim Bell
Although there has not been much discussion on this point on the mailing list, 
I feel we do need to find the right level of granularity for ‘mainstream’ 
projects:

For CERN, we look for the following before offering a project to our end users:

- Distro packaging (in our case RPMs through RDO)
- Puppet modules
- Openstack client support (which brings Kerberos/X.509 authentication)
- Install, admin and user docs
- Project diversity for long term sustainability

We have many use cases of ‘resellers’ where one project provides a deliverable
for others to consume; some degree of community image sharing is arriving, and
these are the same problems to be faced for artefacts and application catalogues
(such as Heat and Magnum).

For me, which project provides this for images and/or artefacts is a choice for
the technical community, but consistent semantics would be greatly appreciated.
Discussions with our end users such as “I need a Heat template for X, but this
needs community image Y, and the visibility rules mean that one needs to be
shared in advance while the other I need to subscribe to” are difficult, and
that discourages uptake.

A cloud user should be able to click on community offered ‘R-as-a-Service’ in 
the application catalog GUI, and that’s all.

Tim

On 10.02.17, 18:39, "Brian Rosmaita"  wrote:

I want to give all interested parties a heads up that I have scheduled a
session in the Macon room from 9:30-10:30 a.m. on Thursday morning
(February 23).

Here's what we need to discuss.  This is from my perspective as Glance
PTL, so it's going to be Glance-centric.  This is a quick narrative
description; please go to the session etherpad [0] to turn this into a
specific set of discussion items.

Glance is the OpenStack image cataloging and delivery service.  A few
cycles ago (Juno?), someone noticed that maybe Glance could be
generalized so that instead of storing image metadata and image data,
Glance could store arbitrary digital "stuff" along with metadata
describing the "stuff".  Some people (like me) thought that this was an
obvious direction for Glance to take, but others (maybe wiser, cooler
heads) thought that Glance needed to focus on image cataloging and
delivery and make sure it did a good job at that.  Anyway, the Glance
mission statement was changed to include artifacts, but the Glance
community never embraced them 100%, and in Newton, Glare split off as
its own project (which made sense to me, there was too much unclarity in
Glance about how Glare fit in, and we were holding back development, and
besides we needed to focus on images), and the Glance mission statement
was re-amended specifically to exclude artifacts and focus on images and
metadata definitions.

OK, so the current situation is:
- Glance "does" image cataloging and delivery and metadefs, and that's
all it does.
- Glare is an artifacts service (cataloging and delivery) that can also
handle images.

You can see that there's quite a bit of overlap.  I gave you the history
earlier because we did try to work as a single project, but it did not
work out.

So, now we are in 2017.  The OpenStack development situation has been
fragile since the second half of 2016, with several big OpenStack
sponsors pulling way back on the amount of development resources being
contributed to the community.  This has left Glare in the position where
it cannot qualify as a Big Tent project, even though there is interest
in artifacts.

Mike Fedosin, the PTL for Glare, has asked me about Glare becoming part
of the Glance project again.  I will be completely honest, I am inclined
to say "no".  I have enough problems just getting Glance stuff done (for
example, image import missed Ocata).  But in addition to doing what's
right for Glance, I want to do what's right for OpenStack.  And I look
at the overlap and think ...

Well, what I think is that I don't want to go through the Juno-Newton
cycles of argument again.  And we have to do what is right for our users.

The point of this session is to discuss:
- What does the Glance community see as the future of Glance?
- What does the wider OpenStack community (TC) see as the future of Glance?
- Maybe, more importantly, what does the wider community see as the
obligations of Glance?
- Does Glare fit into this vision?
- What kind of community support is there for Glare?

My reading of Glance history is that while some people were on board
with artifacts as the future of Glance, there was not a sufficient
critical mass of the Glance community that endorsed this direction and
that's why things unravelled in Newton.  I don't want to see that happen
again.  Further, I don't think the Glance community got the word out to
 

Re: [Openstack-operators] [User-committee] [openstack-dev] Large Contributing OpenStack Operators working group?

2017-02-03 Thread Tim Bell
+1 for the WG summary and sharing priorities.

Equally, exploring how we can make use of common collaboration tools for all WG 
would be beneficial. 

There is much work to do to get the needs translated to code/doc/tools and it 
would be a pity if we are not sharing fully across WGs due to different 
technology choices.

Tim

On 03.02.17, 19:16, "Jonathan Proulx"  wrote:

On Fri, Feb 03, 2017 at 04:34:20PM +0100, lebre.adr...@free.fr wrote:
:Hi, 
:
:I don't know whether there is already a concrete/effective way to identify 
overlaps between WGs. 
:But if not, one way could be to arrange one general session at each summit 
where all WG chairs could come and discuss the major actions that have been 
done during the past cycle and what the plans are for the next one.


That's a really good idea.  I think that would be a good use of the UC
Forum session.  In the past those had mostly been about what the UC is
and how it should be structured going forward.  With the recent bylaws
change and upcoming election that's pretty settled.

Having a (very) brief report back from working groups and teams
followed by cross group discussion could be a valuable way forward for
that session IMO.

-Jon

:
:Being involved in several WGs allows us to identify collaboration 
opportunities (done for instance between the NFV and Massively Distributed 
WGs/Teams during this cycle) but, to be honest, it is costly and sometimes 
simply not feasible to be involved in every action. 
:Offering the opportunity to get an up-to-date overview every 6 months can 
be valuable for all of us. 
:
:My two cents, 
:ad_rien_
:
:- Mail original -
:> De: "Jay Pipes" 
:> À: "Yih Leong Sun" , "Edgar Magana" 
,
:> openstack-operators@lists.openstack.org, 
user-commit...@lists.openstack.org
:> Cc: "JAMEY A MCCABE" , "ANDREW UKASICK" 
:> Envoyé: Vendredi 3 Février 2017 16:14:26
:> Objet: Re: [User-committee] [Openstack-operators] [openstack-dev] Large 
Contributing OpenStack Operators working
:> group?
:> 
:> Leong, thanks so much for responding. Comments/followup questions
:> inline.
:> 
:> On 02/02/2017 09:07 PM, Sun, Yih Leong wrote:
:> > LCOO was initiated by a group of large telcos who contribute to/use
:> > OpenStack, such as AT&T, NTT, Reliance Jio, Orange, etc [1].
:> 
:> ack.
:> 
:> > The co-chair has reached out to Product WG for collaboration (refer
:> > IRC meeting logs). The team is working on plans to
:> > structure/define LCOO use cases.
:> 
:> My question here is what makes the LCOO use cases different from,
:> say,
:> the Telco Operator working group's use cases? Or the Massively
:> Distributed working group's use cases? Or the Enterprise working
:> group's
:> use cases?
:> 
:> Is the difference that the LCOO's use cases are stories that are
:> important for the LCOO member companies?
:> 
:> > Use case requirements (while still work-in-progress) can span
:> > across multiple areas which might/might-not covered by existing
:> > Team/WG.
:> 
:> Understood. Is the plan of the LCOO to identify use cases that are
:> covered by other working groups, contribute resources to develop that
:> use case, but have that existing working group handle the product
:> management (spec/blueprint/communication/roadmap) stuff?
:> 
:> > I'm sure LCOO will reach out to various projects for collaboration,
:> > stay tuned...
:> 
:> My questions seem to have been taken as an attack on the LCOO. I was
:> hoping to avoid that. I'm sincerely hoping to see the outreach to
:> various projects and am eager to collaborate with developers and
:> operators from the LCOO companies. I'm just confused what the
:> relationship between the LCOO and the existing working groups is.
:> 
:> Best,
:> -jay
:> 
:> > [1] https://etherpad.openstack.org/p/LCOO_Participants
:> >
:> > Thanks!
:> >
:> > ---
:> > Yih Leong Sun, PhD
:> > Senior Software Cloud Architect | Open Source Technology Center |
:> > Intel Corporation
:> > yih.leong@intel.com | +1 503 264 0610
:> >
:> >
:> > -Original Message-
:> > From: Jay Pipes [mailto:jaypi...@gmail.com]
:> > Sent: Thursday, February 2, 2017 5:23 PM
:> > To: Edgar Magana ;
:> > openstack-operators@lists.openstack.org;
:> > user-commit...@lists.openstack.org
:> > Cc: MCCABE, JAMEY A ; UKASICK, ANDREW
:> > 
:> > Subject: Re: [Openstack-operators] [openstack-dev] Large
:> > Contributing OpenStack Operators working group?
:> >
:> > On 02/02/2017 05:02 PM, 

Re: [Openstack] [openstack-community] User Survey - Deadline Feb 20th

2017-02-01 Thread Tim Bell
I’m getting a 'too many redirects' error on the user survey link.

Tim

On 01.02.17, 20:21, "Tom Fifield"  wrote:

Hi all,

If you run OpenStack, please take a few minutes to respond to the latest 
User Survey or pass it along to your friends.

This is the ninth survey, but if you're new what you can expect are:
* a few basic questions—how and why do you work with OpenStack, what do 
you like about it, and where can we improve? (5-10mins)
* a more detailed set of questions that helps us understand your cloud 
so we can provide feedback to development teams (10mins)


The deadline to fill out the survey is February 20 at 23:59 UTC. Start 
the survey now:

https://www.openstack.org/user-survey

If you answered the survey last time, you won't need to start from 
scratch. Just log back in to update your deployment profile.


All of the information you provide is confidential to the Foundation and 
User Committee and will be aggregated anonymously unless you 
specifically allow us to make it public. We’ll share the survey findings 
first with those who fill out the user survey, and with the community at 
next OpenStack Summit, May 8-11 in Boston 
https://www.openstack.org/summit/boston-2017/



Thanks!

-Tom

___
Community mailing list
commun...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/community


___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack-operators] [openstack-community] User Survey - Deadline Feb 20th

2017-02-01 Thread Tim Bell
I’m getting a 'too many redirects' error on the user survey link.

Tim

On 01.02.17, 20:21, "Tom Fifield"  wrote:

Hi all,

If you run OpenStack, please take a few minutes to respond to the latest 
User Survey or pass it along to your friends.

This is the ninth survey, but if you're new what you can expect are:
* a few basic questions—how and why do you work with OpenStack, what do 
you like about it, and where can we improve? (5-10mins)
* a more detailed set of questions that helps us understand your cloud 
so we can provide feedback to development teams (10mins)


The deadline to fill out the survey is February 20 at 23:59 UTC. Start 
the survey now:

https://www.openstack.org/user-survey

If you answered the survey last time, you won't need to start from 
scratch. Just log back in to update your deployment profile.


All of the information you provide is confidential to the Foundation and 
User Committee and will be aggregated anonymously unless you 
specifically allow us to make it public. We’ll share the survey findings 
first with those who fill out the user survey, and with the community at 
next OpenStack Summit, May 8-11 in Boston 
https://www.openstack.org/summit/boston-2017/



Thanks!

-Tom

___
Community mailing list
commun...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/community


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Delegating quota management for all projects to a user without the admin role?

2017-01-27 Thread Tim Bell

I think this would merit a bug report to cinder (and probably Manila also). 
This would also help if someone else is searching. The fix may well take some 
time though.

Tim

From: "Edmund Rhudy (BLOOMBERG/ 120 PARK)" <erh...@bloomberg.net>
Reply-To: Edmund Rhudy <erh...@bloomberg.net>
Date: Friday, 27 January 2017 at 16:49
To: openstack-operators <openstack-operators@lists.openstack.org>, Tim Bell 
<tim.b...@cern.ch>
Subject: Re: [Openstack-operators] Delegating quota management for all projects 
to a user without the admin role?

I did some deep excavation and found out that Cinder is specifically the 
problem here. With "openstack quota show", it contacts both Nova and Cinder for 
quota information. Nova returns successfully, Cinder does not, so the whole 
command fails. Nova policy allows structuring things so that an individual user 
can manage quota for other users. Cinder, however, is rife with hardcoded 
checks for admin privileges at the top-level API. To make things even better, 
there's a second layer of hardcoded checks below that on the SQLAlchemy API 
that run the same admin privilege checks _again_.

With appropriate policy overrides set, I can see and manage Nova quotas just 
fine via novaclient (ignoring Cinder). Unfortunately, we need to be able to 
manage volume quotas too, so I'll have to find a different approach (either 
that or locally patch some pretty big chunks of Cinder code, which strangely 
enough I'd like to avoid).
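
(For anyone hitting the same wall: as an interim workaround, a sketch of the
novaclient path mentioned above, which never touches Cinder; flag names may
vary slightly between client versions:

  nova quota-show --tenant <project-id>
  nova quota-update --instances 50 --cores 200 <project-id>

The volume side still needs admin until the Cinder policy checks are fixed.)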

From: tim.b...@cern.ch
Subject: Re: [Openstack-operators] Delegating quota management for all projects 
to a user without the admin role?
I think you could do something with policy.json to define which operations a 
given role would have access to. We have used this to provide the centre 
operator with abilities such as stop/start. The technical details are described 
at https://openstack-in-production.blogspot.fr/2015/02/delegation-of-roles.html.

Tim

From: "Edmund Rhudy (BLOOMBERG/ 120 PARK)" <erh...@bloomberg.net>
Reply-To: Edmund Rhudy <erh...@bloomberg.net>
Date: Friday, 27 January 2017 at 00:36
To: openstack-operators <openstack-operators@lists.openstack.org>
Subject: [Openstack-operators] Delegating quota management for all projects to 
a user without the admin role?

I'm looking for a way to allow a user that does not have the admin role to be 
able to view and set quota (both Nova/Cinder) for all projects in an OpenStack 
cluster. For us, the boundary of a Keystone region is coterminous with an 
OpenStack cluster - we don't currently use any sort of federated Keystone.

Background: we are involved in a project (not the Keystone variety) for 
integrating Bloomberg's internal budget processes closely with purchasing 
compute resources. The idea of this system is that you will purchase some 
number of standardized compute units and then you can allocate them to projects 
in various OpenStack clusters as you wish. In order to do this, the tool needs 
to be able to see what Keystone projects you have access to, see how much quota 
that project has, and modify the quota settings for it appropriately.

For obvious reasons, I'd like to keep the API access for this tool to a 
minimum. I know that if all else fails, the goal can be accomplished by giving 
it admin access, so I'm keeping that in my back pocket, but I'd like to exhaust 
all reasonable options first.

Allowing the tool to see project memberships and get project information is 
easy. The quota part, however, is not. I'm not sure how to accomplish that 
delegation, or how to give the tool admin-equivalent access for a very small 
subset of the APIs. I'm unfamiliar with Keystone trusts and am not sure it 
would be appropriate here anyway, because it would seem like I'd need to 
delegate admin control to the role user in order to allow quota get/set. The 
only other thing I can think of, and it seems really off the wall to me, is to:

* create a local domain in Keystone
* create one user in this local domain per every Keystone project and add it to 
that project
* give this user a special role that allows it to set quotas for its project
* set up a massive many-to-one web of trusts where all of these users are 
delegated back to the tool's account

This solution seems very convoluted, and the number of trusts the tool will 
need to know about is going to grow linearly with the number of projects in 
Keystone.

The clusters in question are all running Liberty, with Keystone v3 available. 
Keystone is in a single-domain configuration, where the default domain is 
sourcing users from LDAP and all other information is stored in SQL.

Anyone have any thoughts, or am I SOL and just have to give this thing admin 
access and make sure the creds stay under lock and key?

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Delegating quota management for all projects to a user without the admin role?

2017-01-26 Thread Tim Bell
I think you could do something with policy.json to define which operations a 
given role would have access to. We have used this to provide the centre 
operator with abilities such as stop/start. The technical details are described 
at https://openstack-in-production.blogspot.fr/2015/02/delegation-of-roles.html.
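
For the quota case specifically, the idea is along these lines in Nova's
policy.json (a sketch only; "quota_manager" is a hypothetical custom role,
the exact rule names vary by release, and as noted elsewhere in this thread
Cinder's hard-coded admin checks are not covered by policy alone):

  "os_compute_api:os-quota-sets:show": "rule:admin_api or role:quota_manager",
  "os_compute_api:os-quota-sets:update": "rule:admin_api or role:quota_manager"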

Tim

From: "Edmund Rhudy (BLOOMBERG/ 120 PARK)" 
Reply-To: Edmund Rhudy 
Date: Friday, 27 January 2017 at 00:36
To: openstack-operators 
Subject: [Openstack-operators] Delegating quota management for all projects to 
a user without the admin role?

I'm looking for a way to allow a user that does not have the admin role to be 
able to view and set quota (both Nova/Cinder) for all projects in an OpenStack 
cluster. For us, the boundary of a Keystone region is coterminous with an 
OpenStack cluster - we don't currently use any sort of federated Keystone.

Background: we are involved in a project (not the Keystone variety) for 
integrating Bloomberg's internal budget processes closely with purchasing 
compute resources. The idea of this system is that you will purchase some 
number of standardized compute units and then you can allocate them to projects 
in various OpenStack clusters as you wish. In order to do this, the tool needs 
to be able to see what Keystone projects you have access to, see how much quota 
that project has, and modify the quota settings for it appropriately.

For obvious reasons, I'd like to keep the API access for this tool to a 
minimum. I know that if all else fails, the goal can be accomplished by giving 
it admin access, so I'm keeping that in my back pocket, but I'd like to exhaust 
all reasonable options first.

Allowing the tool to see project memberships and get project information is 
easy. The quota part, however, is not. I'm not sure how to accomplish that 
delegation, or how to give the tool admin-equivalent access for a very small 
subset of the APIs. I'm unfamiliar with Keystone trusts and am not sure it 
would be appropriate here anyway, because it would seem like I'd need to 
delegate admin control to the role user in order to allow quota get/set. The 
only other thing I can think of, and it seems really off the wall to me, is to:

* create a local domain in Keystone
* create one user in this local domain per every Keystone project and add it to 
that project
* give this user a special role that allows it to set quotas for its project
* set up a massive many-to-one web of trusts where all of these users are 
delegated back to the tool's account

This solution seems very convoluted, and the number of trusts the tool will 
need to know about is going to grow linearly with the number of projects in 
Keystone.

The clusters in question are all running Liberty, with Keystone v3 available. 
Keystone is in a single-domain configuration, where the default domain is 
sourcing users from LDAP and all other information is stored in SQL.

Anyone have any thoughts, or am I SOL and just have to give this thing admin 
access and make sure the creds stay under lock and key?
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] What would you like in Pike?

2017-01-19 Thread Tim Bell
Blair,

Sure.

I’d also be happy for a second volunteer to share it with so that we get a 
rounded perspective; it is important that we don’t get too influenced by one 
organisation’s use cases.

Tim

From: Blair Bethwaite <blair.bethwa...@gmail.com>
Date: Thursday, 19 January 2017 at 20:24
To: Tim Bell <tim.b...@cern.ch>
Cc: "m...@mattjarvis.org.uk" <m...@mattjarvis.org.uk>, openstack-operators 
<openstack-operators@lists.openstack.org>
Subject: Re: [Openstack-operators] What would you like in Pike?

Hi Tim,

We did wonder in last week's meeting whether quota management and nested 
project support (particularly which flows are most important) would be a good 
session for the Boston Forum...? Would you be willing to lead such a discussion?

Cheers,

On 19 January 2017 at 19:59, Tim Bell 
<tim.b...@cern.ch<mailto:tim.b...@cern.ch>> wrote:

On 18 Jan 2017, at 23:20, Matt Jarvis 
<m...@mattjarvis.org.uk<mailto:m...@mattjarvis.org.uk>> wrote:

I think one of the problems we're seeing now is that a lot of operators have 
actually already scratched some of these missing functionality itches like 
quota management and project nesting by handling those scenarios in external 
management systems. I know we certainly did at DataCentred. That probably means 
these things don't surface enough to upstream as requirements, whereas for new 
users who aren't necessarily in the loop with community communication they may 
well be creating friction to adoption.


For the quota management, I think the first discussions were in the Hong Kong 
summit around the Boson project and this has moved backwards and forwards 
between services, libraries and improving the code. While the user need is 
relatively simple to state, these are not simple problems to solve so it is 
often difficult for the items to get to the priority lists.

One of the difficulties we have found is that we could get staff for a project 
such as quota management for a short period (e.g. 1 year). However, from the 
initial specification to code acceptance is often an extended period, so these 
sorts of changes can get stalled, yet the people contributing need to show 
results for their work (such as a thesis).

From the scientific working group discussions, the quota and nesting 
discussions have come up regularly so the requirements are still there.

Tim


On Wed, Jan 18, 2017 at 10:06 PM, Sam Morrison 
<sorri...@gmail.com<mailto:sorri...@gmail.com>> wrote:
I would love it if all the projects' policy.json was actually usable. Too many 
times policy.json isn’t the only place where authorization happens, with lots 
of hard-coded is_admin checks etc.

Just the ability to allow a certain role to do a certain thing would be amazing. 
As it stands, it is really hard to have read-only users to generate reports 
with, so that we can show our funders how much people use our openstack cloud.

Cheers,
Sam
(non-enterprise)



On 18 Jan 2017, at 6:10 am, Melvin Hillsman 
<mrhills...@gmail.com<mailto:mrhills...@gmail.com>> wrote:

Well said. As a consequence of this thread being on the mailing list, I hope 
that we can get all operators, end-users, and app-developers to respond. If you 
are aware of folks who do not fall under the "enterprise" label please 
encourage them directly to respond; I would encourage everyone to do the same.

On Tue, Jan 17, 2017 at 11:52 AM, Silence Dogood 
<m...@nycresistor.com<mailto:m...@nycresistor.com>> wrote:
I can see a huge problem with your contributing operators... all of them are 
enterprise.

enterprise needs are radically different from small to medium deployers who 
openstack has traditionally failed to work well for.

On Tue, Jan 17, 2017 at 12:47 PM, Piet Kruithof 
<pkruitho...@gmail.com<mailto:pkruitho...@gmail.com>> wrote:
Sorry for the late reply, but wanted to add a few things.

OpenStack UX did suggest to the foundation that the community needs a second 
survey that focuses exclusively on operators.  The rationale was that the user 
survey is primarily focused on marketing data and there isn't really a ton of 
space for additional questions that focuses exclusively on operators. We also 
recommended a second survey called a MaxDiff study that enabled operators to 
identify areas of improvement and also rate them in order of importance 
including distance.

There is also an etherpad that asked operators three priorities for OpenStack:

https://etherpad.openstack.org/p/mitaka-openstackux-enterprise-goals

It was distributed about a year ago, so I'm not sure how much of it is still 
relevant.  The list does include responses from folks at TWC, Walmart, Pacific 
Northwest Labs, BestBuy, Comcast, NTTi3 and the US government. It might be a 
good place for the group to add their own improvements as well as "+" other 
peoples suggestions.

There is also a list of studies that have been conducted with operators on 
behalf of the community. The 

Re: [Openstack-operators] What would you like in Pike?

2017-01-19 Thread Tim Bell
ULTS: SEARCHLIGHT/HORIZON INTEGRATION
Why this research matters:
The Searchlight plug-in for Horizon aims to provide a consistent search API 
across OpenStack resources. To validate its suitability and ease of use, we 
evaluated it with cloud operators who use Horizon in their role.

Study design:
Five operators performed tasks that explored Searchlight’s filters, full-text 
capability, and multi-term search.

https://docs.google.com/presentation/d/1TfF2sm98Iha-bNwBJrCTCp6k49zde1Z8I9Qthx1moIM/edit?usp=sharing



___
CLOUD OPERATOR INTERVIEWS: QUOTA MANAGEMENT AT PRODUCTION SCALE
Why this research matters:
The study was initiated following operator feedback identifying quotas as a 
challenge to manage at scale.

Study design:
One-on-one interviews with cloud operators sought to understand their methods 
for managing quotas at production scale.

https://docs.google.com/presentation/d/1J6-8MwUGGOwy6-A_w1EaQcZQ1Bq2YWeB-kw4vCFxbwM/edit



___
CLOUD OPERATOR INTERVIEWS: INFORMATION NEEDS
Why this research matters:
Documentation has been consistently identified as an issue by operators during 
the user survey.  However, we wanted to understand the entire workflow 
associated with identifying and consuming information to resolve issues 
associated with operating production clouds.

Study design:
This research consisted of interviews with seven cloud operators from different 
industries with varying levels of experience to determine how they find 
solutions to problems that arise.

https://docs.google.com/presentation/d/1LKxQx4Or4qOBwPQbt4jAZncGCLlk_Ez8ZRB_bGp19JU/edit?usp=sharing



___
OPERATOR INTERVIEWS: DEPLOYMENT AT PRODUCTION
Why this research matters:
Deployment has been consistently identified as an issue by operators during the 
user survey and impacts both adoption and operations of OpenStack.  We wanted 
to do a deep dive with operators to identify the specific issues impacting 
deployment.

Study design:
A series of 1:1 interviews that included discussions around 
organizations, tools, workflows and pain points associated with deploying 
OpenStack.

https://docs.google.com/presentation/d/14UerMR4HrXKP_0NE_C-WJ16YQFzgetL1Tmym9FNFzpY/edit?usp=sharing


___
OPERATOR USABILITY: OPENSTACKCLIENT
Why this research matters:
Consistency across projects has been identified as an issue in the user survey.

Study design:
This usability study, conducted at the OpenStack Austin Summit, observed 10 
operators as they attempted to perform standard tasks in the OpenStack client.

https://docs.google.com/presentation/d/1cBUJuLL9s7JQppVlDBBJMrNNpfqwdkHyfZFuwY6lNgM/edit#slide=id.g1a8df2eaf2_1_0










On Tue, Jan 17, 2017 at 10:07 AM, Jonathan Proulx 
<j...@csail.mit.edu<mailto:j...@csail.mit.edu>> wrote:

What Tim said :)

my ordering:

1) Preemptible Instances -- this would be huge and life-changing; I'd
   give up any other improvements to get this.

2) Deeper utilization of nested projects -- mostly we find ways to
   mange with out this but it would be great to have.

   A) to allow research groups (our internal fiscal/administrative
   divisions) to sub-divide quota allocations according to their own
   priorities on a self-serve basis (provided proper RBAC configs)
   B) to answer show-back questions more easily.  Currently with flat
   projects individual research groups have multiple openstack
   projects; by convention we usually manage to aggregate
   them in reporting, but being able to show usage by a parent and
   all its children would be very useful

3) Quota improvements -- this is important but we've learned to deal
   with it

-Jon

On Sat, Jan 14, 2017 at 10:10:40AM +, Tim Bell wrote:
:There are a couple of items which have not been able to make it to the top 
priority for recent releases which would greatly simplify our day to day work 
with the users and make the cloud more flexible. The background use cases are 
described in 
https://openstack-in-production.blogspot.fr/2016/04/resource-management-at-cern.html
:
:
:-  Quota management improvements
:
:o  Manual interventions are often required to sync the current usage with 
the OpenStack view
:
:o  Nested projects are now in Keystone but there is limited support in other 
projects for the end user benefit, such as delegation of quota for sub-projects
:
:-  Nova pre-emptible instances  
(https://review.openstack.org/#/c/104883/) to give spot market functionality
:
:o  We want to run our cloud at near 100% utilisation but this requires rapid 
ejection of lower priority VMs
:
:That having been said, I also fully support key priorities currently being 
worked on such as cells v2 and placement.
:
:Tim
:
:From: Melvin Hillsman <mrhills...@gmail.com<mailto:mrhills...@gmail.com>>
:Date: Friday, 13 January 2017 at 02:30
:To: openstack-operators 
<openstack-operators@lists.openstack.org<mailto:openstack-operators@lists.openstack.org>>
:Subject: [Openstack-operators] What would you l

Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-17 Thread Tim Bell

On 17 Jan 2017, at 11:28, Maish Saidel-Keesing 
<mais...@maishsk.com<mailto:mais...@maishsk.com>> wrote:


Please see inline.

On 17/01/17 9:36, Tim Bell wrote:

...
Are we really talking about Barbican or has the conversation drifted towards 
Big Tent concerns?

Perhaps we can flip this thread on its head and more positively discuss what 
can be done to improve Barbican, or ways that we can collaboratively address 
any issues. I’m almost wondering if some opinions about Barbican are even 
coming from its heavy users, or users who’ve placed much time into 
developing/improving Barbican? If not, let’s collectively change that.


When we started deploying Magnum, there was a pre-req for Barbican to store the 
container engine secrets. We were not so enthusiastic since there was no puppet 
configuration or RPM packaging.  However, with a few upstream contributions, 
these are now all resolved.

The operator documentation has improved, HA deployment is working and the 
unified openstack client support is now available in the latest versions.
Tim - where exactly is this documentation?

We followed the doc for installation at 
http://docs.openstack.org/project-install-guide/newton/, specifically for our 
environment (RDO/CentOS) 
http://docs.openstack.org/project-install-guide/key-manager/newton/
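
As a quick smoke test once it is up, the unified client support mentioned above
can be exercised directly (a sketch; it assumes the python-barbicanclient plugin
for openstackclient is installed):

  openstack secret store --name smoke-test --payload 'not-a-real-secret'
  openstack secret list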

Tim


These extra parts may not be a direct deliverable of the code contributions 
themselves, but they make a major difference to deployability, which Barbican 
now satisfies. Big tent projects should aim to cover these areas too if they 
wish to thrive in the community.

Tim


Thanks,
Kevin


Brandon B. Jozsa

--
Best Regards,
Maish Saidel-Keesing
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org<mailto:openstack-dev-requ...@lists.openstack.org>?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Tim Bell

On 17 Jan 2017, at 01:19, Brandon B. Jozsa 
> wrote:

Inline


On January 16, 2017 at 7:04:00 PM, Fox, Kevin M 
(kevin@pnnl.gov) wrote:

I'm not stating that the big tent should be abolished and we go back to the way 
things were. But I also know the status quo is not working either. How do we 
fix this? Anyone have any thoughts?


Are we really talking about Barbican or has the conversation drifted towards 
Big Tent concerns?

Perhaps we can flip this thread on its head and more positively discuss what 
can be done to improve Barbican, or ways that we can collaboratively address 
any issues. I’m almost wondering if some opinions about Barbican are even 
coming from its heavy users, or users who’ve placed much time into 
developing/improving Barbican? If not, let’s collectively change that.


When we started deploying Magnum, there was a pre-req for Barbican to store the 
container engine secrets. We were not so enthusiastic since there was no puppet 
configuration or RPM packaging.  However, with a few upstream contributions, 
these are now all resolved.

The operator documentation has improved, HA deployment is working and the 
unified openstack client support is now available in the latest versions.

These extra parts may not be a direct deliverable of the code contributions 
themselves, but they make a major difference to deployability, which Barbican 
now satisfies. Big tent projects should aim to cover these areas too if they 
wish to thrive in the community.

Tim


Thanks,
Kevin


Brandon B. Jozsa

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

