Re: [openstack-dev] [nova] Can we deprecate the server backup API please?

2018-11-17 Thread Tim Bell
Mistral can schedule the executions and then run a workflow to do the server 
image create.

The CERN implementation of this is described at 
http://openstack-in-production.blogspot.com/2017/08/scheduled-snapshots.html 
with the implementation at 
https://gitlab.cern.ch/cloud-infrastructure/mistral-workflows. It is pretty 
generic but I don't know if anyone has tried to run it elsewhere.

A few features:

- Schedule can be chosen
- Logs are visible in Horizon
- Option to shut down instances before the snapshot and restart them after
- Mails can be sent on success and/or failure
- Rotation of backups to keep a maximum number of copies

There are equivalent restore and clone functions in the workflow also.

Tim
-Original Message-
From: Jay Pipes 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, 16 November 2018 at 20:58
To: "openstack-dev@lists.openstack.org" 
Subject: [openstack-dev] [nova] Can we deprecate the server backup API  please?

The server backup API was added 8 years ago. It has Nova basically 
implementing a poor-man's cron for some unknown reason (probably because 
the original RAX Cloud Servers API had some similar or identical 
functionality, who knows...).

Can we deprecate this functionality please? It's confusing for end users 
to have both an `openstack server image create` and an `openstack server 
backup create` command where the latter does virtually the same thing as 
the former, except that it also sets up some wacky cron-like thing and 
deletes images after some number of rotations.

If a cloud provider wants to offer some backup thing as a service, they 
could implement this functionality separately IMHO, store the user's 
requested cronjob state in their own system (or in glance, which is kind 
of how the existing Nova createBackup functionality works), and run a 
simple cronjob executor that runs `openstack server image create` and 
`openstack image delete` as needed.
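
As a rough illustration of that external executor idea, here is a minimal 
sketch using openstacksdk; the cloud name, image-name prefix and rotation 
count are made up for the example, and this is not an existing service:

    # Minimal sketch of an external "cronjob executor", using openstacksdk.
    import datetime
    import openstack

    ROTATION = 7  # keep at most this many backup images per server

    def backup_and_rotate(cloud_name, server_name):
        conn = openstack.connect(cloud=cloud_name)
        server = conn.compute.find_server(server_name, ignore_missing=False)

        # the `openstack server image create` part
        stamp = datetime.datetime.utcnow().strftime('%Y%m%d%H%M%S')
        prefix = 'backup-%s-' % server.name
        conn.compute.create_server_image(server, name=prefix + stamp)

        # the `openstack image delete` part: drop the oldest copies
        backups = sorted(
            (i for i in conn.image.images()
             if i.name and i.name.startswith(prefix)),
            key=lambda i: i.created_at)
        for old in backups[:-ROTATION]:
            conn.image.delete_image(old)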

This is a perfect example of an API that should never have been added to 
the Compute API, in my opinion, and removing it would be a step in the 
right direction if we're going to get serious about cleaning the Compute 
API up.

Thoughts?
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ClusterLabs Developers] [HA] future of OpenStack OCF resource agents (was: resource-agents v4.2.0)

2018-10-24 Thread Tim Bell
Adam,

Personally, I would prefer the approach where the OpenStack resource agents are 
part of the repository in which they are used. This is also the approach taken 
in other open source projects such as Kubernetes and avoids the inconsistency 
where, for example, Azure resource agents are in the Cluster Labs repository 
but OpenStack ones are not. This can mean that people miss that there is 
OpenStack integration available.

This does not reflect, in any way, the excellent efforts and results made so 
far. I don't think it would negate the possibility of including testing in the 
OpenStack gate, since there are other examples where code is pulled in from 
other sources.

Tim

-Original Message-
From: Adam Spiers 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 24 October 2018 at 14:29
To: "develop...@clusterlabs.org" , openstack-dev 
mailing list 
Subject: Re: [openstack-dev] [ClusterLabs Developers] [HA] future of OpenStack 
OCF resource agents (was: resource-agents v4.2.0)

[cross-posting to openstack-dev]

Oyvind Albrigtsen  wrote:
>ClusterLabs is happy to announce resource-agents v4.2.0.
>Source code is available at:
>https://github.com/ClusterLabs/resource-agents/releases/tag/v4.2.0
>
>The most significant enhancements in this release are:
>- new resource agents:

[snipped]

> - openstack-cinder-volume
> - openstack-floating-ip
> - openstack-info

That's an interesting development.

By popular demand from the community, in Oct 2015 the canonical
location for OpenStack-specific resource agents became:

https://git.openstack.org/cgit/openstack/openstack-resource-agents/

as announced here:


http://lists.openstack.org/pipermail/openstack-dev/2015-October/077601.html

However, I have to admit I have done a terrible job of maintaining it
since then.  Since OpenStack RAs are now beginning to creep into
ClusterLabs/resource-agents, now seems like a good time to revisit this and
decide on a coherent strategy.  I'm not religious either way, although I
do have a fairly strong preference for picking one strategy which both
ClusterLabs and OpenStack communities can align on, so that all
OpenStack RAs are in a single place.

I'll kick the bikeshedding off:

Pros of hosting OpenStack RAs on ClusterLabs
--------------------------------------------

- ClusterLabs developers get the GitHub code review and Travis CI
  experience they expect.

- Receive all the same maintenance attention as other RAs - any
  changes to coding style, utility libraries, Pacemaker APIs,
  refactorings etc. which apply to all RAs would automatically
  get applied to the OpenStack RAs too.

- Documentation gets built in the same way as other RAs.

- Unit tests get run in the same way as other RAs (although does
  ocf-tester even get run by the CI currently?)

- Doesn't get maintained by me ;-)

Pros of hosting OpenStack RAs on OpenStack infrastructure
----------------------------------------------------------

- OpenStack developers get the Gerrit code review and Zuul CI
  experience they expect.

- Releases and stable/foo branches could be made to align with
  OpenStack releases (..., Queens, Rocky, Stein, T(rains?)...)

- Automated testing could in the future spin up a full cloud
  and do integration tests by simulating failure scenarios,
  as discussed here:

  https://storyboard.openstack.org/#!/story/2002129

  That said, that is still very much work in progress, so
  it remains to be seen when that could come to fruition.

No doubt I've missed some pros and cons here.  At this point
personally I'm slightly leaning towards keeping them in the
openstack-resource-agents - but that's assuming I can either hand off
maintainership to someone with more time, or somehow find the time
myself to do a better job.

What does everyone else think?  All opinions are very welcome,
obviously.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Forum Schedule - Seeking Community Review

2018-10-16 Thread Tim Bell
Jimmy,

While it's not a clash within the forum, there are two sessions for Ironic 
scheduled at the same time on Tuesday at 14h20, each of which has Julia as a 
speaker.

Tim

-Original Message-
From: Jimmy McArthur 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, 15 October 2018 at 22:04
To: "OpenStack Development Mailing List (not for usage questions)" 
, "openstack-operat...@lists.openstack.org" 
, "commun...@lists.openstack.org" 

Subject: [openstack-dev] Forum Schedule - Seeking Community Review

Hi -

The Forum schedule is now up 
(https://www.openstack.org/summit/berlin-2018/summit-schedule/#track=262).  
If you see a glaring content conflict within the Forum itself, please 
let me know.

You can also view the Full Schedule in the attached PDF if that makes 
life easier...

NOTE: BoFs and WGs are still not all up on the schedule.  No need to let 
us know :)

Cheers,
Jimmy


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-27 Thread Tim Bell

Lance,

The comment regarding ‘readers’ is more to explain that the distinction between 
‘admin’ and ‘user’ commands is gradually diminishing, whereas OSC has been 
prioritising ‘user’ commands.

As an example, we give the CERN security team view-only access to many parts of 
the cloud. This allows them to perform their investigations independently. 
Thus, many commands which would be, by default, admin only are also available 
to roles such as ‘reader’ (e.g. list, show, … of internal resources or of 
projects where they are not members).

I don’t think there are any implications for Keystone (and the readers role is 
a nice improvement to replace the previous manual policy definitions); it is 
more a question of which subcommands we should aim to support in OSC.

The *-manage commands, such as nova-manage, I would consider out of scope for 
OSC. Only admins would be migrating between versions or DB schemas.

Tim

From: Lance Bragstad 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 27 September 2018 at 15:30
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T 
series


On Wed, Sep 26, 2018 at 1:56 PM Tim Bell <tim.b...@cern.ch> wrote:

Doug,

Thanks for raising this. I'd like to highlight the goal "Finish moving legacy 
python-*client CLIs to python-openstackclient" from the etherpad and propose 
this for a T/U series goal.

To give it some context and the motivation:

At CERN, we have more than 3000 users of the OpenStack cloud. We write 
extensive end-user-facing documentation which explains how to use OpenStack 
along with CERN-specific features (such as workflows for requesting 
projects/quotas/etc.).

One regular problem we come across is that the end user experience is 
inconsistent. In some cases, we find projects which are not covered by the 
unified OpenStack client (e.g. Manila). In other cases, there are subsets of 
the functionality which require the native project client.

I would strongly support a goal which targets

- All new projects should have the end user facing functionality fully exposed 
via the unified client
- Existing projects should aim to close the gap within 'N' cycles (N to be 
defined)
- Many administrator actions would also benefit from integration (reader roles 
are end users too, so list and show need to be covered as well)
- Users should be able to use a single openrc for all interactions with the 
cloud (e.g. not switch between passwords for some CLIs and Kerberos for OSC)

Sorry to back up the conversation a bit, but does reader role require work in 
the clients? Last release we incorporated three roles by default during 
keystone's installation process [0]. Is the definition in the specification 
what you mean by reader role, or am I on a different page?

[0] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/define-default-roles.html#default-roles

The end user perception of a solution will be greatly enhanced by a single 
command line tool with consistent syntax and authentication framework.

It may be a multi-release goal, but it would really benefit cloud consumers, 
and I feel that goals should include this audience too.

Tim

-Original Message-
From: Doug Hellmann <d...@doughellmann.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Wednesday, 26 September 2018 at 18:00
To: openstack-dev <openstack-dev@lists.openstack.org>, openstack-operators 
<openstack-operat...@lists.openstack.org>, openstack-sigs 
<openstack-s...@lists.openstack.org>
Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T 
series

It's time to start thinking about community-wide goals for the T series.

We use community-wide goals to achieve visible common changes, push for
basic levels of consistency and user experience, and efficiently improve
certain areas where technical debt payments have become too high -
across all OpenStack projects. Community input is important to ensure
that the TC makes good decisions about the goals. We need to consider
the timing, cycle length, priority, and feasibility of the suggested
goals.

If you are interested in proposing a goal, please make sure that before
the summit it is described in the tracking etherpad [1] and that you
have started a mailing list thread on the openstack-dev list about the
proposal so that everyone in the forum session [2] has an opportunity to
consider the details.  The forum session is only one step in the
selection process. See [3] for more details.

Doug

[1] https://etherpad.openstack.org/p/community-goals
[2] https://www.openstack.org/summit/berlin-2018/vote-for-

Re: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-26 Thread Tim Bell

Doug,

Thanks for raising this. I'd like to highlight the goal "Finish moving legacy 
python-*client CLIs to python-openstackclient" from the etherpad and propose 
this for a T/U series goal.

To give it some context and the motivation:

At CERN, we have more than 3000 users of the OpenStack cloud. We write 
extensive end-user-facing documentation which explains how to use OpenStack 
along with CERN-specific features (such as workflows for requesting 
projects/quotas/etc.).

One regular problem we come across is that the end user experience is 
inconsistent. In some cases, we find projects which are not covered by the 
unified OpenStack client (e.g. Manila). In other cases, there are subsets of 
the functionality which require the native project client.

I would strongly support a goal which targets

- All new projects should have the end user facing functionality fully exposed 
via the unified client
- Existing projects should aim to close the gap within 'N' cycles (N to be 
defined)
- Many administrator actions would also benefit from integration (reader roles 
are end users too, so list and show need to be covered as well)
- Users should be able to use a single openrc for all interactions with the 
cloud (e.g. not switch between passwords for some CLIs and Kerberos for OSC)

The end user perception of a solution will be greatly enhanced by a single 
command line tool with consistent syntax and authentication framework.

It may be a multi-release goal, but it would really benefit cloud consumers, 
and I feel that goals should include this audience too.

Tim

-Original Message-
From: Doug Hellmann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 26 September 2018 at 18:00
To: openstack-dev , openstack-operators 
, openstack-sigs 

Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T 
series

It's time to start thinking about community-wide goals for the T series.

We use community-wide goals to achieve visible common changes, push for
basic levels of consistency and user experience, and efficiently improve
certain areas where technical debt payments have become too high -
across all OpenStack projects. Community input is important to ensure
that the TC makes good decisions about the goals. We need to consider
the timing, cycle length, priority, and feasibility of the suggested
goals.

If you are interested in proposing a goal, please make sure that before
the summit it is described in the tracking etherpad [1] and that you
have started a mailing list thread on the openstack-dev list about the
proposal so that everyone in the forum session [2] has an opportunity to
consider the details.  The forum session is only one step in the
selection process. See [3] for more details.

Doug

[1] https://etherpad.openstack.org/p/community-goals
[2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814
[3] https://governance.openstack.org/tc/goals/index.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on stop/suspend

2018-09-15 Thread Tim Bell
Found the previous discussion at 
http://lists.openstack.org/pipermail/openstack-operators/2016-August/011321.html
 from 2016.

Tim

-Original Message-
From: Tim Bell 
Date: Saturday, 15 September 2018 at 14:38
To: "OpenStack Development Mailing List (not for usage questions)" 
, "openstack-operat...@lists.openstack.org" 
, "openstack-s...@lists.openstack.org" 

Subject: Re: [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on 
stop/suspend

One extra user motivation that came up during past forums was to have a 
different quota for shelved instances (or remove them from the project quota 
altogether). Currently, I believe that a shelved instance still counts 
towards the instances/cores quota, so the reduction in usage by the user is 
not reflected in the quotas.

One discussion at the time was that the user is still reserving IPs so it 
is not zero resource usage and the instances still occupy storage.

(We disabled shelving for other reasons so I'm not able to check easily)

Tim

-Original Message-
From: Matt Riedemann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Saturday, 15 September 2018 at 01:27
To: "OpenStack Development Mailing List (not for usage questions)" 
, "openstack-operat...@lists.openstack.org" 
, "openstack-s...@lists.openstack.org" 

Subject: [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on   
stop/suspend

tl;dr: I'm proposing a new parameter to the server stop (and suspend?) 
APIs to control if nova shelve offloads the server.

Long form: This came up during the public cloud WG session this week 
based on a couple of feature requests [1][2]. When a user 
stops/suspends 
a server, the hypervisor frees up resources on the host but nova 
continues to track those resources as being used on the host so the 
scheduler can't put more servers there. What operators would like to do 
is that when a user stops a server, nova actually shelve offloads the 
server from the host so they can schedule new servers on that host. On 
start/resume of the server, nova would find a new host for the server. 
This also came up in Vancouver where operators would like to free up 
limited expensive resources like GPUs when the server is stopped. This 
is also the behavior in AWS.

The problem with shelve is that it's great for operators but users just 
don't use it, maybe because they don't know what it is and stop works 
just fine. So how do you get users to opt into shelving their server?

I've proposed a high-level blueprint [3] where we'd add a new 
(microversioned) parameter to the stop API with three options:

* auto
* offload
* retain

Naming is obviously up for debate. The point is we would default to 
auto 
and if auto is used, the API checks a config option to determine the 
behavior - offload or retain. By default we would retain for backward 
compatibility. For users that don't care, they get auto and it's fine. 
For users that do care, they either (1) don't opt into the microversion 
or (2) specify the specific behavior they want. I don't think we need 
to 
expose what the cloud's configuration for auto is because again, if you 
don't care then it doesn't matter and if you do care, you can opt out 
of 
this.

"How do we get users to use the new microversion?" I'm glad you asked.

Well, nova CLI defaults to using the latest available microversion 
negotiated between the client and the server, so by default, anyone 
using "nova stop" would get the 'auto' behavior (assuming the client 
and 
server are new enough to support it). Long-term, openstack client plans 
on doing the same version negotiation.

As for the server status changes, if the server is stopped and shelved, 
the status would be 'SHELVED_OFFLOADED' rather than 'SHUTDOWN'. I 
believe this is fine especially if a user is not being specific and 
doesn't care about the actual backend behavior. On start, the API would 
allow starting (unshelving) shelved offloaded (rather than just 
stopped) 
instances. Trying to hide shelved servers as stopped in the API would 
be 
overly complex IMO so I don't want to try and mask that.

It is possible that a user that stopped and shelved their server could 
hit a NoValidHost when starting (unshelving) the server, but that 
really 
shouldn't happen in a cloud that's configuring nova to shelve by 
default 
because if they are doing this, 

Re: [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on stop/suspend

2018-09-15 Thread Tim Bell
One extra user motivation that came up during past forums was to have a 
different quota for shelved instances (or remove them from the project quota 
altogether). Currently, I believe that a shelved instance still counts 
towards the instances/cores quota, so the reduction in usage by the user is 
not reflected in the quotas.

One discussion at the time was that the user is still reserving IPs so it is 
not zero resource usage and the instances still occupy storage.

(We disabled shelving for other reasons so I'm not able to check easily)

Tim

-Original Message-
From: Matt Riedemann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Saturday, 15 September 2018 at 01:27
To: "OpenStack Development Mailing List (not for usage questions)" 
, "openstack-operat...@lists.openstack.org" 
, "openstack-s...@lists.openstack.org" 

Subject: [openstack-dev] [nova][publiccloud-wg] Proposal to shelve on   
stop/suspend

tl;dr: I'm proposing a new parameter to the server stop (and suspend?) 
APIs to control if nova shelve offloads the server.

Long form: This came up during the public cloud WG session this week 
based on a couple of feature requests [1][2]. When a user stops/suspends 
a server, the hypervisor frees up resources on the host but nova 
continues to track those resources as being used on the host so the 
scheduler can't put more servers there. What operators would like to do 
is that when a user stops a server, nova actually shelve offloads the 
server from the host so they can schedule new servers on that host. On 
start/resume of the server, nova would find a new host for the server. 
This also came up in Vancouver where operators would like to free up 
limited expensive resources like GPUs when the server is stopped. This 
is also the behavior in AWS.

The problem with shelve is that it's great for operators but users just 
don't use it, maybe because they don't know what it is and stop works 
just fine. So how do you get users to opt into shelving their server?

I've proposed a high-level blueprint [3] where we'd add a new 
(microversioned) parameter to the stop API with three options:

* auto
* offload
* retain

Naming is obviously up for debate. The point is we would default to auto 
and if auto is used, the API checks a config option to determine the 
behavior - offload or retain. By default we would retain for backward 
compatibility. For users that don't care, they get auto and it's fine. 
For users that do care, they either (1) don't opt into the microversion 
or (2) specify the specific behavior they want. I don't think we need to 
expose what the cloud's configuration for auto is because again, if you 
don't care then it doesn't matter and if you do care, you can opt out of 
this.
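
As a rough sketch of the proposed dispatch (illustrative only; the option 
and helper names are invented here, this is not actual nova code):

    CLOUD_DEFAULT = 'retain'  # what the operator configures 'auto' to mean

    def shelve_offload(server):
        print('shelve-offloading %s: host resources are freed for the '
              'scheduler' % server)

    def power_off(server):
        print('powering off %s: current behaviour, resources stay claimed '
              'on the host' % server)

    def stop_server(server, shelve_policy='auto'):
        policy = CLOUD_DEFAULT if shelve_policy == 'auto' else shelve_policy
        if policy == 'offload':
            shelve_offload(server)
        else:                   # 'retain'
            power_off(server)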

"How do we get users to use the new microversion?" I'm glad you asked.

Well, nova CLI defaults to using the latest available microversion 
negotiated between the client and the server, so by default, anyone 
using "nova stop" would get the 'auto' behavior (assuming the client and 
server are new enough to support it). Long-term, openstack client plans 
on doing the same version negotiation.

As for the server status changes, if the server is stopped and shelved, 
the status would be 'SHELVED_OFFLOADED' rather than 'SHUTDOWN'. I 
believe this is fine especially if a user is not being specific and 
doesn't care about the actual backend behavior. On start, the API would 
allow starting (unshelving) shelved offloaded (rather than just stopped) 
instances. Trying to hide shelved servers as stopped in the API would be 
overly complex IMO so I don't want to try and mask that.

It is possible that a user that stopped and shelved their server could 
hit a NoValidHost when starting (unshelving) the server, but that really 
shouldn't happen in a cloud that's configuring nova to shelve by default 
because if they are doing this, their SLA needs to reflect they have the 
capacity to unshelve the server. If you can't honor that SLA, don't 
shelve by default.

So, what are the general feelings on this before I go off and start 
writing up a spec?

[1] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791681
[2] https://bugs.launchpad.net/openstack-publiccloud-wg/+bug/1791679
[3] https://blueprints.launchpad.net/nova/+spec/shelve-on-stop

-- 

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [all] Consistent policy names

2018-09-12 Thread Tim Bell
So +1

Tim

From: Lance Bragstad 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 12 September 2018 at 20:43
To: "OpenStack Development Mailing List (not for usage questions)" 
, OpenStack Operators 

Subject: [openstack-dev] [all] Consistent policy names

The topic of having consistent policy names has popped up a few times this 
week. Ultimately, if we are to move forward with this, we'll need a convention. 
To help with that a little bit I started an etherpad [0] that includes links to 
policy references, basic conventions *within* that service, and some examples 
of each. I got through quite a few projects this morning, but there are still a 
couple left.

The idea is to look at what we do today and see what conventions we can come up 
with to move towards, which should also help us determine how much each 
convention is going to impact services (e.g. picking a convention that will 
cause 70% of services to rename policies).

Please have a look and we can discuss conventions in this thread. If we come to 
agreement, I'll start working on some documentation in oslo.policy so that it's 
somewhat official before starting to rename policies.

[0] https://etherpad.openstack.org/p/consistent-policy-names
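
As an illustration of the kind of convention under discussion, a 
"service:resource:action"-style name could be registered through 
oslo.policy's DocumentedRuleDefault as below; the name and check string 
shown are just one possibility, since the convention itself is what this 
thread is trying to settle:

    from oslo_policy import policy

    rules = [
        policy.DocumentedRuleDefault(
            name='compute:servers:show',
            check_str='role:reader or rule:admin_or_owner',
            description='Show details for a server.',
            operations=[{'path': '/servers/{server_id}', 'method': 'GET'}],
        ),
    ]
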
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][puppet] Hello all, puppet modules

2018-05-31 Thread Tim Bell
CERN uses these puppet modules too and contributes any missing functionality we 
need upstream.

Tim

From: Alex Schultz 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 31 May 2018 at 16:24
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [tripleo][puppet] Hello all, puppet modules



On Wed, May 30, 2018 at 3:18 PM, Remo Mattei <r...@rm.ht> wrote:
Hello all,
I have talked to several people about this and I would love to get this 
finalized once and for all. I have checked the OpenStack puppet modules, which 
are mostly developed by the Red Hat team. As of right now, TripleO is using a 
combo of Ansible and puppet to deploy, but in the next couple of releases the 
plan is to move away from the puppet option.


So the OpenStack puppet modules are maintained by others besides Red Hat; 
however, we have been a major contributor since TripleO has relied on them for 
some time.  That being said, as TripleO has migrated to containers built with 
Kolla, we've adapted our deployment mechanism to include Ansible and we really 
only use puppet for configuration generation.  Our goal for TripleO is to 
eventually be fully containerized, which isn't something the puppet modules 
support today, and I'm not sure it is on the roadmap.


So consequently, what will be the plan of TripleO and the puppet modules?


As TripleO moves forward, we may continue to support deployments via puppet 
modules but the amount of testing that we'll be including upstream will mostly 
exercise external Ansible integrations (example, ceph-ansible, 
openshift-ansible, etc) and Kolla containers.  As of Queens, most of the 
services deployed via TripleO are deployed via containers and not on baremetal 
via puppet. We no longer support deploying OpenStack services on baremetal via 
the puppet modules and will likely be removing this support in the code in 
Stein.  The end goal will likely be moving away from puppet modules within 
TripleO if we can solve the backwards compatibility and configuration 
generation via other mechanism.  We will likely recommend leveraging external 
Ansible role calls rather than including puppet modules and using those to 
deploy services that are not inherently supported by TripleO.  I can't really 
give a time frame as we are still working out the details, but it is likely 
that over the next several cycles we'll see a reduction in the dependence of 
puppet in TripleO and an increase in leveraging available Ansible roles.


From the Puppet OpenStack standpoint, others are stepping up to continue to 
ensure the modules are available and I know I'll keep an eye on them for as 
long as TripleO leverages some of the functionality.  The Puppet OpenStack 
modules are very stable but I'm not sure without additional community folks 
stepping up that there will be support for newer functionality being added by 
the various OpenStack projects.  I'm sure others can chime in here on their 
usage/plans for the Puppet OpenStack modules.


Hope that helps.


Thanks,
-Alex


Thanks

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

2018-05-24 Thread Tim Bell
I'd like to understand the phrase "StarlingX is an OpenStack Foundation Edge 
focus area project".

My understanding of the current situation is that "StarlingX would like to be 
OpenStack Foundation Edge focus area project".

I have not been able to keep up with all of the discussions so I'd be happy for 
further URLs to help me understand the current situation and the processes 
(formal/informal) to arrive at this conclusion.

Tim

-Original Message-
From: Dean Troyer 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 23 May 2018 at 11:08
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [StarlingX] StarlingX code followup discussions

On Wed, May 23, 2018 at 11:49 AM, Colleen Murphy  
wrote:
> It's also important to make the distinction between hosting something on 
openstack.org infrastructure and recognizing it in an official capacity. 
StarlingX is seeking both, but in my opinion the code hosting is not the 
problem here.

StarlingX is an OpenStack Foundation Edge focus area project and is
seeking to use the CI infrastructure.  There may be a project or two
contained within that may make sense as OpenStack projects in the
not-called-big-tent-anymore sense but that is not on the table, there
is a lot of work to digest before we could even consider that.  Is
that the official capacity you are talking about?

dt

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-20

2018-05-15 Thread Tim Bell
From my memory, the LCOO was started in 2015 or 2016. The UC was started at the 
end of 2012, start of 2013 (https://www.openstack.org/blog/?p=3777) with Ryan, 
JC and me.

Tim

-Original Message-
From: Graham Hayes 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, 15 May 2018 at 18:22
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-20

..

> # LCOO
> 
> There's been some concern expressed about The Large Contributing
> OpenStack Operators (LCOO) group and the way they operate. They use
> an [Atlassian Wiki](https://openstack-lcoo.atlassian.net/) and
> Slack, and have restricted membership. These things tend to not
> align with the norms for tool usage and collaboration in OpenStack.
> This topic came up in [late
> 
April](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-26.log.html#t2018-04-26T14:39:36)
> 
> but is worth revisiting in Vancouver.

From what I understand, this group came into being before the UC was
created - a joint UC/TC/LCOO sync up in Vancouver is probably a good
idea.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova] Default scheduler filters survey

2018-05-01 Thread Tim Bell
You may also need something like pre-emptible instances to arrange the clean-up 
of opportunistic VMs when the owner needs their resources back. Some details on 
the early implementation are at 
http://openstack-in-production.blogspot.fr/2018/02/maximizing-resource-utilization-with.html.

If you're in Vancouver, we'll be having a Forum session on this 
(https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21787/pre-emptible-instances-the-way-forward)
 and notes welcome on the etherpad 
(https://etherpad.openstack.org/p/YVR18-pre-emptible-instances)

It would be good to find common implementations since this is a common scenario 
in the academic and research communities.

Tim

-Original Message-
From: Dave Holland 
Date: Tuesday, 1 May 2018 at 10:40
To: Mathieu Gagné 
Cc: "OpenStack Development Mailing List (not for usage questions)" 
, openstack-operators 

Subject: Re: [Openstack-operators] [openstack-dev] [nova] Default scheduler 
filters survey

On Mon, Apr 30, 2018 at 12:41:21PM -0400, Mathieu Gagné wrote:
> Weighers for baremetal cells:
> * ReservedHostForTenantWeigher [7]
...
> [7] Used to favor reserved host over non-reserved ones based on project.

Hello Mathieu,

we are considering writing something like this, for virtual machines rather
than for baremetal. Our use case is that a project buying some compute
hardware is happy for others to use it, but when the compute "owner"
wants sole use of it, other projects' instances must be migrated off or
killed; a scheduler weigher like this might help us to minimise the
number of instances needing migration or termination at that point.
Would you be willing to share your source code please?
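
For illustration, a minimal sketch of what such a weigher could look like 
(this is not Mathieu's implementation; it assumes reserved hosts are marked 
with a hypothetical 'reserved_for_project' key in aggregate metadata):

    from nova.scheduler import weights

    class ReservedHostForTenantWeigher(weights.BaseHostWeigher):

        def _weigh_object(self, host_state, request_spec):
            # Prefer hosts whose aggregate reserves them for this project.
            for aggregate in host_state.aggregates:
                reserved = aggregate.metadata.get('reserved_for_project')
                if reserved and reserved == request_spec.project_id:
                    return 1.0
            return 0.0  # neutral for everyone else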

thanks,
Dave
-- 
** Dave Holland ** Systems Support -- Informatics Systems Group **
** 01223 496923 **Wellcome Sanger Institute, Hinxton, UK**


-- 
 The Wellcome Sanger Institute is operated by Genome Research 
 Limited, a charity registered in England with number 1021457 and a 
 company registered in England with number 2742969, whose registered 
 office is 215 Euston Road, London, NW1 2BE. 

___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-26 Thread Tim Bell
My worry with changing the default is that it would be like adding the 
following in /etc/environment:

alias ls=' rm -rf / --no-preserve-root'

i.e. an operation which was previously read-only now becomes irreversible.

We also have current use cases with Ironic where we are moving machines between 
projects by 'disowning' them to the spare pool and then reclaiming them (by 
UUID) into new projects with the same state.

However, other operators may feel differently, which is why I suggest asking 
what people feel about changing the default.

In any case, changes in default behaviour need to be highly visible.

Tim

-Original Message-
From: "arkady.kanev...@dell.com" <arkady.kanev...@dell.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Thursday, 26 April 2018 at 18:48
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

+1.
It would be good to also identify the use cases.
I'm surprised that a node would be cleaned up automatically.
I would expect that we want it to be a deliberate request from the 
administrator, or maybe from the user when they "return" a node to the free 
pool after baremetal usage.
Thanks,
    Arkady

-Original Message-
From: Tim Bell [mailto:tim.b...@cern.ch] 
Sent: Thursday, April 26, 2018 11:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

How about asking the operators at the summit Forum or asking on 
openstack-operators to see what the users think?

Tim

-Original Message-
From: Ben Nemec <openst...@nemebean.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Thursday, 26 April 2018 at 17:39
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>, Dmitry Tantsur <dtant...@redhat.com>
Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default?



On 04/26/2018 09:24 AM, Dmitry Tantsur wrote:
> Answering to both James and Ben inline.
> 
> On 04/25/2018 05:47 PM, Ben Nemec wrote:
>>
>>
>> On 04/25/2018 10:28 AM, James Slagle wrote:
>>> On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur 
>>> <dtant...@redhat.com> wrote:
>>>> On 04/25/2018 04:26 PM, James Slagle wrote:
>>>>>
>>>>> On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur 
<dtant...@redhat.com>
>>>>> wrote:
>>>>>>
>>>>>> Hi all,
>>>>>>
>>>>>> I'd like to restart conversation on enabling node automated 
>>>>>> cleaning by
>>>>>> default for the undercloud. This process wipes partitioning 
tables
>>>>>> (optionally, all the data) from overcloud nodes each time they 
>>>>>> move to
>>>>>> "available" state (i.e. on initial enrolling and after each tear 
>>>>>> down).
>>>>>>
>>>>>> We have had it disabled for a few reasons:
>>>>>> - it was not possible to skip time-consuming wiping if data from 
>>>>>> disks
>>>>>> - the way our workflows used to work required going between 
>>>>>> manageable
>>>>>> and
>>>>>> available steps several times
>>>>>>
>>>>>> However, having cleaning disabled has several issues:
>>>>>> - a configdrive left from a previous deployment may confuse 
>>>>>> cloud-init
>>>>>> - a bootable partition left from a previous deployment may take
>>>>>> precedence
>>>>>> in some BIOS
>>>>>> - an UEFI boot partition left from a previous deployment is 
likely to
>>>>>> confuse UEFI firmware
>>>>>> - apparently ceph does not work correctly without cleaning (I'll 
>>>>>> defer to
>>>>>> the storage team to comment)
>>>>>>
>>>>>> For these reasons we don't recommend ha

Re: [openstack-dev] [tripleo] ironic automated cleaning by default?

2018-04-26 Thread Tim Bell
How about asking the operators at the summit Forum or asking on 
openstack-operators to see what the users think?

Tim

-Original Message-
From: Ben Nemec 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 26 April 2018 at 17:39
To: "OpenStack Development Mailing List (not for usage questions)" 
, Dmitry Tantsur 
Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default?



On 04/26/2018 09:24 AM, Dmitry Tantsur wrote:
> Answering to both James and Ben inline.
> 
> On 04/25/2018 05:47 PM, Ben Nemec wrote:
>>
>>
>> On 04/25/2018 10:28 AM, James Slagle wrote:
>>> On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur 
>>>  wrote:
 On 04/25/2018 04:26 PM, James Slagle wrote:
>
> On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur 
> wrote:
>>
>> Hi all,
>>
>> I'd like to restart conversation on enabling node automated 
>> cleaning by
>> default for the undercloud. This process wipes partitioning tables
>> (optionally, all the data) from overcloud nodes each time they 
>> move to
>> "available" state (i.e. on initial enrolling and after each tear 
>> down).
>>
>> We have had it disabled for a few reasons:
>> - it was not possible to skip time-consuming wiping if data from 
>> disks
>> - the way our workflows used to work required going between 
>> manageable
>> and
>> available steps several times
>>
>> However, having cleaning disabled has several issues:
>> - a configdrive left from a previous deployment may confuse 
>> cloud-init
>> - a bootable partition left from a previous deployment may take
>> precedence
>> in some BIOS
>> - an UEFI boot partition left from a previous deployment is likely to
>> confuse UEFI firmware
>> - apparently ceph does not work correctly without cleaning (I'll 
>> defer to
>> the storage team to comment)
>>
>> For these reasons we don't recommend having cleaning disabled, and I
>> propose
>> to re-enable it.
>>
>> It has the following drawbacks:
>> - The default workflow will require another node boot, thus becoming
>> several
>> minutes longer (incl. the CI)
>> - It will no longer be possible to easily restore a deleted overcloud
>> node.
>
>
> I'm trending towards -1, for these exact reasons you list as
> drawbacks. There has been no shortage of occurrences of users who have
> ended up with accidentally deleted overclouds. These are usually
> caused by user error or unintended/unpredictable Heat operations.
> Until we have a way to guarantee that Heat will never delete a node,
> or Heat is entirely out of the picture for Ironic provisioning, then
> I'd prefer that we didn't enable automated cleaning by default.
>
> I believe we had done something with policy.json at one time to
> prevent node delete, but I don't recall if that protected from both
> user initiated actions and Heat actions. And even that was not enabled
> by default.
>
> IMO, we need to keep "safe" defaults. Even if it means manually
> documenting that you should clean to prevent the issues you point out
> above. The alternative is to have no way to recover deleted nodes by
> default.


 Well, it's not clear what is "safe" here: protect people who explicitly
 delete their stacks or protect people who don't realize that a previous
 deployment may screw up their new one in a subtle way.
>>>
>>> The latter you can recover from, the former you can't if automated
>>> cleaning is true.
> 
> Nor can we recover from 'rm -rf / --no-preserve-root', but it's not a 
> reason to disable the 'rm' command :)
> 
>>>
>>> It's not just about people who explicitly delete their stacks (whether
>>> intentional or not). There could be user error (non-explicit) or
>>> side-effects triggered by Heat that could cause nodes to get deleted.
> 
> If we have problems with Heat, we should fix Heat or stop using it. What 
> you're saying is essentially "we prevent ironic from doing the right 
> thing because we're using a tool that can invoke 'rm -rf /' at a wrong 
> moment."
> 
>>>
>>> You couldn't recover from those scenarios if automated cleaning were
>>> true. Whereas you could always fix a deployment error by opting in to
>>> do an automated clean. Does Ironic 

Re: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier?

2018-04-23 Thread Tim Bell

One of the challenges in the academic sector is the time from lightbulb moment 
to code commit. Many of the academic resource opportunities are short term 
(e.g. PhDs, student projects, government-funded projects) and there is a 
latency in the current system to onboard, get the appropriate recognition in 
the community (such as by reviewing other changes) and then get the code 
committed. This is a particular problem for the larger projects where the 
patch is not in one of the project goal areas for that release.

Not sure what the solution is but I would agree that there is a significant 
opportunity.

Tim

-Original Message-
From: Thierry Carrez 
Organization: OpenStack
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, 23 April 2018 at 18:11
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [tc] campaign question: How can we make 
contributing to OpenStack easier?

> Where else should we be looking for contributors?

Like other large open source projects, OpenStack has a lot of visibility
in the academic sector. I feel like we are less successful than others
in attracting contributions from there, and we could do a lot better by
engaging with them more directly.

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Default scheduler filters survey

2018-04-18 Thread Tim Bell
I'd suggest asking on the openstack-operators list since there is only a subset 
of operators who follow openstack-dev.

Tim

-Original Message-
From: Chris Friesen 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 18 April 2018 at 18:34
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [nova] Default scheduler filters survey

On 04/18/2018 09:17 AM, Artom Lifshitz wrote:

> To that end, we'd like to know what filters operators are enabling in
> their deployment. If you can, please reply to this email with your
> [filter_scheduler]/enabled_filters (or
> [DEFAULT]/scheduler_default_filters if you're using an older version)
> option from nova.conf. Any other comments are welcome as well :)

RetryFilter
ComputeFilter
AvailabilityZoneFilter
AggregateInstanceExtraSpecsFilter
ComputeCapabilitiesFilter
ImagePropertiesFilter
NUMATopologyFilter
ServerGroupAffinityFilter
ServerGroupAntiAffinityFilter
PciPassthroughFilter


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap

2018-04-04 Thread Tim Bell
How about:

  *   As an operator, I’d like to spin up the latest release to check if a 
problem is fixed before reporting it upstream

We use this approach frequently with packstack. Ideally (as today with 
packstack), we’d do this inside a VM on a running OpenStack cloud… inception… ☺

Tim

From: Emilien Macchi 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 29 March 2018 at 23:35
To: OpenStack Development Mailing List 
Subject: [openstack-dev] [tripleo] PTG session about All-In-One installer: 
recap & roadmap

Greeting folks,

During the last PTG we spent time discussing some ideas around an All-In-One 
installer, using 100% of the TripleO bits to deploy a single node OpenStack 
very similar with what we have today with the containerized undercloud and what 
we also have with other tools like Packstack or Devstack.

https://etherpad.openstack.org/p/tripleo-rocky-all-in-one

One of the problems that we're trying to solve here is to give a simple tool 
for developers so they can both easily and quickly deploy an OpenStack for 
their needs.

"As a developer, I need to deploy OpenStack in a VM on my laptop, quickly and 
without complexity, reproducing the exact same tooling as TripleO is 
using."
"As a Neutron developer, I need to develop a feature in Neutron and test it 
with TripleO in my local env."
"As a TripleO dev, I need to implement a new service and test its deployment in 
my local env."
"As a developer, I need to reproduce a bug in TripleO CI that blocks the 
production chain, quickly and simply."

Probably more use cases, but to me that's what came into my mind now.

Dan kicked-off a doc patch a month ago: https://review.openstack.org/#/c/547038/
And I just went ahead and proposed a blueprint: 
https://blueprints.launchpad.net/tripleo/+spec/all-in-one
So hopefully we can start prototyping something during Rocky.

Before talking about the actual implementation, I would like to gather feedback 
from people interested in the use-cases. If you recognize yourself in these 
use-cases and you're not using TripleO today to test your things because it's 
too complex to deploy, we want to hear from you.
I want to see feedback (positive or negative) about this idea. We need to 
gather ideas, use cases, needs, before we go design a prototype in Rocky.

Thanks everyone who'll be involved,
--
Emilien Macchi
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] baremetal firmware lifecycle management

2018-03-30 Thread Tim Bell
We've experienced different firmware update approaches... this is a wish list 
rather than a requirement since, in the end, it can all be scripted if needed. 
Currently, these updates are manpower-intensive and require a lot of 
co-ordination, since the upgrade operation has to be performed by the hardware 
support team but the end user defines the intervention window.

a. Some BMC updates can be applied out of band, over the network with 
appropriate BMC rights. It would be very nice if Ironic could orchestrate these 
updates since they can be painful to organise. One aspect of this would be for 
Ironic to orchestrate the updates and keep track of success/failure along with 
the current version of the BMC firmware (maybe as a property?). A typical 
example of this is when a security flaw is found in a particular hardware 
model's BMC and we want to update to the latest version given an image 
provided by the vendor.

b. A set of machines has been delivered but an incorrect BIOS setting is 
found. We want to reflash the BIOSes with the latest BIOS code/settings. This 
would generally be an operation requiring a reboot. We would ask our users to 
follow a procedure at their convenience to do so (within a window) and then we 
would force the change. An inventory of the current version would help to 
identify those who do not do the update and remind them.

c. A disk firmware issue is found. Similar to b), but there is also the 
possibility of partial completion, where some disks update correctly but 
others do not.

Overall, it would be great if we can find a way to allow self service hardware 
management where the end users can choose the right point to follow the 
firmware update process within a window and then we can force the upgrade if 
they do not do so.

Tim

-Original Message-
From: Julia Kreger 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, 30 March 2018 at 00:09
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [ironic] baremetal firmware lifecycle management

One of the topics that came up at during the Ironic sessions at the
Rocky PTG was firmware management.

During this discussion, we quickly reached the consensus that we
lacked the ability to discuss and reach a forward direction without:

* An understanding of capabilities and available vendor mechanisms
that can be used to consistently determine and assert desired firmware
to a baremetal node. Ideally, we could find a commonality of two or
more vendor mechanisms that can be abstracted cleanly into high level
actions. Ideally this would boil down to something as simple as
"list_firmware()" and "set_firmware()". Additionally there are surely
some caveats we need to understand, such as if the firmware update
must be done in a particular state, and if a particular prior
condition or next action is required for the particular update.

* An understanding of several use cases where a deployed node may need
to have specific firmware applied. We are presently aware of two
cases. The first being specific firmware is needed to match an
approved operational profile. The second being a desire to perform
ad-hoc changes or have new versions of firmware asserted while a node
has already been deployed.

Naturally any insight that can be shared will help the community to
best model the interaction so we can determine next steps and
ultimately implementation details.

-Julia
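
As a purely hypothetical sketch of the abstraction described above (ironic 
has no such driver interface today, and every name here is illustrative):

    import abc

    class FirmwareInterface(abc.ABC):

        @abc.abstractmethod
        def list_firmware(self, task):
            """Return e.g. [{'component': 'BMC', 'current_version': '1.2'}]."""

        @abc.abstractmethod
        def set_firmware(self, task, settings):
            """Apply the requested firmware versions to the node.

            May only be valid in certain node states and may require a
            follow-up action such as a reboot, per the caveats above.
            """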

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] about rebuild instance booted from volume

2018-03-15 Thread Tim Bell
Deleting all snapshots would seem dangerous though...

1. I want to reset my instance to how it was before
2. I'll just do a snapshot in case I need any data in the future
3. rebuild
4. oops

Tim

-Original Message-
From: Ben Nemec 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 15 March 2018 at 20:42
To: Dan Smith 
Cc: "OpenStack Development Mailing List (not for usage questions)" 
, openstack-operators 

Subject: Re: [openstack-dev] [nova] about rebuild instance booted from volume



On 03/15/2018 09:46 AM, Dan Smith wrote:
>> Rather than overload delete_on_termination, could another flag like
>> delete_on_rebuild be added?
> 
> Isn't delete_on_termination already the field we want? To me, that field
> means "nova owns this". If that is true, then we should be able to
> re-image the volume (in-place is ideal, IMHO) and if not, we just
> fail. Is that reasonable?

If that's what the flag means then it seems reasonable.  I got the 
impression from the previous discussion that not everyone was seeing it 
that way though.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] about rebuild instance booted from volume

2018-03-14 Thread Tim Bell
Matt,

To add another scenario and make things even more difficult (sorry!), if the 
original volume has snapshots, I don't think you can delete it.

Tim


-Original Message-
From: Matt Riedemann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 14 March 2018 at 14:55
To: "openstack-dev@lists.openstack.org" , 
openstack-operators 
Subject: Re: [openstack-dev] [nova] about rebuild instance booted from volume

On 3/14/2018 3:42 AM, 李杰 wrote:
> 
> This is the spec about rebuilding an instance booted from volume. In the
> spec, there is a question about whether we should delete the old
> root_volume. Anyone who is interested in boot from volume can help to
> review this. Any suggestion is welcome. Thank you!
> The link is here.
> Re: the rebuild spec: https://review.openstack.org/#/c/532407/

Copying the operators list and giving some more context.

This spec is proposing to add support for rebuild with a new image for 
volume-backed servers, which today is just a 400 failure in the API 
since the compute doesn't support that scenario.

With the proposed solution, the backing root volume would be deleted and 
a new volume would be created from the new image, similar to how boot 
from volume works.

The question raised in the spec is whether or not nova should delete the 
root volume even if its delete_on_termination flag is set to False. The 
semantics get a bit weird here since that flag was not meant for this 
scenario, it's meant to be used when deleting the server to which the 
volume is attached. Rebuilding a server is not deleting it, but we would 
need to replace the root volume, so what do we do with the volume we're 
replacing?
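
A purely illustrative sketch of the decision being discussed (this is not
the actual Nova code path; volume_api and root_bdm are stand-ins):

    def rebuild_root_volume(volume_api, root_bdm, new_image_id):
        """Replace the root volume of a volume-backed server on rebuild."""
        if not root_bdm['delete_on_termination']:
            # The open question: fail here, keep the old volume around
            # (risking quota creep), or delete it anyway?
            raise ValueError('root volume is not owned by nova')
        volume_api.delete(root_bdm['volume_id'])
        new_volume = volume_api.create(size=root_bdm['volume_size'],
                                       image_id=new_image_id)
        root_bdm['volume_id'] = new_volume['id']
        return root_bdm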

Do we say that delete_on_termination only applies to deleting a server 
and not rebuild and therefore nova can delete the root volume during a 
rebuild?

If we don't delete the volume during rebuild, we could end up leaving a 
lot of volumes lying around that the user then has to clean up, 
otherwise they'll eventually go over quota.

We need user (and operator) feedback on this issue and what they would 
expect to happen.

-- 

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] PTG Summary

2018-03-12 Thread Tim Bell
My worry with re-running the burn-in every time we do cleaning is for resource 
utilisation. When the machines are running the burn-in, they're not doing 
useful physics so I would want to minimise the number of times this is run over 
the lifetime of a machine.

It may be possible to do something like the burn in with a dedicated set of 
steps but still use the cleaning state machine.  

Having a cleaning step set (i.e. burn-in means 
cpuburn,memtest,badblocks,benchmark) would make it more friendly for the 
administrator. Similarly, retirement could be done with additional steps such 
as reset2factory.
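
To make that concrete, one way a named step set could expand into the
clean_steps list used for manual cleaning today; the burn-in step names
below are hypothetical and would depend on the hardware manager in use:

    CLEAN_STEP_SETS = {
        'burn-in': [
            {'interface': 'deploy', 'step': 'burnin_cpu'},
            {'interface': 'deploy', 'step': 'burnin_memory'},
            {'interface': 'deploy', 'step': 'burnin_disk'},
            {'interface': 'deploy', 'step': 'benchmark'},
        ],
        'retirement': [
            {'interface': 'management', 'step': 'reset_to_factory'},
            {'interface': 'deploy', 'step': 'erase_devices'},
        ],
    }

    def expand_step_set(name):
        """Return the clean_steps payload for a named step set."""
        return CLEAN_STEP_SETS[name]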

Tim

-Original Message-
From: Dmitry Tantsur <dtant...@redhat.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Monday, 12 March 2018 at 12:47
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [ironic] PTG Summary

Hi Tim,

Thanks for the information.

I personally don't see problems with cleaning running weeks, when needed.
What I'd avoid is replicating the same cleaning machinery but with a
different name. I think we should try to make cleaning work for this case
instead.

Dmitry
    
On 03/12/2018 12:33 PM, Tim Bell wrote:
> Julia,
> 
> A basic summary of how CERN does burn-in is at 
http://openstack-in-production.blogspot.ch/2018/03/hardware-burn-in-in-cern-datacenter.html
> 
> Given that the burn in takes weeks to run, we'd see it as a different 
step to cleaning (with some parts in common such as firmware upgrades to latest 
levels)
> 
> Tim
> 
> -Original Message-
> From: Julia Kreger <juliaashleykre...@gmail.com>
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
> Date: Thursday, 8 March 2018 at 22:10
> To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
> Subject: [openstack-dev] [ironic] PTG Summary
> 
> ...
>  Cleaning - Burn-in
>  
>  As part of discussing cleaning changes, we discussed supporting a
>  "burn-in" mode where hardware could be left to run load, memory, or
>  other tests for a period of time. We did not have consensus on a
>  generic solution, other than that this should likely involve
>  clean-steps that we already have, and maybe another entry point into
>  cleaning. Since we didn't really have consensus on use cases, we
>  decided the logical thing was to write them down, and then go from
>  there.
>  
>  Action Items:
>  * Community members to document varying burn-in use cases for
>  hardware, as they may vary based upon industry.
>  * Community to try and come up with a couple example clean-steps.
>  
>  
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] PTG Summary

2018-03-12 Thread Tim Bell
Julia,

A basic summary of how CERN does burn-in is at 
http://openstack-in-production.blogspot.ch/2018/03/hardware-burn-in-in-cern-datacenter.html

Given that the burn in takes weeks to run, we'd see it as a different step to 
cleaning (with some parts in common such as firmware upgrades to latest levels)

Tim

-Original Message-
From: Julia Kreger 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 8 March 2018 at 22:10
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [ironic] PTG Summary

...
Cleaning - Burn-in

As part of discussing cleaning changes, we discussed supporting a
"burn-in" mode where hardware could be left to run load, memory, or
other tests for a period of time. We did not have consensus on a
generic solution, other than that this should likely involve
clean-steps that we already have, and maybe another entry point into
cleaning. Since we didn't really have consensus on use cases, we
decided the logical thing was to write them down, and then go from
there.

Action Items:
* Community members to document varying burn-in use cases for
hardware, as they may vary based upon industry.
* Community to try and come up with a couple example clean-steps.



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Pros and Cons of face-to-face meetings

2018-03-08 Thread Tim Bell
Fully agree with Doug. At CERN, we use video conferencing for the LHC 
experiments with 100s, sometimes >1000, participants. The trick we've found is 
to fully embrace the chat channels (so remote non-native English speakers can 
provide input) and chairs/vectors who can summarise the remote questions 
constructively, with appropriate priority.

This is actually very close to the etherpad approach: we benefit from the local 
bandwidth if available but do not exclude those who do not have it (or the 
language skills to do it in real time).

Tim

-Original Message-
From: Doug Hellmann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 8 March 2018 at 20:00
To: openstack-dev 
Subject: Re: [openstack-dev] Pros and Cons of face-to-face meetings

Excerpts from Jeremy Stanley's message of 2018-03-08 18:34:51 +:
> On 2018-03-08 12:16:18 -0600 (-0600), Jay S Bryant wrote:
> [...]
> > Cinder has been doing this for many years and it has worked
> > relatively well. It requires a good remote speaker and it also
> > requires the people in the room to be sensitive to the needs of
> > those who are remote. I.E. planning topics at a time appropriate
> > for the remote attendees, ensuring everyone speaks up, etc. If
> > everyone, however, works to be inclusive with remote participants
> > it works well.
> > 
> > We have even managed to make this work between separate mid-cycles
> > (Cinder and Nova) in the past before we did PTGs.
> [...]
> 
> I've seen it work okay when the number of remote participants is
> small and all are relatively known to the in-person participants.
> Even so, bridging Doug into the TC discussion at the PTG was
> challenging for all participants.

I agree, and I'll point out I was just across town (snowed in at a
different hotel).

The conversation the previous day with just the 5-6 people on the
release team worked a little bit better, but was still challenging
at times because of audio quality issues.

So, yes, this can be made to work. It's not trivial, though, and
the degree to which it works depends a lot on the participants on
both sides of the connection. I would not expect us to be very
productive with a large number of people trying to be active in the
conversation remotely.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-sigs] [keystone] [oslo] new unified limit library

2018-03-07 Thread Tim Bell
I think nested quotas would give the same thing, i.e. you have a parent project 
for the group and child projects for the users. This would not need user/group 
quotas but continue with the ‘project owns resources’ approach.

It can be generalised to other use cases like the value add partner or the 
research experiment working groups 
(http://openstack-in-production.blogspot.fr/2017/07/nested-quota-models.html)
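
A toy illustration of the two-level model (the parent project holds the
group quota, child projects consume it); this is not the keystone or nova
implementation, just the arithmetic:

    def can_allocate(parent_limit, child_usage, child, requested):
        """parent_limit: cores granted to the parent (group) project.
        child_usage: dict mapping child project -> cores in use."""
        if sum(child_usage.values()) + requested > parent_limit:
            return False  # the group as a whole would exceed its quota
        child_usage[child] = child_usage.get(child, 0) + requested
        return True

    usage = {'user-a': 40, 'user-b': 30}
    print(can_allocate(100, usage, 'user-c', 20))  # True, group now at 90
    print(can_allocate(100, usage, 'user-a', 20))  # False, would hit 110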

Tim

From: Zhipeng Huang <zhipengh...@gmail.com>
Reply-To: "openstack-s...@lists.openstack.org" 
<openstack-s...@lists.openstack.org>
Date: Wednesday, 7 March 2018 at 17:37
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>, openstack-operators 
<openstack-operat...@lists.openstack.org>, "openstack-s...@lists.openstack.org" 
<openstack-s...@lists.openstack.org>
Subject: Re: [Openstack-sigs] [openstack-dev] [keystone] [oslo] new unified 
limit library

This is certainly a feature will make Public Cloud providers very happy :)

On Thu, Mar 8, 2018 at 12:33 AM, Tim Bell 
<tim.b...@cern.ch<mailto:tim.b...@cern.ch>> wrote:
Sorry, I remember more detail now... it was using the 'owner' of the VM as part 
of the policy rather than quota.

Is there a per-user/per-group quota in Nova?

Tim

-Original Message-
From: Tim Bell <tim.b...@cern.ch<mailto:tim.b...@cern.ch>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, 7 March 2018 at 17:29
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library


There was discussion that Nova would deprecate the user quota feature since 
it really didn't fit well with the 'projects own resources' approach and was 
little used. At one point, some of the functionality stopped working and was 
repaired. The use case we had identified goes away if you have 2 level deep 
nested quotas (and we have now worked around it).

Tim
-Original Message-
From: Lance Bragstad <lbrags...@gmail.com<mailto:lbrags...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Wednesday, 7 March 2018 at 16:51
To: 
"openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library



On 03/07/2018 09:31 AM, Chris Friesen wrote:
> On 03/07/2018 08:58 AM, Lance Bragstad wrote:
>> Hi all,
>>
]
>
> 1) Nova currently supports quotas for a user/group tuple that can be
> stricter than the overall quotas for that group.  As far as I know no
> other project supports this.
...
I think the initial implementation of a unified limit pattern is
targeting limits and quotas for things associated to projects. In the
future, we can probably expand on the limit information in keystone to
include user-specific limits, which would be great if nova wants to move
away from handling that kind of stuff.
>
> Chris
>
> 
__
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhip...@huawei.com<mailto:huang

Re: [openstack-dev] [keystone] [oslo] new unified limit library

2018-03-07 Thread Tim Bell
Sorry, I remember more detail now... it was using the 'owner' of the VM as part 
of the policy rather than quota.

Is there a per-user/per-group quota in Nova?

Tim

-Original Message-
From: Tim Bell <tim.b...@cern.ch>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Wednesday, 7 March 2018 at 17:29
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library


There was discussion that Nova would deprecate the user quota feature since 
it really didn't fit well with the 'projects own resources' approach and was 
little used. At one point, some of the functionality stopped working and was 
repaired. The use case we had identified goes away if you have 2 level deep 
nested quotas (and we have now worked around it). 

Tim
-Original Message-
From: Lance Bragstad <lbrags...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Wednesday, 7 March 2018 at 16:51
To: "openstack-dev@lists.openstack.org" <openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library



On 03/07/2018 09:31 AM, Chris Friesen wrote:
> On 03/07/2018 08:58 AM, Lance Bragstad wrote:
>> Hi all,
>>
]
>
> 1) Nova currently supports quotas for a user/group tuple that can be
> stricter than the overall quotas for that group.  As far as I know no
> other project supports this.
...
I think the initial implementation of a unified limit pattern is
targeting limits and quotas for things associated to projects. In the
future, we can probably expand on the limit information in keystone to
include user-specific limits, which would be great if nova wants to move
away from handling that kind of stuff.
>
> Chris
>
> 
__
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] [oslo] new unified limit library

2018-03-07 Thread Tim Bell

There was discussion that Nova would deprecate the user quota feature since it 
really didn't fit well with the 'projects own resources' approach and was 
little used. At one point, some of the functionality stopped working and was 
repaired. The use case we had identified goes away if you have 2 level deep 
nested quotas (and we have now worked around it). 

Tim
-Original Message-
From: Lance Bragstad 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 7 March 2018 at 16:51
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [keystone] [oslo] new unified limit library



On 03/07/2018 09:31 AM, Chris Friesen wrote:
> On 03/07/2018 08:58 AM, Lance Bragstad wrote:
>> Hi all,
>>
]
>
> 1) Nova currently supports quotas for a user/group tuple that can be
> stricter than the overall quotas for that group.  As far as I know no
> other project supports this.
...
I think the initial implementation of a unified limit pattern is
targeting limits and quotas for things associated to projects. In the
future, we can probably expand on the limit information in keystone to
include user-specific limits, which would be great if nova wants to move
away from handling that kind of stuff.
>
> Chris
>
> __
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [tc] Community Goals for Rocky

2018-01-12 Thread Tim Bell
I was reading a tweet from Jean-Daniel and wondering if there would be an 
appropriate community goal regarding support of some of the later API versions 
or whether this would be more of a per-project goal.

https://twitter.com/pilgrimstack/status/951860289141641217

Interesting numbers about customers' tools used to talk to our @OpenStack APIs 
and the Keystone v3 compatibility:
- 10% are not KeystoneV3 compatible
- 16% are compatible
- for the rest, the tools documentation has no info

I think Keystone V3 and Glance V2 are the ones with APIs which have moved on 
significantly from the initial implementations and not all projects have been 
keeping up.
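
For reference, "Keystone V3 compatible" in practice mostly means
authenticating with domain-scoped credentials, e.g. via keystoneauth (the
values below are placeholders):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    auth = v3.Password(auth_url='https://keystone.example.org:5000/v3',
                       username='demo',
                       password='secret',
                       project_name='demo',
                       user_domain_name='Default',
                       project_domain_name='Default')
    sess = session.Session(auth=auth)
    # Clients built on keystoneauth (novaclient, glanceclient, ...) can be
    # constructed directly from this session.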

Tim

-Original Message-
From: Emilien Macchi 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, 12 January 2018 at 16:51
To: OpenStack Development Mailing List 
Subject: Re: [openstack-dev] [all] [tc] Community Goals for Rocky

Here's a quick update before the weekend:

2 goals were proposed to governance:

Remove mox
https://review.openstack.org/#/c/532361/
Champion: Sean McGinnis (unless someone else steps up)

Ensure pagination links
https://review.openstack.org/#/c/532627/
Champion: Monty Taylor

2 more goals are about to be proposed:

Enable mutable configuration
Champion: ChangBo Guo

Cold upgrades capabilities
Champion: Masayuki Igawa


Thanks everyone for your participation,
We hope to make a vote within the next 2 weeks so we can prepare the
PTG accordingly.

On Tue, Jan 9, 2018 at 10:37 AM, Emilien Macchi  wrote:
> As promised, let's continue the discussion and move things forward.
>
> This morning Thierry brought the discussion during the TC office hour
> (that I couldn't attend due to timezone):
> 
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/latest.log.html#t2018-01-09T09:18:33
>
> Some outputs:
>
> - One goal has been proposed so far.
>
> Right now, we only have one goal proposal: Storyboard Migration. There
> are some concerns about the ability to achieve this goal in 6 months.
> At that point, we think it would be great to postpone the goal to S
> cycle, continue the progress (kudos to Kendall) and find other goals
> for Rocky.
>
>
> - We still have a good backlog of goals, we're just missing champions.
>
> https://etherpad.openstack.org/p/community-goals
>
> Chris brought up "pagination links in collection resources" in api-wg
> guidelines theme. He said in the past this goal was more a "should"
> than a "must".
> Thierry mentioned privsep migration (done in Nova and Zun). (action,
> ping mikal about it).
> Thierry also brought up the version discovery (proposed by Monty).
> Flavio proposed mutable configuration, which might be very useful for 
operators.
> He also mentioned that IPv6 support goal shouldn't be that far from
> done, but we're currently lacking in CI jobs that test IPv6
> deployments (question for infra/QA, can we maybe document the gap so
> we can run some gate jobs on ipv6 ?)
> (personal note on that one, since TripleO & Puppet OpenStack CI
> already have IPv6 jobs, we can indeed be confident that it shouldn't
> be that hard to complete this goal in 6 months, I guess the work needs
> to happen in the projects layouts).
> Another interesting goal proposed by Thierry, also useful for
> operators, is to move more projects to assert:supports-upgrade tag.
> Thierry said we are probably not that far from this goal, but the
> major lack is in testing.
> Finally, another "simple" goal is to remove mox/mox3 (Flavio said most
> of projects don't use it anymore already).
>
> With that said, let's continue the discussion on these goals, see
> which ones can be actionable and find champions.
>
> - Flavio asked how would it be perceived if one cycle wouldn't have at
> least one community goal.
>
> Thierry said we could introduce multi-cycle goals (Storyboard might be
> a good candidate).
> Chris and Thierry thought that it would be a bad sign for our
> community to not have community goals during a cycle, "loss of
> momentum" eventually.
>
>
> Thanks for reading so far,
>
> On Fri, Dec 15, 2017 at 9:07 AM, Emilien Macchi  
wrote:
>> On Tue, Nov 28, 2017 at 2:22 PM, Emilien Macchi  
wrote:
>> [...]
>>> Suggestions are welcome:
>>> - on the mailing-list, in a new thread per goal [all] [tc] Proposing
>>> goal XYZ for Rocky
>>> - on Gerrit in openstack/governance like Kendall did.
>>
>> Just a fresh reminder about Rocky goals.
>> A few questions that we can ask 

Re: [openstack-dev] [all] Switching to longer development cycles

2017-12-13 Thread Tim Bell
The forums would seem to provide a good opportunity for get togethers during 
the release cycle. With these happening April/May and October/November, there 
could be a good chance for productive team discussions and the opportunities to 
interact with the user/operator community.

There is a risk that deployment to production is delayed, and therefore 
feedback is delayed and the wait for the ‘initial bug fixes before we deploy to 
prod’ gets longer.

If there is consensus, I’d suggest to get feedback from openstack-operators on 
the idea. My initial suspicion is that it would be welcomed, especially by 
those running from distros, but there are many different perspectives.

Tim

From: Amy Marrich 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 13 December 2017 at 18:58
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [all] Switching to longer development cycles

I think Sean has made some really good points with the PTG setting things off 
in the start of the year and conversations carrying over to the Forums and 
their importance. And having a gap at the end of the year as Jay mentioned will 
give time for those still about to do finishing work if needed and if it's 
planned for in the individual projects they can have an earlier 'end' to allow 
for members not being around.

The one year release would help to get 'new' users to adopt a more recent 
release, even if it's the one from the year previously as there is the 
'confidence' that it's been around for a bit and been used by others in 
production. And if projects want to do incrementals they can, if I've read the 
thread correctly. Also those that want the latest will just use master anyways 
as some do currently.

With the move to a yearly cycle I agree with the 1 year cycle for PTLs, though 
if needed perhaps a way to have a co-PTL or a LT could be implemented to help 
with the longer duties?

My 2 cents from the peanut gallery:)

Amy (spotz)

On Wed, Dec 13, 2017 at 11:29 AM, Sean McGinnis 
> wrote:
On Wed, Dec 13, 2017 at 05:16:35PM +, Chris Jones wrote:
> Hey
>
> On 13 December 2017 at 17:12, Jimmy McArthur 
> > wrote:
>
> > Thierry Carrez wrote:
> >
> >> - It doesn't mean that teams can only meet in-person once a year.
> >> Summits would still provide a venue for team members to have an
> >> in-person meeting. I also expect a revival of the team-organized
> >> midcycles to replace the second PTG for teams that need or want to meet
> >> more often.
> >>
> > The PTG seems to allow greater coordination between groups. I worry that
> > going back to an optional mid-cycle would reduce this cross-collaboration,
> > while also reducing project face-to-face time.
>
>
> I can't speak for the Foundation, but I would think it would be good to
> have an official PTG in the middle of the cycle (perhaps neatly aligned
> with some kind of milestone/event) that lets people discuss plans for
> finishing off the release, and early work they want to get started on for
> the subsequent release). The problem with team-organised midcycles (as I'm
> sure everyone remembers), is that there's little/no opportunity for
> cross-project work.
>
> --
> Cheers,
>
> Chris
This was one of my concerns initially too. We may have to see how things go and
course correct once we have a little more data to go on. But the thought (or at
least the hope) was that we could get by with using the one PTG early in the
cycle to get alignment, then though IRC, the mailing list, and the Forums (keep
in mind there will be two Forums within the cycle) we would be able to keep
things going and discuss any cross project concerns.

This may actually put more emphasis on developers attending the Forum. I think
that is one part of our PTG/Design Summit split that has not fully settled the
way we had hoped. The Forum is still encouraged for developers to attend. But I
think the reality has been many companies now just see the Summit as a
marketing event and see no reason to send any developers.

I can say from the last couple Forum experiences, a lot of really good
discussions have happened there. It's really been unfortunate that there were a
lot of key people missing from some of those discussions though. Personally, my
hope with making this change would mean that the likelihood of devs being able
to justify going to the Forum increases.

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] Removal of CloudWatch api

2017-10-04 Thread Tim Bell

Rabi,

I’d suggest to review the proposal with the openstack-operators list who would 
be able to advise on potential impact for their end users.

Tim

From: Rabi Mishra 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 4 October 2017 at 12:50
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [heat] Removal of CloudWatch api

Hi All,

As discussed in the last meeting, here is the ML thead to gather more feedback 
on this.

Background:

Heat support for AWS CloudWatch compatible API (a very minimalistic 
implementation, primarily used for metric data collection for autoscaling, 
before the telemetry services in OpenStack), has been deprecated since Havana 
cycle (may be before that?).  We now have a global alias[1] for 
AWS::CloudWatch::Alarm to use OS::Aodh::Alarm instead.  However, the ability to 
push metrics to ceilometer via heat, using a pre-signed url for CloudWatch api 
endpoint, is still supported for backward compatibility. 
heat-cfntools/cfn-push-stats tool is mainly used from the instances/vms for 
this.

What we plan to do?

We think that the CloudWatch API and related code base have been in the heat tree 
without any change for the sole reason above and possibly it's time to remove 
them completely. However, we may not have an alternate way to continue 
providing backward compatibility to users.

What would be the impact?

- Users using AWS::CloudWatch::Alarm and pushing metric data from instances 
using cfn-push-stats would not be able to do so. Templates with these would not 
work any more.

- AWS::ElasticLoadBalancing::LoadBalancer[2] resource which uses 
AWS::CloudWatch::Alarm and cfn-push-stats would not work anymore. We probably 
have to remove this resource too?

Though it seems like a big change, the general opinion is that there would not 
be many users still using them and hence very little risk in removing 
CloudWatch support completely this cycle.

If you think otherwise please let us know:)


[1] 
https://git.openstack.org/cgit/openstack/heat/tree/etc/heat/environment.d/default.yaml#n6
[2] 
https://git.openstack.org/cgit/openstack/heat/tree/heat/engine/resources/aws/lb/loadbalancer.py#n640

Regards,
Rabi Mishra
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][docs][release] Updating the PTI for docs and tarballs

2017-09-30 Thread Tim Bell
Having a PDF (or similar offline copy) was also requested at the OpenStack UK 
Days event, during the executive Q&A with jbryce.

Tim

-Original Message-
From: Doug Hellmann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Saturday, 30 September 2017 at 17:44
To: openstack-dev 
Subject: Re: [openstack-dev] [tc][docs][release] Updating the PTI for docs  
and tarballs

Excerpts from Monty Taylor's message of 2017-09-30 10:20:08 -0500:
> Hey everybody,
> 
> Oh goodie, I can hear you say, let's definitely spend some time 
> bikeshedding about specific command invocations related to building docs 
> and tarballs!!!
> 
> tl;dr I want to change the PTI for docs and tarball building to be less 
> OpenStack-specific
> 
> The Problem
> ===
> 
> As we work on Zuul v3, there are a set of job definitions that are 
> rather fundamental that can totally be directly shared between Zuul 
> installations whether those Zuuls are working with OpenStack content or 
> not. As an example "tox -epy27" is a fairly standard thing, so a Zuul 
> job called "tox-py27" has no qualities specific to OpenStack and could 
> realistically be used by anyone who uses tox to manage their project.
> 
> Docs and Tarballs builds for us, however, are the following:
> 
> tox -evenv -- python setup.py sdist
> tox -evenv -- python setup.py build_sphinx
> 
> Neither of those are things that are likely to work outside of 
> OpenStack. (The 'venv' tox environment is not a default tox thing)
> 
> I'm going to argue neither of them are actually providing us with much 
> value.
> 
> Tarball Creation
> 
> 
> Tarball creation is super simple. setup_requires is already handled out 
> of band of everything else. Go clone nova into a completely empty system 
> and run python setup.py sdist ... and it works. (actually, nova is big. 
> use something smaller like gertty ...)
> 
> docker run -it --rm python bash -c 'git clone \
>   https://git.openstack.org/openstack/gertty && cd gertty \
>   && python setup.py sdist'
> 
> There is not much value in that tox wrapper - and it's not like it's 
> making it EASIER to run the command. In fact, it's more typing.
> 
> I propose we change the PTI from:
> 
>tox -evenv python setup.py sdist
> 
> to:
> 
>python setup.py sdist
> 
> and then change the gate jobs to use the non-tox form of the command.
> 
> I'd also like to further change it to be explicit that we also build 
> wheels. So the ACTUAL commands that the project should support are:
> 
>python setup.py sdist
>python setup.py bdist_wheel
> 
> All of our projects support this already, so this should be a no-op.
> 
> Notes:
> 
> * Python projects that need to build C extensions might need their pip 
> requirements (and bindep requirements) installed in order to run 
> bdist_wheel. We do not support that broadly at the moment ANYWAY - so 
> I'd like to leave that as an outlier and handle it when we need to 
> handle it.
> 
> * It's *possible* that somewhere we have a repo that has somehow done 
> something that would cause python setup.py sdist or python setup.py 
> bdist_wheel to not work without pip requirements installed. I believe we 
> should consider that a bug and fix it in the project if we find such a 
> thing - but since we use pbr in all of the OpenStack projects, I find it 
> extremely unlikely.
> 
> Governance patch submitted: https://review.openstack.org/508693
> 
> Sphinx Documentation
> 
> 
> Doc builds are more complex - but I think there is a high amount of 
> value in changing how we invoke them for a few reasons.
> 
> a) nobody uses 'tox -evenv -- python setup.py build_sphinx' but us
> b) we decided to use sphinx for go and javascript - but we invoke sphinx 
> differently for each of those (since they naturally don't have tox), 
> meaning we can't just have a "build-sphinx-docs" job and even share it 
> with ourselves.
> c) readthedocs.org is an excellent Open Source site that builds and 
> hosts sphinx docs for projects. They have an interface for docs 
> requirements documented and defined that we can align. By aligning, 
> projects can use migrate between docs.o.o and readthedocs.org and still 
> have a consistent experience.
> 
> The PTI I'd like to propose for this is more complex, so I'd like to 
> describe it in terms of:
> 
> - OpenStack organizational requirements
> - helper sugar for developers with per-language recommendations
> 

Re: [openstack-dev] [tc][masakari] new project teams application for Masakari

2017-09-01 Thread Tim Bell
Great to see efforts for this use case.

Is there community convergence that Masakari is the solution for VM high 
availability?

Tim

-Original Message-
From: Sam P 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, 1 September 2017 at 19:27
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [tc][masakari] new project teams application for   
Masakari

Hi All,

I have just proposed the inclusion of Masakari[1] (Instances High Availability
Service) into the list of official OpenStack projects in [2]. Regarding this
proposal, I would like to ask the OpenStack community what else can be improved
in the project to meet all the necessary requirements.

I would also like to use this thread to extend the discussion about the
Masakari project. It would be great if you could post your comments/questions
in [2] or in this thread. I would be happy to discuss and answer your questions.

I will be at the PTG in Denver from 9/12 (Tuesday) to 9/14 (Thursday). Other
Masakari team members will also be there. We are happy to discuss anything
regarding Masakari at the PTG.
Please contact us via freenode IRC @ #openstack-masakari, or on the
openstack-dev ML with the prefix [masakari].

Thank you.

[1] https://wiki.openstack.org/wiki/Masakari
[2] https://review.openstack.org/#/c/500118/

--- Regards,
Sampath

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][tc][all] Trove restart - next steps

2017-08-16 Thread Tim Bell

Thanks for the info.

Can you give a summary of the reasons why this was not a viable approach?

Tim

From: Amrith Kumar <amrith.ku...@gmail.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Tuesday, 15 August 2017 at 23:09
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [trove][tc][all] Trove restart - next steps

Tim,
This is an idea that was discussed at a trove midcycle a long time back (Juno 
midcycle, 2014). It came up briefly in the Kilo midcycle as well but was 
quickly rejected again.
I've added it to the list of topics for discussion at the PTG. If others want 
to add topics to that list, the etherpad is at 
https://etherpad.openstack.org/p/trove-queens-ptg​

Thanks!

-amrith


On Tue, Aug 15, 2017 at 12:43 PM, Tim Bell 
<tim.b...@cern.ch<mailto:tim.b...@cern.ch>> wrote:
One idea I found interesting from the past discussion was the approach that 
what the user needs is a database with a connection string.

How feasible is the approach where we are provisioning access to a multi-tenant 
database infrastructure rather than deploying a VM with storage and installing 
a database?

This would make service delivery (monitoring, backup, upgrades) the 
responsibility of the cloud provider rather than the end user. Some 
quota/telemetry would be needed to allocate costs to the project.

Tim

From: Amrith Kumar <amrith.ku...@gmail.com<mailto:amrith.ku...@gmail.com>>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Date: Tuesday, 15 August 2017 at 17:44
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>>
Subject: [openstack-dev] [trove][tc][all] Trove restart - next steps

Now that we have successfully navigated the Pike release and branched
the tree, I would like to restart the conversation about how to revive
and restart the Trove project.

Feedback from the last go around on this subject[1] resulted in a
lively discussion which I summarized in [2]. The very quick summary is
this, there is interest in Trove, there is a strong desire to maintain
a migration path, there is much that remains to be done to get there.

What didn't come out of the email discussion was any concrete and
tangible uptick in the participation in the project, promises
notwithstanding.

There have however been some new contributors who have been submitting
patches and to help channel their efforts, and any additional
assistance that we may receive, I have created the (below) list of
priorities for the project. These will also be the subject of
discussion at the PTG in Denver.

   - Fix the gate

   - Update currently failing jobs, create xenial based images
   - Fix gate jobs that have gone stale (non-voting, no one paying
 attention)

   - Bug triage

   - Bugs in launchpad are really out of date, assignments to
 people who are no longer active, bugs that are really support
 requests, etc.,
   - Prioritize fixes for Queens and beyond

   - Get more active reviewers

   - There seems to still be a belief that 'contributing' means
 'fixing bugs'. There is much more value in actually doing
 reviews.
   - Get at least a three member active core review team by the
 end of the year.

   - Complete Python 3 support

  - Currently not complete; especially on the guest side

   - Community Goal, migrate to oslo.policy

   - Anything related to new features

This is clearly an opinionated list, and is open to change but I'd
like to do that based on the Agile 'stand up' meeting rules. You know, the 
chicken and pigs thing :)

So, if you'd like to get on board, offer suggestions to change this
list, and then go on to actually implement those changes, c'mon over.
-amrith



[1] http://openstack.markmail.org/thread/wokk73ecv44ipfjz
[2] http://markmail.org/message/gfqext34xh5y37ir

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][tc][all] Trove restart - next steps

2017-08-15 Thread Tim Bell
One idea I found interesting from the past discussion was the approach that 
what the user needs is a database with a connection string.

How feasible is the approach where we are provisioning access to a multi-tenant 
database infrastructure rather than deploying a VM with storage and installing 
a database?

This would make service delivery (monitoring, backup, upgrades) the 
responsibility of the cloud provider rather than the end user. Some 
quota/telemetry would be needed to allocate costs to the project.
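
Purely as a hypothetical sketch of that user experience (none of these
endpoints or fields exist today): the user asks for a database and only
ever sees a connection string, while provisioning, backups and monitoring
stay on the provider side.

    import requests

    def create_database(endpoint, token, name):
        # Hypothetical API: provision a schema on shared, provider-managed
        # database infrastructure instead of booting a per-tenant VM.
        resp = requests.post(endpoint + '/v1/databases',
                             headers={'X-Auth-Token': token},
                             json={'database': {'name': name,
                                                'datastore': 'mysql'}})
        resp.raise_for_status()
        # The only thing the user needs back is the connection string.
        return resp.json()['database']['connection_string']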

Tim

From: Amrith Kumar 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, 15 August 2017 at 17:44
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] [trove][tc][all] Trove restart - next steps

Now that we have successfully navigated the Pike release and branched
the tree, I would like to restart the conversation about how to revive
and restart the Trove project.

Feedback from the last go around on this subject[1] resulted in a
lively discussion which I summarized in [2]. The very quick summary is
this, there is interest in Trove, there is a strong desire to maintain
a migration path, there is much that remains to be done to get there.

What didn't come out of the email discussion was any concrete and
tangible uptick in the participation in the project, promises
notwithstanding.

There have however been some new contributors who have been submitting
patches and to help channel their efforts, and any additional
assistance that we may receive, I have created the (below) list of
priorities for the project. These will also be the subject of
discussion at the PTG in Denver.

   - Fix the gate

   - Update currently failing jobs, create xenial based images
   - Fix gate jobs that have gone stale (non-voting, no one paying
 attention)

   - Bug triage

   - Bugs in launchpad are really out of date, assignments to
 people who are no longer active, bugs that are really support
 requests, etc.,
   - Prioritize fixes for Queens and beyond

   - Get more active reviewers

   - There seems to still be a belief that 'contributing' means
 'fixing bugs'. There is much more value in actually doing
 reviews.
   - Get at least a three member active core review team by the
 end of the year.

   - Complete Python 3 support

  - Currently not complete; especially on the guest side

   - Community Goal, migrate to oslo.policy

   - Anything related to new features

This is clearly an opinionated list, and is open to change but I'd
like to do that based on the Agile 'stand up' meeting rules. You know, the 
chicken and pigs thing :)

So, if you'd like to get on board, offer suggestions to change this
list, and then go on to actually implement those changes, c'mon over.
-amrith



[1] http://openstack.markmail.org/thread/wokk73ecv44ipfjz
[2] http://markmail.org/message/gfqext34xh5y37ir

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA][LCOO] MEX-ops-meetup: OpenStack Extreme Testing

2017-08-14 Thread Tim Bell
+1 for Boris’ suggestion. Many of us use Rally to probe our clouds and have 
significant tooling behind it to integrate with local availability reporting 
and trouble ticketing systems. It would be much easier to deploy new 
functionality such as you propose if it was integrated into an existing project 
framework (such as Rally).

Tim

From: Boris Pavlovic 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, 14 August 2017 at 12:57
To: "OpenStack Development Mailing List (not for usage questions)" 

Cc: openstack-operators 
Subject: Re: [openstack-dev] [QA][LCOO] MEX-ops-meetup: OpenStack Extreme 
Testing

Sam,

Seems like a good plan and huge topic ;)

I would also suggest taking a look at similar efforts in OpenStack:
- Failure injection: https://github.com/openstack/os-faults
- Rally Hooks Mechanism (to inject failures into Rally scenarios): 
https://rally.readthedocs.io/en/latest/plugins/implementation/hook_and_trigger_plugins.html
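
For example, a minimal os-faults sketch following the connect/get_service
pattern in its documentation (the config file contents are deployment
specific, and the exact driver setup is an assumption here):

    import os_faults

    # 'os-faults.yaml' describes the cloud: control plane nodes, drivers, etc.
    cloud = os_faults.connect(config_filename='os-faults.yaml')
    cloud.verify()

    # Inject a failure: restart (or kill) a control plane service.
    keystone = cloud.get_service(name='keystone')
    keystone.restart()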


Best regards,
Boris Pavlovic


On Mon, Aug 14, 2017 at 2:35 AM, Sam P 
> wrote:
Hi All,

This is a follow up for OpenStack Extreme Testing session[1]
we did in MEX-ops-meetup.

Quick intro for those who were not there:
In this work, we proposed adding a new testing framework for OpenStack.
This framework will provide tools for creating tests with destructive
scenarios which will check the High Availability, failover and
recovery of an OpenStack cloud.
Please refer to the link at the top of [1] for further details.

Follow up:
We are planning a periodic IRC meeting and an IRC channel for discussion.
I will get back to you with those details soon.

At that session, we did not have time to discuss the last 3 items:

Reference architectures
 We are discussing the reference architecture in [2].

What sort of failures do you see today in your environment?
 Currently we are considering service failures, backend service (MQ, DB,
 etc.) failures, network sw failures, etc. To begin the implementation,
 we are considering starting with service failures. Please let us know
 which failures are most frequent in your environment.

Emulation/Simulation mechanisms, etc.
 Rather than doing actual scale, load, or performance tests, we are
 thinking of building an emulation/simulation mechanism to predict how
 OpenStack will behave in such situations. This interesting idea was
 proposed by Gautam and needs more discussion.

Please let us know your questions or comments.

Request to Mike Perez:
 We discussed synergies with OpenStack assertion tags and other efforts to
do similar testing in OpenStack. Could you please give some info or
pointers to previous discussions?

[1] https://etherpad.openstack.org/p/MEX-ops-extreme-testing
[2] 
https://openstack-lcoo.atlassian.net/wiki/spaces/LCOO/pages/15477787/Extreme+Testing-Vision+Arch

--- Regards,
Sampath

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] How to deal with confusion around "hosted projects"

2017-06-29 Thread Tim Bell

> On 29 Jun 2017, at 17:35, Chris Friesen  wrote:
> 
> On 06/29/2017 09:23 AM, Monty Taylor wrote:
> 
>> We are already WELL past where we can solve the problem you are describing.
>> Pandora's box has been opened - we have defined ourselves as an Open 
>> community.
>> Our only requirement to be official is that you behave as one of us. There is
>> nothing stopping those machine learning projects from becoming official. If 
>> they
>> did become official but were still bad software - what would we have solved?
>> 
>> We have a long-time official project that currently has staffing problems. If
>> someone Googles for OpenStack DBaaS and finds Trove and then looks to see 
>> that
>> the contribution rate has fallen off recently they could get the impression 
>> that
>> OpenStack is a bunch of dead crap.
>> 
>> Inclusion as an Official Project in OpenStack is not an indication that 
>> anyone
>> thinks the project is good quality. That's a decision we actively made. This 
>> is
>> the result.
> 
> I wonder if it would be useful to have a separate orthogonal status as to 
> "level of stability/usefulness/maturity/quality" to help newcomers weed out 
> projects that are under TC governance but are not ready for prime time.
> 

There is certainly a concern in the operator community as to how viable/useful 
a project is (and how to determine this). Adopting too early makes for a very 
difficult discussion with cloud users who rely on the function. 

Can an ‘official’ project be deprecated? The economics say yes. The consumer 
confidence impact would be substantial.

However, home-grown solutions where there is common interest imply technical 
debt in the long term.

Tim

> Chris
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-15 Thread Tim Bell
And since Electrons are neither waves nor particles, it is difficult to pin them 
down :-)

https://en.wikipedia.org/wiki/Wave%E2%80%93particle_duality

Tim

-Original Message-
From: Sean McGinnis <sean.mcgin...@gmx.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Date: Thursday, 15 June 2017 at 18:36
To: "OpenStack Development Mailing List (not for usage questions)" 
<openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [all][tc] Moving away from "big tent"  
terminology

    On Thu, Jun 15, 2017 at 03:41:30PM +, Tim Bell wrote:
> OpenStack Nucleus and OpenStack Electrons?
> 
> Tim
> 

Hah, love it!


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-15 Thread Tim Bell
OpenStack Nucleus and OpenStack Electrons?

Tim

-Original Message-
From: Thierry Carrez 
Organization: OpenStack
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, 15 June 2017 at 14:57
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [all][tc] Moving away from "big tent"  
terminology

Sean Dague wrote:
> [...]
> I think those are all fine. The other term that popped into my head was
> "Friends of OpenStack" as a way to describe the openstack-hosted efforts
> that aren't official projects. It may be too informal, but I do think
> the OpenStack-Hosted vs. OpenStack might still mix up in people's head.

My original thinking was to call them "hosted projects" or "host
projects", but then it felt a bit incomplete. I kinda like the "Friends
of OpenStack" name, although it seems to imply some kind of vetting that
we don't actually do.

An alternative would be to give "the OpenStack project infrastructure"
some kind of a brand name (say, "Opium", for OpenStack project
infrastructure ultimate madness) and then call the hosted projects
"Opium projects". Rename the Infra team to Opium team, and voilà!

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] revised Postgresql deprecation patch for governance

2017-05-23 Thread Tim Bell
Thanks. It’s more of a question of not leaving people high and dry when they 
have made a reasonable choice in the past based on the choices supported at the 
time.

Tim

On 23.05.17, 21:14, "Sean Dague" <s...@dague.net> wrote:

On 05/23/2017 02:35 PM, Tim Bell wrote:
> Is there a proposal where deployments who chose Postgres on good faith 
can find migration path to a MySQL based solution?

Yes, a migration tool exploration is action #2 in the current proposal.

Also, to be clear, we're not at the stage of removing anything at this
point. We're mostly just signaling to people where the nice paved road
is, and where the gravel road is. It's like the signs in the spring
 on the road where frost heaves are (at least in the North East US).

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] revised Postgresql deprecation patch for governance

2017-05-23 Thread Tim Bell
Is there a proposal where deployments which chose Postgres in good faith can 
find a migration path to a MySQL-based solution?

Tim

On 23.05.17, 18:35, "Octave J. Orgeron"  wrote:

As OpenStack has evolved and grown, we are ending up with more and more 
MySQL-isms in the code. I'd love to see OpenStack support every database 
out there, but that is becoming more and more difficult. I've tried to 
get OpenStack to work with other databases like Oracle DB, MongoDB, 
TimesTen, NoSQL, and I can tell you that first hand it's not doable 
without making some significant changes. Some services would be easy to 
make more database agnostic, but most would require a lot of reworking. 
I think the pragmatic thing is to do is focus on supporting the MySQL 
dialect with the different engines and clustering technologies that have 
emerged. oslo_db is a great abstraction layer.  We should continue to 
build upon that and make sure that every OpenStack service uses it 
end-to-end. I've already seen plenty of cases where services like 
Barbican and Murano are not using it. I've also seen plenty of use cases 
where core services are using the older methods of connecting to the 
database and re-inventing the wheel to deal with things like retries. 
The more we use oslo_db and make sure that people are consistent with 
its use and best practices, the better off we'll be in the long run.
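
For what it's worth, a minimal sketch of the oslo_db enginefacade pattern
referred to above (the Instance model and RequestContext class are just
examples, not any particular service's schema):

    import sqlalchemy as sa
    from sqlalchemy.ext.declarative import declarative_base
    from oslo_db.sqlalchemy import enginefacade

    Base = declarative_base()

    class Instance(Base):
        __tablename__ = 'instances'
        id = sa.Column(sa.String(36), primary_key=True)
        state = sa.Column(sa.String(16))

    @enginefacade.transaction_context_provider
    class RequestContext(object):
        """Whatever object is passed as the first argument below."""

    @enginefacade.reader
    def get_instance(context, instance_id):
        return context.session.query(Instance).filter_by(
            id=instance_id).first()

    @enginefacade.writer
    def set_instance_state(context, instance_id, state):
        context.session.query(Instance).filter_by(
            id=instance_id).update({'state': state})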

On the topic of doing live upgrades. I think it's a "nice to have" 
feature, but again we need a consistent framework that all services will 
follow. It's already complicated enough with how different services deal 
with parallelism and locking. So if we are going to go down this path 
across even the core services, we need to have a solid solution and 
framework. Otherwise, we'll end up with a hodgepodge of maturity levels 
between services. The expectation from operators is that if you say you 
can do live upgrades, they will expect that to be the case across all of 
OpenStack and not a buffet style feature. We would also have to take 
into consideration larger shops that have more distributed and 
scaled-out control planes. So we need be careful on this as it will have 
a wide impact on development, testing, and operating.

Octave


On 5/23/2017 6:00 AM, Sean Dague wrote:
> On 05/22/2017 11:26 PM, Matt Riedemann wrote:
>> On 5/22/2017 10:58 AM, Sean Dague wrote:
>>> I think these are actually compatible concerns. The current proposal to
>>> me actually tries to address A1 & B1, with a hint about why A2 is
>>> valuable and we would want to do that.
>>>
>>> It feels like there would be a valuable follow on in which A2 & B2 were
>>> addressed which is basically "progressive enhancements can be allowed to
>>> only work with MySQL based backends". Which is the bit that Monty has
>>> been pushing for in other threads.
>>>
>>> This feels like what a Tier 2 support looks like. A basic SQLA and pray
>>> so that if you live behind SQLA you are probably fine (though not
>>> tested), and then test and advanced feature roll out on a single
>>> platform. Any of that work might port to other platforms over time, but
>>> we don't want to make that table stakes for enhancements.
>> I think this is reasonable and is what I've been hoping for as a result
>> of the feedback on this.
>>
>> I think it's totally fine to say tier 1 backends get shiny new features.
>> I mean, hell, compare the libvirt driver in nova to all other virt
>> drivers in nova. New features are written for the libvirt driver and we
>> have to strong-arm them into other drivers for a compatibility story.
>>
>> I think we should turn on postgresql as a backend in one of the CI jobs,
>> as I've noted in the governance change - it could be the nova-next
>> non-voting job which only runs on nova, but we should have something
>> testing this as long as it's around, especially given how easy it is to
>> turn this on in upstream CI (it's flipping a devstack variable).
> Postgresql support shouldn't be in devstack. If we're taking a tier 2
> approach, someone needs to carve out database plugins from devstack and
> pg would be one (as could be galera, etc).
>
> This historical artifact that pg was maintained in devstack, but much
> more widely used backends were not, is part of the issue.
>
> It would also be a good unit test case as to whether there are pg
> focused folks around out there willing to do this basic devstack plugin
> / job setup work.
>
>   -Sean
>


Re: [openstack-dev] [neutron] multi-site forum discussion

2017-05-14 Thread Tim Bell

On 12 May 2017, at 23:38, joehuang wrote:

Hello,

Making Neutron cells-aware is not the same as multi-site. There are lots of multi-site 
deployment options, not limited to nova-cells; whether to use Neutron cells/Nova cells 
in multi-site deployments is up to the cloud operator. For the bug [3], it is 
reasonable to make neutron support cells, but it doesn't imply that multi-site 
deployments must adopt neutron cells.

[3] https://bugs.launchpad.net/neutron/+bug/1690425


There are also a number of site limited deployments which use nova cells to 
support scalability within the site rather than only between sites. CERN has 
around 50 cells for the 2 data centre deployment we have.

There is also no need to guarantee a 1-to-1 mapping between nova cells and 
neutron cells. It may be simpler to do it that way but something based on the 
ML2 subnet would also seem a reasonable way to organise the neutron work while 
many sites use nova cells based on one hardware type per cell, for example.

Tim
Best Regards
Chaoyi Huang (joehuang)

From: Armando M. [arma...@gmail.com]
Sent: 13 May 2017 3:13
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] multi-site forum discussion



On 12 May 2017 at 11:47, Morales, Victor wrote:
Armando,

I noticed that Tricircle is mentioned there. Wouldn't it be better to extend its 
current functionality, or what is missing there?

Tricircle aims at coordinating independent neutron systems that exist in 
separated openstack deployments. Making Neutron cell-aware will work in the 
context of the same openstack deployment.


Regards,
Victor Morales

From: "Armando M." >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Friday, May 12, 2017 at 1:06 PM
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: [openstack-dev] [neutron] multi-site forum discussion

Hi folks,

At the summit we had a discussion on how to deploy a single neutron system 
across multiple geographical sites [1]. You can find notes of the discussion on 
[2].

One key requirement that came from the discussion was to make Neutron more Nova 
cells friendly. I filed an RFE bug [3] so that we can move this forward on 
Launchpad.

Please, do provide feedback in case I omitted some other key takeaway.

Thanks,
Armando

[1] 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18757/neutron-multi-site
[2] https://etherpad.openstack.org/p/pike-neutron-multi-site
[3] https://bugs.launchpad.net/neutron/+bug/1690425

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-14 Thread Tim Bell

> On 14 May 2017, at 13:04, Sean Dague  wrote:
> 
> One of the things that came up in a logging Forum session is how much effort 
> operators are having to put into reconstructing flows for things like server 
> boot when they go wrong, as every time we jump a service barrier the 
> request-id is reset to something new. The back and forth between Nova / 
> Neutron and Nova / Glance would be definitely well served by this. Especially 
> if this is something that's easy to query in elastic search.
> 
> The last time this came up, some people were concerned that trusting 
> request-id on the wire was concerning to them because it's coming from random 
> users. We're going to assume that's still a concern by some. However, since 
> the last time that came up, we've introduced the concept of "service users", 
> which are a set of higher priv services that we are using to wrap user 
> requests between services so that long running request chains (like image 
> snapshot). We trust these service users enough to keep on trucking even after 
> the user token has expired for this long run operations. We could use this 
> same trust path for request-id chaining.
> 
> So, the basic idea is, services will optionally take an inbound 
> X-OpenStack-Request-ID which will be strongly validated to the format 
> (req-$uuid). They will continue to always generate one as well. When the 
> context is built (which is typically about 3 more steps down the paste 
> pipeline), we'll check that the service user was involved, and if not, reset 
> the request_id to the local generated one. We'll log both the global and 
> local request ids. All of these changes happen in oslo.middleware, 
> oslo.context, oslo.log, and most projects won't need anything to get this 
> infrastructure.
> 
> The python clients, and callers, will then need to be augmented to pass the 
> request-id in on requests. Servers will effectively decide when they want to 
> opt into calling other services this way.
> 
> This only ends up logging the top line global request id as well as the last 
> leaf for each call. This does mean that full tree construction will take more 
> work if you are bouncing through 3 or more servers, but it's a step which I 
> think can be completed this cycle.
> 
> I've got some more detailed notes, but before going through the process of 
> putting this into an oslo spec I wanted more general feedback on it so that 
> any objections we didn't think about yet can be raised before going through 
> the detailed design.

This is very consistent with what I had understood during the forum session. 
Having a single request id across multiple services as the end user operation 
is performed would be a great help in operations, where we are often using a 
solution like ElasticSearch/Kibana to show logs and interactively query the 
timing and results of a given request id. It would also improve traceability 
during investigations where we are aiming to determine who the initial 
requesting user was.

Tim
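
(For illustration only, a rough sketch of the validate-and-fall-back behaviour
described in the proposal above, assuming the req-$uuid format; this is not the
actual oslo.middleware code.)

    import re
    import uuid

    _REQ_ID_RE = re.compile(r'^req-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-'
                            r'[0-9a-f]{4}-[0-9a-f]{12}$')

    def valid_global_request_id(value):
        # True only if value looks like 'req-<uuid>'.
        return bool(value) and _REQ_ID_RE.match(value) is not None

    def pick_request_ids(inbound_id, caller_is_service_user):
        # Keep the inbound id only for trusted service users; otherwise
        # fall back to the locally generated one. Both would be logged.
        local_id = 'req-' + str(uuid.uuid4())
        if caller_is_service_user and valid_global_request_id(inbound_id):
            return inbound_id, local_id
        return local_id, local_id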

> 
>   -Sean
> 
> -- 
> Sean Dague
> http://dague.net
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [scientific][nova][cyborg] Special Hardware Forum session

2017-04-25 Thread Tim Bell
I think there will be quite a few ops folk… I can promise at least one ☺

Blair and I can also do a little publicity in 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18751/future-of-hypervisor-performance-tuning-and-benchmarking
 which is on Tuesday.

Tim

From: Rochelle Grober 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday 25 April 2017 19:11
To: Blair Bethwaite , 
"openstack-dev@lists.openstack.org" , 
openstack-operators 
Cc: Matthew Riedemann , huangzhipeng 

Subject: Re: [openstack-dev] [scientific][nova][cyborg] Special Hardware Forum 
session


I know that some cyborg folks and nova folks are planning to be there. Now we 
need to drive some ops folks.


Sent from HUAWEI AnyOffice
From: Blair Bethwaite
To: openstack-dev@lists.openstack.org, openstack-oper.
Date: 2017-04-25 08:24:34
Subject: [openstack-dev] [scientific][nova][cyborg] Special Hardware Forum 
session

Hi all,

A quick FYI that this Forum session exists:
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18803/special-hardware
(etherpad: https://etherpad.openstack.org/p/BOS-forum-special-hardware).

It would be great to see a good representation from both the Nova and
Cyborg dev teams, and also ops ready to share their experience and
use-cases.

--
Cheers,
~Blairo

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][sfc][fwaas][taas][horizon] where would we like to have horizon dashboard for neutron stadium projects?

2017-04-11 Thread Tim Bell
Are there any implications for the end user experience by going to different 
repos (such as requiring dedicated menu items)?

Tim

From: "Sridar Kandaswamy (skandasw)" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Tuesday, 11 April 2017 at 17:01
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [neutron][sfc][fwaas][taas][horizon] where would 
we like to have horizon dashboard for neutron stadium projects?

Hi All:

From an FWaaS perspective – we also think (a) would be ideal.

Thanks

Sridar

From: Kevin Benton
Reply-To: OpenStack List
Date: Monday, April 10, 2017 at 4:20 PM
To: OpenStack List
Subject: Re: [openstack-dev] [neutron][sfc][fwaas][taas][horizon] where would 
we like to have horizon dashboard for neutron stadium projects?

I think 'a' is probably the way to go since we can mainly rely on existing 
horizon guides for creating new dashboard repos.

On Apr 10, 2017 08:11, "Akihiro Motoki" wrote:
Hi neutrinos (and horizoners),

As the title says, where would we like to have horizon dashboard for
neutron stadium projects?
There are several projects under neutron stadium and they are trying
to add dashboard support.

I would like to raise this topic again. No dashboard support has landed since then.
Also Horizon team would like to move in-tree neutron stadium dashboard
(VPNaaS and FWaaS v1 dashboard) to outside of horizon repo.

Possible approaches


Several possible options in my mind:
(a) dashboard repository per project
(b) dashboard code in individual project
(c) a single dashboard repository for all neutron stadium projects

Which one sounds better?

Pros and Cons


(a) dashboard repository per project
  example, networking-sfc-dashboard repository for networking-sfc
  Pros
   - Can use existing horizon related project convention and knowledge
 (directory structure, testing, translation support)
   - Not related to the neutron stadium inclusion. Each project can
provide its dashboard
 support regardless of neutron stadium inclusion.
 Cons
   - An additional repository is needed.

(b) dashboard code in individual project
  example, dashboard module for networking-sfc
  Pros:
   - No additional repository
   - Not related to the neutron stadium inclusion. Each project can
provide its dashboard
 support regardless of neutron stadium inclusion.
 Cons:
   - Requires extra efforts to support neutron and horizon codes in a
single repository
 for testing and translation supports. Each project needs to
explore the way.

(c) a single dashboard repository for all neutron stadium projects
   (something like neutron-advanced-dashboard)
  Pros:
- No additional repository per project
  Each project does not need a basic setup for the dashboard, which
possibly makes things simpler.
  Cons:
- Inclusion criteria depending on the neutron stadium inclusion/exclusion
  (Similar discussion happens as for neutronclient OSC plugin)
  Projects not yet included in the neutron stadium may need another implementation.


My vote is (a) or (c) (to avoid mixing neutron and dashboard codes in a repo).

Note that dashboard support for features in the main neutron repository
is implemented in the horizon repository, as we discussed several months ago.
As an example, trunk support is being developed in the horizon repo.

Thanks,
Akihiro

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Project Navigator Updates - Feedback Request

2017-03-24 Thread Tim Bell
Lauren,

Can we also update the sample configurations? We should certainly have Neutron 
now in the HTC (since nova-network deprecation)

Tim

From: Lauren Sell 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, 24 March 2017 at 17:57
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: [openstack-dev] Project Navigator Updates - Feedback Request

Hi everyone,

We’ve been talking for some time about updating the project navigator, and we 
have a draft ready to share for community feedback before we launch and 
publicize it. One of the big goals coming out of the joint TC/UC/Board meeting 
a few weeks ago[1] was to help better communicate ‘what is openstack?’ and this 
is one step in that direction.

A few goals in mind for the redesign:
- Represent all official, user-facing projects and deployment services in the 
navigator
- Better categorize the projects by function in a way that makes sense to 
prospective users (this may evolve over time as we work on mapping the 
OpenStack landscape)
- Help users understand which projects are mature and stable vs emerging
- Highlight popular project sets and sample configurations based on different 
use cases to help users get started

For a bit of context, we’re working to give each OpenStack official project a 
stronger platform as we think of OpenStack as a framework of composable 
infrastructure services that can be used individually or together as a powerful 
system. This includes the project mascots (so we in effect have logos to 
promote each component separately), updates to the project navigator, and 
bringing back the “project updates” track at the Summit to give each PTL/core 
team a chance to provide an update on their project roadmap (to be recorded and 
promoted in the project navigator among other places!).

We want your feedback on the project navigator v2 before it launches. Please 
take a look at the current version on the staging site and provide feedback on 
this thread.

http://devbranch.openstack.org/software/project-navigator/

Please review the overall concept and the data and description for your project 
specifically. The data is primarily pulled from TC tags[2] and Ops tags[3]. 
You’ll notice some projects have more information available than others for 
various reasons. That’s one reason we decided to downplay the maturity metric 
for now and the data on some pages is hidden. If you think your project is 
missing data, please check out the repositories and submit changes or again 
respond to this thread.

Also know this will continue to evolve and we are open to feedback. As I 
mentioned, a team that formed at the joint strategy session a few weeks ago is 
tackling how we map OpenStack projects, which may be reflected in the 
categories. And I suspect we’ll continue to build out additional tags and 
better data sources to be incorporated.

Thanks for your feedback and help.

Best,
Lauren

[1] 
http://superuser.openstack.org/articles/community-leadership-charts-course-openstack/
[2] https://governance.openstack.org/tc/reference/tags/
[3] https://wiki.openstack.org/wiki/Operations/Tags

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-22 Thread Tim Bell

> On 22 Mar 2017, at 00:53, Alex Schultz  wrote:
> 
> On Tue, Mar 21, 2017 at 5:35 PM, John Dickinson  wrote:
>> 
>> 
>> On 21 Mar 2017, at 15:34, Alex Schultz wrote:
>> 
>>> On Tue, Mar 21, 2017 at 3:45 PM, John Dickinson  wrote:
 I've been following this thread, but I must admit I seem to have missed 
 something.
 
 What problem is being solved by storing per-server service configuration 
 options in an external distributed CP system that is currently not 
 possible with the existing pattern of using local text files?
 
>>> 
>>> This effort is partially to help the path to containerization where we
>>> are delivering the service code via container but don't want to
>>> necessarily deliver the configuration in the same fashion.  It's about
>>> ease of configuration where moving service -> config files (on many
>>> hosts/containers) to service -> config via etcd (single source
>>> cluster).  It's also about an alternative to configuration management
>>> where today we have many tools handling the files in various ways
>>> (templates, from repo, via code providers) and trying to come to a
>>> more unified way of representing the configuration such that the end
>>> result is the same for every deployment tool.  All tools load configs
>>> into $place and services can be configured to talk to $place.  It
>>> should be noted that configuration files won't go away because many of
>>> the companion services still rely on them (rabbit/mysql/apache/etc) so
>>> we're really talking about services that currently use oslo.
>> 
>> Thanks for the explanation!
>> 
>> So in the future, you expect a node in a clustered OpenStack service to be 
>> deployed and run as a container, and then that node queries a centralized 
>> etcd (or other) k/v store to load config options. And other services running 
>> in the (container? cluster?) will load config from local text files managed 
>> in some other way.
> 
> No, the goal is that in etcd mode it may not be necessary to load
> the config files locally at all.  That being said there would still be
> support for having some configuration from a file and optionally
> provide a kv store as another config point.  'service --config-file
> /etc/service/service.conf --config-etcd proto://ip:port/slug'
> 
>> 
>> No wait. It's not the *services* that will load the config from a kv 
>> store--it's the config management system? So in the process of deploying a 
>> new container instance of a particular service, the deployment tool will 
>> pull the right values out of the kv system and inject those into the 
>> container, I'm guessing as a local text file that the service loads as 
>> normal?
>> 
> 
> No the thought is to have the services pull their configs from the kv
> store via oslo.config.  The point is hopefully to not require
> configuration files at all for containers.  The container would get
> where to pull its configs from (i.e. http://11.1.1.1:2730/magic/ or
> /etc/myconfigs/).  At that point it just becomes another place to load
> configurations from via oslo.config.  Configuration management comes
> in as a way to load the configs either as a file or into etcd.  Many
> operators (and deployment tools) are already using some form of
> configuration management so if we can integrate in a kv store output
> option, adoption becomes much easier than making everyone start from
> scratch.
> 
>> This means you could have some (OpenStack?) service for inventory management 
>> (like Karbor) that is seeding the kv store, the cloud infrastructure 
>> software itself is "cloud aware" and queries the central distributed kv 
>> system for the correct-right-now config options, and the cloud service 
>> itself gets all the benefits of dynamic scaling of available hardware 
>> resources. That's pretty cool. Add hardware to the inventory, the cloud 
>> infra itself expands to make it available. Hardware fails, and the cloud 
>> infra resizes to adjust. Apps running on the infra keep doing their thing 
>> consuming the resources. It's clouds all the way down :-)
>> 
>> Despite sounding pretty interesting, it also sounds like a lot of extra 
>> complexity. Maybe it's worth it. I don't know.
>> 
> 
> Yea there's extra complexity at least in the
> deployment/management/monitoring of the new service or maybe not.
> Keeping configuration files synced across 1000s of nodes (or
> containers) can be just as hard however.
> 

Would there be a mechanism to stage configuration changes (such as a 
QA/production environment) or have different configurations for different 
hypervisors?

We have some of our hypervisors set for high performance which needs a slightly 
different nova.conf (such as CPU passthrough).

Tim

>> Thanks again for the explanation.
>> 
>> 
>> --John
>> 
>> 
>> 
>> 
>>> 
>>> Thanks,
>>> -Alex
>>> 
 
 --John
 
 
 
 
 On 21 Mar 2017, at 14:26, Davanum Srinivas wrote:
 
> 
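
(Purely as an illustration of the "pull configuration from a kv store" idea in
this thread: the key layout, endpoint and helper below are invented for the
example and are not the oslo.config driver being discussed.)

    import etcd3

    def load_option(service, group, option, default=None,
                    host='127.0.0.1', port=2379):
        # Fetch one option value from etcd, falling back to a default.
        client = etcd3.client(host=host, port=port)
        key = '/config/{}/{}/{}'.format(service, group, option)
        value, _metadata = client.get(key)
        return value.decode('utf-8') if value is not None else default

    # e.g. debug = load_option('nova', 'DEFAULT', 'debug', default='false')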

Re: [openstack-dev] [keystone][all] Reseller - do we need it?

2017-03-17 Thread Tim Bell
Lance,

I had understood that the reseller work was about having users/groups at 
different points in the tree.

I think the basic resource management is being looked at as part of the nested 
quotas functionality. For CERN, we’d look to delegate the quota and roles 
management but not support sub-tree user/groups.

Tim

From: Lance Bragstad 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday, 17 March 2017 at 00:23
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [keystone][all] Reseller - do we need it?


On Thu, Mar 16, 2017 at 5:54 PM, Fox, Kevin M wrote:
At our site, we have some larger projects that would be really nice if we could 
just give a main project all the resources they need, and let them suballocate 
it as their own internal subprojects needs change. Right now, we have to deal 
with all the subprojects directly. The reseller concept may fit this use case?

Sounds like this might also be solved by better RBAC that allows real project 
administrators to control their own subtrees. Is there a use case to limit 
visibility either up or down the tree? If not, would it be a nice-to-have?


Thanks,
Kevin

From: Lance Bragstad [lbrags...@gmail.com]
Sent: Thursday, March 16, 2017 2:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [keystone][all] Reseller - do we need it?
Hey folks,

The reseller use case [0] has been popping up frequently in various discussions 
[1], including unified limits.

For those who are unfamiliar with the reseller concept, it came out of early 
discussions regarding hierarchical multi-tenancy (HMT). It essentially allows a 
certain level of opaqueness within project trees. This opaqueness would make it 
easier for providers to "resell" infrastructure, without having 
customers/providers see all the way up and down the project tree, hence it was 
termed reseller. Keystone originally had some ideas of how to implement this 
after the HMT implementation laid the ground work, but it was never finished.

With it popping back up in conversations, I'm looking for folks who are willing 
to represent the idea. Participating in this thread doesn't mean you're on the 
hook for implementing it or anything like that.

Are you interested in reseller and willing to provide use-cases?



[0] 
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/mitaka/reseller.html#problem-description

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][appcat] The future of the App Catalog

2017-03-12 Thread Tim Bell

> On 11 Mar 2017, at 08:19, Clint Byrum  wrote:
> 
> Excerpts from Christopher Aedo's message of 2017-03-10 19:30:18 -0800:
>> On Fri, Mar 10, 2017 at 6:20 PM, Clint Byrum  wrote:
>>> Excerpts from Fox, Kevin M's message of 2017-03-10 23:45:06 +:
 So, this is the kind of thinking I'm talking about... OpenStack today is
 more then just IaaS in the tent. Trove (DBaaS), Sahara (Hadoop,Spark,etc
 aaS), Zaqar (Messaging aaS) and many more services. But they seem to be
 treated as second class citizens, as they are not "IaaS".
 
>>> 
>>> It's not that they're second class citizens. It's that their community
>>> is smaller by count of users, operators, and developers. This should not
>>> come as a surprise, because the lowest common denominator in any user
>>> base will always receive more attention.
>>> 
> Why should it strive to be anything except an excellent building block
 for other technologies?
 
 You misinterpret my statement. I'm in full agreement with you. The
 above services should be excellent building blocks too, but are suffering
 from lack of support from the IaaS layer. They deserve the ability to
 be excellent too, but need support/vision from the greater community
 that hasn't been forthcoming.
 
>>> 
>>> You say it like there's some over arching plan to suppress parts of the
>>> community and there's a pack of disgruntled developers who just can't
>>> seem to get OpenStack to work for Trove/Sahara/AppCatalog/etc.
>>> 
>>> We all have different reasons for contributing in the way we do.  Clearly,
>>> not as many people contribute to the Trove story as do the pure VM-on-nova
>>> story.
>>> 
 I agree with you, we should embrace the container folks and not treat
 them as separate. I think thats critical if we want to allow things
 like Sahara or Trove to really fulfil their potential. This is the path
 towards being an OpenSource AWS competitor, not just for being able to
 request vm's in a cloudy way.
 
 I think that looks something like:
 OpenStack Advanced Service (trove, sahara, etc) -> Kubernetes ->
 Nova VM or Ironic Bare Metal.
 
>>> 
>>> That's a great idea. However, AFAICT, Nova is _not_ standing in Trove,
>>> Sahara, or anyone else's way from doing this. Seriously, try it. I'm sure
>>> it will work.  And in so doing, you will undoubtedly run into friction
>>> from the APIs. But unless you can describe that _now_, you have to go try
>>> it and tell us what broke first. And then you can likely submit feature
>>> work to nova/neutron/cinder to make it better. I don't see anything in
>>> the current trajectory of OpenStack that makes this hard. Why not just do
>>> it? The way you ask, it's like you have a team of developers just sitting
>>> around shaving yaks waiting for an important OpenStack development task.
>>> 
>>> The real question is why aren't Murano, Trove and Sahara in most current
>>> deployments? My guess is that it's because most of our current users
>>> don't feel they need it. Until they do, Trove and Sahara will not be
>>> priorities. If you want them to be priorities _pay somebody to make them
>>> a priority_.
>> 
>> This particular point really caught my attention.  You imply that
>> these additional services are not widely deployed because _users_
>> don't want them.  The fact is most users are completely unaware of
>> them because these services require the operator of the cloud to
>> support them.  In fact they often require the operator of the cloud to
>> support them from the initial deployment, as these services (and
>> *most* OpenStack services) are frighteningly difficult to add to an
>> already deployed cloud without downtime and high risk of associated
>> issues.
>> 
>> I think it's unfair to claim these services are unpopular because
>> users aren't asking for them when it's likely users aren't even aware
>> of them (do OVH, Vexxhost, Dreamhost, Raskspace or others provide a
>> user-facing list of potential OpenStack services with a voting option?
>> Not that I've ever seen!)
>> 
>> I bring this up to point out how much more popular ALL of these
>> services would be if the _users_ were able to enable them without
>> requiring operator intervention and support.
>> 
>> Based on our current architecture, it's nearly impossible for a new
>> project to be deployed on a cloud without cloud-level admin
>> privileges.  Additionally almost none of the projects could even work
>> this way (with Rally being a notable exception).  I guess I'm kicking
>> this dead horse because for a long time I've argued we need to back
>> away from the tightly coupled nature of all the projects, but
>> (speaking of horses) it seems that horse is already out of the barn.
>> (I really wish I could work in one more proverb dealing with horses
>> but it's getting late on a Friday so I'll stop now.)
>> 
> 
> I see your point, and believe it is valid.
> 
> 

Re: [openstack-dev] [nova] Device tagging: rebuild config drive upon instance reboot to refresh metadata on it

2017-02-20 Thread Tim Bell
Is there cloud-init support for this mode or do we still need to mount it as a 
config drive?

Tim

On 20.02.17, 17:50, "Jeremy Stanley"  wrote:

On 2017-02-20 15:46:43 + (+), Daniel P. Berrange wrote:
> The data is exposed either as a block device or as a character device
> in Linux - which one depends on how the NVDIMM is configured. Once
> opening the right device you can simply mmap() the FD and read the
> data. So exposing it as a file under sysfs doesn't really buy you
> anything better.

Oh! Fair enough, if you can already access it as a character device
then I agree that solves the use cases I was considering.
-- 
Jeremy Stanley
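
(For context, a minimal sketch of the open-and-mmap pattern described above;
the device path and read length are invented for the example.)

    import mmap
    import os

    PAGE = mmap.PAGESIZE
    fd = os.open('/dev/pmem0', os.O_RDONLY)    # hypothetical device node
    try:
        buf = mmap.mmap(fd, PAGE, prot=mmap.PROT_READ)
        try:
            metadata_blob = buf.read(PAGE)     # raw bytes exposed by the device
        finally:
            buf.close()
    finally:
        os.close(fd)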

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [chef] Making the Kitchen Great Again: A Retrospective on OpenStack & Chef

2017-02-16 Thread Tim Bell

On 16 Feb 2017, at 19:42, Fox, Kevin M wrote:

+1. The assumption was market forces will cause the best OpenStack deployment 
tools to win. But the sad reality is, market forces are causing people to look 
for non OpenStack solutions instead as the pain is still too high.

While k8s has a few different deployment tools currently, they are focused on 
getting the small bit of underlying plumbing deployed. Then you use the common 
k8s itself to deploy the rest. Adding a dashboard, dns, ingress, sdn, other 
component is easy in that world.

IMO, OpenStack needs to do something similar. Standardize a small core and get 
that easily deployable, then make it easy to deploy/upgrade the rest of the big 
tent projects on top of that, not next to it as currently is being done.

Thanks,
Kevin

Unfortunately, the more operators and end users question the viability of a 
specific project, the less likely it is to be adopted.
It is a very, very difficult discussion with an end user to explain that 
function X is no longer available because the latest OpenStack upgrade had to 
be done for security/functional/stability reasons and the project providing 
that function is no longer included.
The availability of a function may also have been one of the positives for the 
OpenStack selection so finding a release or two later that it is no longer in 
the portfolio is difficult.
The deprecation policy really helps so we can give a good notice but this 
assumes an equivalent function is available. For example, the built in Nova EC2 
to EC2 project was an example where we had enough notice to test the new 
solution in parallel and then move with minimum disruption.  Moving an entire 
data centre from Chef to Puppet or running a parallel toolchain, for example, 
has a high cost.
Given the massive functionality increase in other clouds, It will be tough to 
limit the OpenStack offering to the small core. However, expanding with 
unsustainable projects is also not attractive.
Tim


From: Joshua Harlow [harlo...@fastmail.com]
Sent: Thursday, February 16, 2017 10:24 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [chef] Making the Kitchen Great Again: A 
Retrospective on OpenStack & Chef

Alex Schultz wrote:
On Thu, Feb 16, 2017 at 9:12 AM, Ed Leafe wrote:
On Feb 16, 2017, at 10:07 AM, Doug Hellmann wrote:

When we signed off on the Big Tent changes we said competition
between projects was desirable, and that deployers and contributors
would make choices based on the work being done in those competing
projects. Basically, the market would decide on the "optimal"
solution. It's a hard message to hear, but that seems to be what
is happening.
This.

We got much better at adding new things to OpenStack. We need to get better at 
letting go of old things.

-- Ed Leafe




I agree that the market will dictate what continues to survive, but if
you're not careful you may be speeding up the decline as the end user
(deployer/operator/cloud consumer) will switch completely to something
else because it becomes too difficult to continue to consume via what
used to be there and no longer is.  I thought the whole point was to
not have vendor lock-in.  Honestly I think the focus is too much on
the development and not enough on the consumption of the development
output.  What is the point of all these features if no one can
actually consume them.


+1 to that.

I've been in the boat of development and consumption of it for my
*whole* journey in openstack land and I can say the product as a whole
seems 'underbaked' with regards to the way people consume the
development output. It seems we have focused on how to do the dev. stuff
nicely and a nice process there, but sort of forgotten about all that
being quite useless if no one can consume them (without going through
much pain or paying a vendor).

This has IMHO been a factor in why certain companies (and the
people they support) are exiting openstack and just going elsewhere.

I personally don't believe fixing this is 'let the market forces' figure
it out for us (what a slow & horrible way to let this play out; I'd
almost rather go pull my fingernails out). I do believe it will require
making opinionated decisions which we have all never been very good at.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hierarchical quotas at the PTG?

2017-02-12 Thread Tim Bell

On 12 Feb 2017, at 12:13, Boris Bobrov wrote:

I would like to talk about it too.

On 02/10/2017 11:56 PM, Matt Riedemann wrote:
Operators want hierarchical quotas [1]. Nova doesn't have them yet and
we've been hesitant to invest scarce developer resources in them since
we've heard that the implementation for hierarchical quotas in Cinder
has some issues. But it's unclear to some (at least me) what those
issues are.

I don't know what the actual issue is, but from the keystone POV
the issue is that it basically replicates the project tree that is stored
in keystone. On top of the usual replication issues, there is another one --
it requires too many permissions. Basically, it requires service user
to be cloud admin.

I have not closely followed the cinder implementation since the CERN and BARC 
Mumbai focus has been more around Nova.

The various feedback I have had was regarding how to handle overcommit in the 
cinder proposal. A significant share of the operator community would like to 
allow the following (a rough sketch follows below):

- No overcommit for the ‘top level’ project (i.e. you can’t use more than you 
are allocated)
- Sub-project overcommit is OK (i.e. promising your sub-projects more is OK; the 
sum of the commitments to sub-projects exceeding the project quota is OK, but an 
error should be given if that usage actually happens)
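
(A rough illustration of these two rules; the project names and numbers are 
made up, and this is not code from the cinder or keystone implementations.)

    # Promised limits: the children may be promised more than the parent has.
    limits = {'top': 100, 'sub-a': 80, 'sub-b': 60}
    children = {'top': ['sub-a', 'sub-b'], 'sub-a': [], 'sub-b': []}
    usage = {'top': 10, 'sub-a': 50, 'sub-b': 30}

    def subtree_usage(project):
        # Actual consumption of a project plus all of its descendants.
        return usage[project] + sum(subtree_usage(c) for c in children[project])

    def check_request(project, requested):
        # Real usage may never exceed the top-level allocation, even though
        # the sum of the promised sub-project limits already does.
        if subtree_usage('top') + requested > limits['top']:
            raise ValueError('top-level allocation of %d exceeded' % limits['top'])
        if usage[project] + requested > limits[project]:
            raise ValueError('quota of project %s exceeded' % project)
        usage[project] += requested

    check_request('sub-b', 5)    # fine: subtree usage 95 <= 100
    # check_request('sub-a', 20) # would fail: subtree usage would reach 115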



Has anyone already planned on talking about hierarchical quotas at the
PTG, like the architecture work group?

I know there was a bunch of razzle dazzle before the Austin summit about
quotas, but I have no idea what any of that led to. Is there still a
group working on that and can provide some guidance here?

In my opinion, projects should not re-implement quotas every time.
I would like to have a common library for enforcing quotas (usages)
and a service for storing quotas (limits). We should also think of a
way to transfer the necessary project subtree from keystone to the quota
enforcer.

We could store quota limits in keystone and distribute it in token
body, for example. Here is a POC that we did some time ago --
https://review.openstack.org/#/c/403588/ and
https://review.openstack.org/#/c/391072/
But it still has the issue with permissions.


There has been an extended discussion since the Boson proposal at the Hong Kong 
summit on how to handle quotas, where a full quota service was proposed.

A number of ideas have emerged since then

- Quota limits stored in Keystone with the project data
- An oslo library to support checking that a resource request would be OK

One Forum session at the summit is due to be on this topic.

Some of the academic use cases are described in 
https://openstack-in-production.blogspot.fr/2016/04/resource-management-at-cern.html
 but commercial reseller models are valid here where

- company A has valuable resources to re-sell (e.g. flood risk and associated 
models)
- company B signs an agreement with Company A (e.g. an insurance company wants 
to use flood risk data as factor in their cost models)

The natural way of delivering this is that ‘A’ gives a pricing model based on 
‘B’’s consumption of compute and storage resources.

Tim



[1]
http://lists.openstack.org/pipermail/openstack-operators/2017-January/012450.html



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][glance][glare][all] glance/glare/artifacts/images at the PTG

2017-02-12 Thread Tim Bell
Although there has not been much discussion on this point on the mailing list, 
I feel we do need to find the right level of granularity for ‘mainstream’ 
projects:

For CERN, we look for the following before offering a project to our end users:

- Distro packaging (in our case RPMs through RDO)
- Puppet modules
- Openstack client support (which brings Kerberos/X.509 authentication)
- Install, admin and user docs
- Project diversity for long term sustainability

We have many use cases of ‘resellers’ where one project provides a deliverable 
for others to consume, and some degree of community image sharing is arriving; 
these are the same problems faced by artefacts and application catalogues 
(such as Heat and Magnum).

For me, which project provides this for images and/or artefacts is a choice for 
the technical community, but consistent semantics would be greatly appreciated. 
Discussions with our end users such as “I need a Heat template for X, but this 
needs community image Y, and the visibility rules mean that one needs to be 
shared in advance while the other I need to subscribe to” are difficult and 
discourage uptake.

A cloud user should be able to click on community offered ‘R-as-a-Service’ in 
the application catalog GUI, and that’s all.

Tim

On 10.02.17, 18:39, "Brian Rosmaita"  wrote:

I want to give all interested parties a heads up that I have scheduled a
session in the Macon room from 9:30-10:30 a.m. on Thursday morning
(February 23).

Here's what we need to discuss.  This is from my perspective as Glance
PTL, so it's going to be Glance-centric.  This is a quick narrative
description; please go to the session etherpad [0] to turn this into a
specific set of discussion items.

Glance is the OpenStack image cataloging and delivery service.  A few
cycles ago (Juno?), someone noticed that maybe Glance could be
generalized so that instead of storing image metadata and image data,
Glance could store arbitrary digital "stuff" along with metadata
describing the "stuff".  Some people (like me) thought that this was an
obvious direction for Glance to take, but others (maybe wiser, cooler
heads) thought that Glance needed to focus on image cataloging and
delivery and make sure it did a good job at that.  Anyway, the Glance
mission statement was changed to include artifacts, but the Glance
community never embraced them 100%, and in Newton, Glare split off as
its own project (which made sense to me, there was too much unclarity in
Glance about how Glare fit in, and we were holding back development, and
besides we needed to focus on images), and the Glance mission statement
was re-amended specifically to exclude artifacts and focus on images and
metadata definitions.

OK, so the current situation is:
- Glance "does" image cataloging and delivery and metadefs, and that's
all it does.
- Glare is an artifacts service (cataloging and delivery) that can also
handle images.

You can see that there's quite a bit of overlap.  I gave you the history
earlier because we did try to work as a single project, but it did not
work out.

So, now we are in 2017.  The OpenStack development situation has been
fragile since the second half of 2016, with several big OpenStack
sponsors pulling way back on the amount of development resources being
contributed to the community.  This has left Glare in the position where
it cannot qualify as a Big Tent project, even though there is interest
in artifacts.

Mike Fedosin, the PTL for Glare, has asked me about Glare becoming part
of the Glance project again.  I will be completely honest, I am inclined
to say "no".  I have enough problems just getting Glance stuff done (for
example, image import missed Ocata).  But in addition to doing what's
right for Glance, I want to do what's right for OpenStack.  And I look
at the overlap and think ...

Well, what I think is that I don't want to go through the Juno-Newton
cycles of argument again.  And we have to do what is right for our users.

The point of this session is to discuss:
- What does the Glance community see as the future of Glance?
- What does the wider OpenStack community (TC) see as the future of Glance?
- Maybe, more importantly, what does the wider community see as the
obligations of Glance?
- Does Glare fit into this vision?
- What kind of community support is there for Glare?

My reading of Glance history is that while some people were on board
with artifacts as the future of Glance, there was not a sufficient
critical mass of the Glance community that endorsed this direction and
that's why things unravelled in Newton.  I don't want to see that happen
again.  Further, I don't think the Glance community got the word out to
 

Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-17 Thread Tim Bell

On 17 Jan 2017, at 11:28, Maish Saidel-Keesing <mais...@maishsk.com> wrote:


Please see inline.

On 17/01/17 9:36, Tim Bell wrote:

...
Are we really talking about Barbican or has the conversation drifted towards 
Big Tent concerns?

Perhaps we can flip this thread on it’s head and more positively discuss what 
can be done to improve Barbican, or ways that we can collaboratively address 
any issues. I’m almost wondering if some opinions about Barbican are even 
coming from its heavy users, or users who’ve placed much time into 
developing/improving Barbican? If not, let’s collectively change that.


When we started deploying Magnum, there was a pre-req for Barbican to store the 
container engine secrets. We were not so enthusiastic since there was no puppet 
configuration or RPM packaging.  However, with a few upstream contributions, 
these are now all resolved.

the operator documentation has improved, HA deployment is working and the 
unified openstack client support is now available in the latest versions.
Tim - where exactly is this documentation?

We followed the doc for installation at 
http://docs.openstack.org/project-install-guide/newton/, specifically for our 
environment (RDO/CentOS) 
http://docs.openstack.org/project-install-guide/key-manager/newton/

Tim


These extra parts may not be a direct deliverable of the code contributions 
itself but they make a major difference on deployability which Barbican now 
satisfies. Big tent projects should aim to cover these areas also if they wish 
to thrive in the community.

Tim


Thanks,
Kevin


Brandon B. Jozsa

--
Best Regards,
Maish Saidel-Keesing
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [barbican] [security] Why are projects trying to avoid Barbican, still?

2017-01-16 Thread Tim Bell

On 17 Jan 2017, at 01:19, Brandon B. Jozsa wrote:

Inline


On January 16, 2017 at 7:04:00 PM, Fox, Kevin M 
(kevin@pnnl.gov) wrote:

I'm not stating that the big tent should be abolished and we go back to the way 
things were. But I also know the status quo is not working either. How do we 
fix this? Anyone have any thoughts?


Are we really talking about Barbican or has the conversation drifted towards 
Big Tent concerns?

Perhaps we can flip this thread on it’s head and more positively discuss what 
can be done to improve Barbican, or ways that we can collaboratively address 
any issues. I’m almost wondering if some opinions about Barbican are even 
coming from its heavy users, or users who’ve placed much time into 
developing/improving Barbican? If not, let’s collectively change that.


When we started deploying Magnum, there was a pre-req for Barbican to store the 
container engine secrets. We were not so enthusiastic since there was no puppet 
configuration or RPM packaging.  However, with a few upstream contributions, 
these are now all resolved.

the operator documentation has improved, HA deployment is working and the 
unified openstack client support is now available in the latest versions.

These extra parts may not be a direct deliverable of the code contributions 
itself but they make a major difference on deployability which Barbican now 
satisfies. Big tent projects should aim to cover these areas also if they wish 
to thrive in the community.

Tim


Thanks,
Kevin


Brandon B. Jozsa

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] 答复: [heat] glance v2 support?

2017-01-10 Thread Tim Bell

On 10 Jan 2017, at 17:41, Zane Bitter wrote:

On 10/01/17 05:25, Flavio Percoco wrote:


I'd recommend Heat to not use locations as that will require deployers
to either enable them for everyone or have a dedicate glance-api node
for Heat.
If we don't use location, do we have other options for the user? What
should the user do to create a glance image using v2? Download the
image data and then pass it to the glance API? I really don't
think that's a good way.


That *IS* how users create images. There used to be copy-from too (which
may or
may not come back).

Heat's use case is different and I understand that but as I said in my
other
email, I do not think sticking to v1 is the right approach. I'd rather
move on
with a deprecation path or compatibility layer.

"Backwards-compatibility" is a wide-ranging topic, so let's break this down 
into 3 more specific questions:

1) What is an interface that we could support with the v2 API?

- If copy-from is not a thing then it sounds to me like the answer is "none"? 
We are not ever going to support uploading a multi-GB image file through Heat 
and from there to Glance.
- We could have an Image resource that creates a Glance image from a volume. 
It's debatable how useful this would be in an orchestration setting (i.e. in 
most cases this would have to be part of a larger workflow anyway), but there 
are some conceivable uses I guess. Given that this is completely disjoint from 
what the current resource type does, we'd make it easier on everyone if we just 
gave it a new name.

2) How can we avoid breaking existing stacks that use Image resources?

- If we're not replacing it with anything, then we can just mark the resource 
type as first Deprecated, and then Hidden and switch the back end to use the v2 
API for things like deleting. As long as nobody attempts to replace the image 
then the rest of the stack should continue to work fine.


Can we only deprecate the resources using the location function but maintain 
backwards compatibility if the location function is not used?

3) How do we handle existing templates in future?

- Again, if we're not replacing it with anything, the -> Deprecated -> Hidden 
process is sufficient. (In theory "Hidden" should mean you can't create new 
stacks containing that resource type any more, only continue using existing 
stacks that contained it. In practice, we didn't actually implement that and it 
just gets hidden from the documentation. Obviously trying to create a new one 
using the location field once only the v2 API is available will result in an 
error.)


My worry is that portable heat templates like the Community App Catalog ( 
http://apps.openstack.org/#tab=heat-templates) would become much more complex 
if we have to produce different resources for Glance V1 and V2 configurations. 
If, however, we are able to say that the following definitions of image 
resources are compatible across the two configurations, this can be more 
supportive of a catalog approach and improve template portability.

Tim


If we have a different answer to (1) then that could change the answers to (2) 
and (3).

cheers,
Zane.
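
(For reference, the v2 create-then-upload flow being contrasted with the v1
location/copy-from behaviour looks roughly like the sketch below; the endpoint,
token and file name are placeholders, and this is not a Heat code path.)

    import glanceclient

    glance = glanceclient.Client('2', endpoint='http://glance.example.com:9292',
                                 token='TOKEN')
    # Step 1: create the image record with its metadata.
    image = glance.images.create(name='cirros', disk_format='qcow2',
                                 container_format='bare', visibility='private')
    # Step 2: upload the image data separately.
    with open('cirros.img', 'rb') as data:
        glance.images.upload(image.id, data)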

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] [ceilometer] [panko] ceilometer API deprecation

2017-01-10 Thread Tim Bell

On 10 Jan 2017, at 15:21, gordon chung wrote:



On 10/01/17 07:27 AM, Julien Danjou wrote:
On Mon, Jan 09 2017, William M Edmonds wrote:

I started the conversation on IRC [5], but wanted to send this to the
mailing list and see if others have thoughts/concerns here and figure out
what we should do about this going forward.

Nothing? The code has not been removed, it has been moved to a new
project. Ocata will be the second release for Panko, so if user did not
switch already during Newton, they'll have to do it for Ocata. That's a
lot of overlap. Two cycles to switch to a "new" service should be enough.

well it's not actually two. it'd just be the one cycle in Newton since
it's gone in Ocata. :P

that said, for me, the move to remove it is to avoid any needless
additional work of maintaining two active codebases. we're a small team
so it's work we don't have time for.

as i mentioned in chat, i'm ok with reverting patch and leaving it for
Ocata but if the transition is clean (similiar to how aodh was split)
i'd rather not waste resources on maintaining residual 'dead' code.

cheers,
--
gord


What’s also good is that Panko has equivalent functionality for Puppet and RPMs:

- https://github.com/openstack/puppet-panko
- https://www.rdoproject.org/documentation/package-list/

In the past, these equivalent functions have sometimes lagged the code release 
so it’s great to see the additional functions there.

Tim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [heat] glance v2 support?

2017-01-05 Thread Tim Bell

On 6 Jan 2017, at 05:04, Rabi Mishra wrote:

On Fri, Jan 6, 2017 at 4:38 AM, Emilien Macchi wrote:
Greetings Heat folks!

My question is simple:
When do you plan to support Glance v2?
https://review.openstack.org/#/c/240450/

The spec looks staled while Glance v1 was deprecated in Newton (and v2
was started in Kilo!).


Hi Emilien,

I think we've not been able to move to v2 due to v1/v2 incompatibility[1] with 
respect to the location[2] property. Moving to v2 would break all existing 
templates using that property.

I've seen several discussions around that without any conclusion.  I think we 
can support a separate v2 image resource and deprecate the current one, unless 
there is a better path available.


[1] https://wiki.openstack.org/wiki/Glance-v2-v1-client-compatability
[2] 
https://github.com/openstack/heat/blob/master/heat/engine/resources/openstack/glance/image.py#L107-L112


Would this be backwards compatible (i.e. the old image resource would still 
work without taking advantage of the new functions) or would heat users have to 
change their templates?

It would be good if there is a way to minimise the user impact.

Tim


As an user, I need Glance v2 support so I can remove Glance Registry
from my deployment. and run pure v2 everywhere in my cloud.

Thanks for your help,
--
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Regards,
Rabi Mishra

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ptls][tc][goals] community goals for Pike

2016-12-15 Thread Tim Bell

> On 15 Dec 2016, at 16:19, Doug Hellmann  wrote:
> 
> Excerpts from Jean-Philippe Evrard's message of 2016-12-15 09:11:12 +:
>> Hello,
>> 
>> Maybe this will sound dumb …
>> 
>> I received this email on openstack-dev mailing list. I don’t know if it was 
>> sent to any other place, because it’s basically agreeing on development to 
>> be done, which makes sense to me.
>> So openstack-dev people (called further “devs”) will push their company 
>> agenda on these goals based on what they know in their company. I see the 
>> work done together there, and I find it great, but…
>> 
>> Wouldn’t that be better if we open this discussion to the general population 
>> (openstack users, operators, and devs) instead of just devs?
>> I submit this question because what I see on 
>> https://etherpad.openstack.org/p/community-goals is not only tech debt items 
>> that we have to fix, but also ideas of improvement on the long run for our 
>> users.
> 
> Excellent point.
> 
> Collecting enough information to be confident that we are choosing
> goals that are important to the whole project while still being
> actionable is going to be a challenge. I expect us to get better
> at it over time, but this is only the second time we've tried.
> 
> Emilien has also started talking with the Product Working group
> [1], which is chartered by the board to collect this sort of feedback.
> 

I think using the product WG for this selection is a very welcome approach and 
the sort of initiative that we were aiming for with the consolidation of needs 
to evangelise. It ensures the operator/user proposals have had a reasonable 
scrutiny before being considered as the community goals. With many of product 
working group members being employed by companies contributing significantly to 
OpenStack, it is also more likely to resonate with the development resource 
allocations and strategy.

High quality user stories greatly help the drafting of appropriate blueprints 
since the multiple perspectives are consolidated into a small number of 
problems to solve.


> …
> It's up to us to ensure that operators and users understand the
> importance of those technical debt items so we can all prioritize them
> together.

+1 - the improvements will help reduce operations effort in the long term. 

Tim

> 
>> ...
> 
> Doug
> 
>> 
>> Best regards,
>> Jean-Philippe Evrard
>> 
>> 
>> On 12/12/2016, 12:19, "Emilien Macchi"  wrote:
>> 
>>On Tue, Nov 29, 2016 at 7:39 PM, Emilien Macchi  
>> wrote:
>>> A few months ago, our community started to find and work on
>>> OpenStack-wide goals to "achieve visible common changes, push for
>>> basic levels of consistency and user experience, and efficiently
>>> improve certain areas where technical debt payments have become too
>>> high – across all OpenStack projects".
>>> 
>>> http://governance.openstack.org/goals/index.html
>>> 
>>> We started to define a first Goal in Ocata (Remove Copies of Incubated
>>> Oslo Code) and we would like to move forward in Pike.
>>> I see 3 actions we could take now:
>>> 
>>> 1) Collect feedback of our first iteration of Community Goals in
>>> OpenStack during Ocata. What went well? What was more challenging?
>>> 
>>> Some examples:
>>> - should we move the goal documents into a separate repo to allow a
>>> shorter review time, where we could just have 2 TC members approve
>>> them instead of waiting a week?
>>> -  we expected all teams to respond to all goals, even if they have no
>>> work to do. Should we continue that way?
>>> - should we improve the guidance to achieve Goals?
>>> 
>>> I created an etherpad if folks want to give feedback:
>>> https://etherpad.openstack.org/p/community-goals-ocata-feedback
>>> 
>>> 2) Goals backlog - https://etherpad.openstack.org/p/community-goals
>>> - new Goals are highly welcome.
>>> - each Goal would be achievable in one cycle, if not I think we need
>>> to break it down into separated Goals (with connections).
>>> - some Goals already have a team (ex: Python 3) but some haven't.
>>> Maybe could we dress a list of people able to step-up and volunteer to
>>> help on these ones.
>>> - some Goals might require some documentation for how to achieve it.
>>> 
>>> I think for now 2) can be discussed on the etherpad, though feel free
>>> to propose another channel.
>>> 
>>> 3) Choose Goals for Pike.
>>> Some of us already did, but we might want to start looking at what
>>> Goals we would like to achieve during Pike cycle.
>>> I was thinking at giving a score to the Goals, that could be
>>> calculated by its priority (I know it's vague but we know what is
>>> really urgent for us versus what can wait 6 months); but also the
>>> number of people who are interested to contribute on a Goal (if this
>>> Goal doesn't have a team yet).
>>> For now, openstack/governance is the repository for Goals, please
>>> propose them here.
>>> 
>>> 
>>> Please give feedback, we're doing iterations 

Re: [openstack-dev] [Openstack-operators] [puppet][fuel][packstack][tripleo] puppet 3 end of life

2016-11-04 Thread Tim Bell

On 4 Nov 2016, at 06:31, Sam Morrison wrote:


On 4 Nov. 2016, at 1:33 pm, Emilien Macchi wrote:

On Thu, Nov 3, 2016 at 9:10 PM, Sam Morrison wrote:
Wow I didn’t realise puppet3 was being deprecated, is anyone actually using 
puppet4?

I would hope that the openstack puppet modules would support puppet3 for a 
while still, at lest until the next ubuntu LTS is out else we would get to the 
stage where the openstack  release supports Xenial but the corresponding puppet 
module would not? (Xenial has puppet3)

I'm afraid we made a lot of communications around it but you might
have missed it, no problem.
I have 3 questions for you:
- for what reasons would you not upgrade puppet?

Because I’m a time poor operator with more important stuff to upgrade :-)
Upgrading puppet *could* be a big task and something we haven’t had time to 
look into. Don’t follow along with puppetlabs so didn’t realise puppet3 was 
being deprecated. Now that this has come to my attention we’ll look into it for 
sure.

- would it be possible for you to use puppetlabs packaging if you need
puppet4 on Xenial? (that's what upstream CI is using, and it works
quite well).

OK thats promising, good to know that the CI is using puppet4. It’s all my 
other dodgy puppet code I’m worried about.

- what version of the modules do you deploy? (and therefore what
version of OpenStack)

We’re using a mixture of newton/mitaka/liberty/kilo, sometimes the puppet 
module version is newer than the openstack version too depending on where we’re 
at in the upgrade process of the particular openstack project.

I understand progress must go on, I am interested though in how many operators 
use puppet4. We may be in the minority and then I’ll be quiet :-)

Maybe it should be deprecated in one release and then dropped in the next?


We’re running Puppet 3 at the moment with around 25,000 hosts. There is ongoing 
work testing Puppet 4 but it takes some time to make sure that the results are 
the same. The performance is looking promising.

I think we’ll be done with the migration by the time we get to Ocata (currently 
between Liberty and Mitaka)

Tim


Cheers,
Sam






My guess is that this would also be the case for RedHat and other distros too.

Fedora is shipping Puppet 4 and we're going to do the same for Red Hat
and CentOS7.

Thoughts?



On 4 Nov. 2016, at 2:58 am, Alex Schultz 
> wrote:

Hey everyone,

Puppet 3 is reaching its end of life at the end of this year[0].
Because of this we are planning on dropping official puppet 3 support
as part of the Ocata cycle.  While we currently are not planning on
doing any large scale conversion of code over to puppet 4 only syntax,
we may allow some minor things in that could break backwards
compatibility.  Based on feedback we've received, it seems that most
people who may still be using puppet 3 are using older (< Newton)
versions of the modules.  These modules will continue to be puppet 3.x
compatible but we're using Ocata as the version where Puppet 4 should
be the target version.

If anyone has any concerns or issues around this, please let us know.

Thanks,
-Alex

[0] https://puppet.com/misc/puppet-enterprise-lifecycle

___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators



--
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] Deprecating the Ceilometer API

2016-10-04 Thread Tim Bell

> On 4 Oct 2016, at 17:36, Chris Dent  wrote:
> 
> On Tue, 4 Oct 2016, Julien Danjou wrote:
> 
>> Considering the split of Ceilometer in subprojects (Aodh and Panko)
>> during those last cycles, and the increasing usage of Gnocchi, I am
>> starting to wonder if it makes sense to maintain the legacy Ceilometer
>> API.
> 
> No surprise, as I've been saying this for a long time, but yeah, I think
> the API and storage should be deprecated. I think at some of the mid-
> cycles and summits we've had discussions about the parts of ceilometer
> that should be preserved, including any pollsters for which a
> notification does not or cannot exist.
> 
> The data gathering (and to some extent transforming) parts are the only
> parts of ceilometer that are particularly unique so it would be good
> for those to be preserved.
> 
> 

What would be the impact for Heat users who are using the Ceilometer scaling in 
their templates?
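
For example, autoscaling templates along these lines would be affected (a from-memory sketch; scale_up_policy is assumed to be an OS::Heat::ScalingPolicy defined elsewhere in the template):

      scale_up_alarm:
        type: OS::Ceilometer::Alarm
        properties:
          meter_name: cpu_util
          statistic: avg
          period: 60
          evaluation_periods: 1
          threshold: 80
          comparison_operator: gt
          alarm_actions:
            - {get_attr: [scale_up_policy, alarm_url]}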

Tim

> -- 
> Chris Dent   ┬─┬ノ( º _ ºノ)https://anticdent.org/
> freenode: cdent tw: 
> @anticdent__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] ops meetup feedback

2016-09-20 Thread Tim Bell

On 20 Sep 2016, at 16:38, Sean Dague wrote:
...
There were also general questions about what scale cells should be
considered at.

ACTION: we should make sure workarounds are advertised better
ACTION: we should have some document about "when cells"?

This is a difficult question to answer because "it depends." It's akin
to asking "how many nova-api/nova-conductor processes should I run?"
Well, what hardware is being used, how much traffic do you get, is it
bursty or sustained, are instances created and left alone or are they
torn down regularly, do you prune your database, what version of rabbit
are you using, etc...

I would expect the best answer(s) to this question are going to come
from the operators themselves. What I've seen with cellsv1 is that
someone will decide for themselves that they should put no more than X
computes in a cell and that information filters out to other operators.
That provides a starting point for a new deployment to tune from.

I don't think we need "don't go larger than N nodes" kind of advice. But
we should probably know what kinds of things we expect to be hot spots.
Like mysql load, possibly indicated by system load or high level of db
conflicts. Or rabbit mq load. Or something along those lines.

Basically the things to look out for that indicate you are approaching
a scale point where cells is going to help. That also helps in defining
what kind of scaling issues cells won't help on, which need to be
addressed in other ways (such as optimizations).

-Sean


We had an ‘interesting' experience splitting a cell which I would not recommend 
for others.

We started off letting our cells grow to about 1000 hypervisors but following 
discussions in the
large deployment team, ended up aiming for 200 or so per cell. This also 
allowed us to make the
hardware homogeneous in a cell.

We then split the original 1000 hypervisor cell into smaller ones which was 
hard work to plan.

Thus, I think people who think they may need cells are better off adding new cells 
than letting their first one
grow until they are forced to do cells at a later stage and then do a split.

Tim

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [puppet] Puppet OpenStack PTL non-candidacy

2016-09-09 Thread Tim Bell


Emilien,

Thanks for all your hard work in this area. 

You may not know how many happy consumers of the OpenStack Puppet project there 
are because we’re quiet until you decide to step down.

Tim

On 09/09/16 18:05, "Emilien Macchi"  wrote:

Hi,

I wrote a little blog post about the last cycle in PuppetOpenStack:
http://my1.fr/blog/puppet-openstack-achievements-during-newton-cycle/

I can't describe how much I liked to be PTL during the last 18 months
and I wouldn't imagine we would be where we are today when I started
to contribute on this project.
Working on it is something I really enjoy because we have interactions
with all OpenStack community and I can't live without it.

However, I think it's time to pass the PTL torch for Ocata cycle.
Don't worry, I'll still be around and bother you when CI is broken ;-)

Again, a big thank you for those who work with me,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OSC] Tenant Resource Cleanup

2016-09-07 Thread Tim Bell

On 07 Sep 2016, at 15:05, John Davidge wrote:

Hello,

During the Mitaka cycle we merged a new feature into the python-neutronclient 
called ’neutron purge’. This enables a simple CLI command that deletes all of 
the neutron resources owned by a given tenant. It’s documented in the 
networking guide[1].

We did this in response to feedback from operators that they needed a better 
way to remove orphaned resources after a tenant had been deleted. So far this 
feature has been well received, and we already have a couple of enhancement 
requests. Given that we’re moving to OSC I’m hesitant to continue iterating on 
this in the neutron client, and so I’m reaching out to propose that we look 
into making this a part of OSC.

Earlier this week I was about to file a BP, when I noticed one covering this 
subject was already filed last month[2]. I’ve spoken to Roman, who says that 
they’ve been thinking about implementing this in nova, and have come to the 
same conclusion that it would fit better in OSC.


This would be really great. From experience of using the existing purge 
commands (such as for deleted volumes), would it be possible to add a dry run 
option which would list the deletions it would do but not actually do them? This 
would allow the operator to check what is due to be cleaned up.

One other area where there have sometimes been problems is when lots of items 
need to be deleted. Some purge commands add a max resources or similar so that 
you can do it in smaller steps and avoid a timeout.
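
A rough sketch of the behaviour being asked for, where list_project_resources and delete_resource are hypothetical placeholders for whatever the OSC implementation ends up providing:

    # Hypothetical sketch: purge with a dry-run mode and a batch limit.
    def purge(project_id, dry_run=True, max_delete=100):
        resources = list_project_resources(project_id)  # hypothetical helper
        for resource in resources[:max_delete]:
            if dry_run:
                print("would delete", resource)   # operator reviews this first
            else:
                delete_resource(resource)         # hypothetical helper
        return len(resources)  # total found, so the operator can see what remains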

Tim


I would propose that we work together to establish how this command will behave 
in OSC, and build a framework that implements the cleanup of a small set of 
core resources. This should be achievable during the Ocata cycle. After that, 
we can reach out to the wider community to encourage a cross-project effort to 
incrementally support more projects/resources over time.

If you already have an etherpad for planning summit sessions then please let me 
know, I’d love to get involved.

Thanks,

John

[1] http://docs.openstack.org/mitaka/networking-guide/ops-resource-purge.html
[2] 
https://blueprints.launchpad.net/python-openstackclient/+spec/tenant-data-scrub


Rackspace Limited is a company registered in England & Wales (company 
registered number 03897010) whose registered office is at 5 Millington Road, 
Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be 
viewed at 
www.rackspace.co.uk/legal/privacy-policy
 - This e-mail message may contain confidential or privileged information 
intended for the recipient. Any dissemination, distribution or copying of the 
enclosed material is prohibited. If you receive this transmission in error, 
please notify us immediately by e-mail at 
ab...@rackspace.com and delete the original 
message. Your cooperation is appreciated.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][OpenStackClient] two openstack commands for the same operation?

2016-08-29 Thread Tim Bell

> On 29 Aug 2016, at 16:41, Loo, Ruby  wrote:
> 
> Hi,
> 
> In ironic, we have these ironic CLI commands:
> - ironic node-vendor-passthru (calls the specified passthru method)
> - ironic node-get-vendor-passthru-methods (lists the available passthru 
> methods)
> 
> For their corresponding openstackclient plugin commands, we (I, I guess) have 
> proposed [1]:
> - openstack baremetal node passthrough call
> - openstack baremetal node passthrough list
> 
> I did this because 'passthrough' is more English than 'passthru' and I 
> thought that was the 'way to go' in osc. But some folks wanted it to be 
> 'passthru' because in ironic, we've been calling them 'passthru' since day 2. 
> To make everyone happy, I also proposed (as aliases):
> 
> - openstack baremetal node passthru call
> - openstack baremetal node passthru list
> 

flavor has set the precedent of using americanised spellings (sorry, 
americanized). The native English are used to IT terms with american spellings.

So, I’d suggest only doing passthru and dropping the English alias would be OK.

Tim

> Unfortunately, I wasn't able to make everyone happy because someone else 
> thinks that we shouldn't be providing two different openstack commands that 
> provide the same functionality. (They're fine with either one, just not both.)
> 
> What do the rest of the folks think? Some guidance from the OpenStackClient 
> folks would be greatly appreciated.
> 
> --ruby
> 
> 
> [1] 
> http://specs.openstack.org/openstack/ironic-specs/specs/approved/ironicclient-osc-plugin.html
> 
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [Nova] Reconciling flavors and block device mappings

2016-08-26 Thread Tim Bell

On 26 Aug 2016, at 17:44, Andrew Laski wrote:




On Fri, Aug 26, 2016, at 11:01 AM, John Griffith wrote:


On Fri, Aug 26, 2016 at 7:37 AM, Andrew Laski wrote:


On Fri, Aug 26, 2016, at 03:44 AM, kostiantyn.volenbovs...@swisscom.com wrote:
> Hi,
> option 1 (=that's what patches suggest) sounds totally fine.
> Option 3 > Allow block device mappings, when present, to mostly determine
> instance  packing
> sounds like option 1+additional logic (=keyword 'mostly')
> I think I miss to understand the part of 'undermining the purpose of the
> flavor'
> Why new behavior might require one more parameter to limit number of
> instances of host?
> Isn't it that those VMs will be under control of other flavor
> constraints, such as CPU and RAM anyway and those will be the ones
> controlling 'instance packing'?

Yes it is possible that CPU and RAM could be controlling instance
packing. But my understanding is that since those are often
oversubscribed
I don't understand why the oversubscription ratio matters here?


My experience is with environments where the oversubscription was used to be a 
little loose with how many vCPUs were allocated or how much RAM was allocated 
but disk was strictly controlled.




while disk is not that it's actually the disk amounts
that control the packing on some environments.
Maybe an explanation of what you mean by "packing" here.  Customers that I've 
worked with over the years have used CPU and Mem as their levers and the main 
thing that they care about in terms of how many Instances go on a Node.  I'd 
like to learn more about why that's wrong and that disk space is the mechanism 
that deployers use for this.


By packing I just mean the various ways that different flavors fit on a host. A 
host may be designed to hold 1 xlarge, or 2 large, or 4 mediums, or 1 large and 
2 mediums, etc... The challenge I see here is that the constraint can be 
managed by using CPU or RAM or disk or some combination of the three. For 
deployers just using disk the above patches will change behavior for them.

It's not wrong to use CPU/RAM, but it's not what everyone is doing. One purpose 
of this email was to gauge if it would be acceptable to only use CPU/RAM for 
packing.




But that is a sub option
here, just document that disk amounts should not be used to determine
flavor packing on hosts and instead CPU and RAM must be used.

> Does option 3 covers In case someone relied on eg. flavor root disk for
> disk volume booted from volume - and now instance packing will change
> once patches are implemented?

That's the goal. In a simple case of having hosts with 16 CPUs, 128GB of
RAM and 2TB of disk and a flavor with VCPU=4, RAM=32GB, root_gb=500GB,
swap/ephemeral=0 the deployer is stating that they want only 4 instances
on that host.
How do you arrive at that logic?  What if they actually wanted a single 
VCPU=4,RAM=32GB,root_gb=500 but then they wanted the remaining resources split 
among Instances that were all 1 VCPU, 1 G ram and a 1 G root disk?

My example assumes the one stated flavor. But if they have a smaller flavor 
then more than 4 instances would fit.


If there is CPU and RAM oversubscription enabled then by
using volumes a user could end up with more than 4 instances on that
host. So a max_instances=4 setting could solve that. However I don't
like the idea of adding a new config, and I think it's too simplistic to
cover more complex use cases. But it's an option.
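
To make the packing arithmetic concrete, a rough sketch (the allocation ratios below are illustrative, not recommendations):

    # Rough sketch: how many instances of the example flavor fit on the example
    # host, depending on whether local disk is counted towards packing.
    host = {"vcpus": 16, "ram_gb": 128, "disk_gb": 2000}
    flavor = {"vcpus": 4, "ram_gb": 32, "root_gb": 500}

    def instances_that_fit(cpu_ratio=1.0, ram_ratio=1.0, count_disk=True):
        limits = [
            host["vcpus"] * cpu_ratio / flavor["vcpus"],
            host["ram_gb"] * ram_ratio / flavor["ram_gb"],
        ]
        if count_disk:
            limits.append(host["disk_gb"] / flavor["root_gb"])
        return int(min(limits))

    print(instances_that_fit())                  # 4 -- CPU, RAM and disk all agree
    print(instances_that_fit(cpu_ratio=16.0,     # boot from volume: disk no longer
                             ram_ratio=1.5,      # consumed locally, oversubscribed
                             count_disk=False))  # CPU/RAM now allow 6 instances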

I would venture to guess that most Operators would be sad to read that.  So 
rather than give them an explicit lever that does exactly what they want 
clearly and explicitly we should make it as complex as possible and have it be 
the result of a 4 or 5 variable equation?  Not to mention it's completely 
dynamic (because it seems like
lots of clouds have more than one flavor).

Is that lever exactly what they want? That's part of what I'd like to find out 
here. But currently it's possible to setup a situation where 1 large flavor or 
4 small flavors fit on a host. So would the max_instances=4 setting be desired? 
Keeping in mind that if the above patches merged 4 large flavors could be put 
on that host if they only use remote volumes and aren't using proper CPU/RAM 
limits.

I probably was not clear enough in my original description or made some bad 
assumptions. The concern I have is that if someone is currently relying on disk 
sizes for their instance limits then the above patches change behavior for them 
and affect capacity limits and planning. Is this okay and if not what do we do?


From a single operator perspective, we’d prefer an option which would allow 
boot from volume with a larger size than the flavour. The quota for volumes 
would avoid abuse.

The use cases we encounter are a standard set of flavors with defined 
core/memory/disk ratios which correspond to the 

Re: [openstack-dev] [Magnum] Next auto-scaling feature design?

2016-08-18 Thread Tim Bell

> On 18 Aug 2016, at 09:56, hie...@vn.fujitsu.com wrote:
> 
> Hi Magnum folks,
> 
> I have some interests in our auto scaling features and currently testing with 
> some container monitoring solutions such as heapster, telegraf and 
> prometheus. I have seen the PoC session corporate with Senlin in Austin and 
> have some questions regarding of this design:
> - We have decided to move all container management from Magnum to Zun, so is 
> there only one level of scaling (node) instead of both node and container?
> - The PoC design shows that Magnum (Magnum Scaler) needs to depend on 
> Heat/Ceilometer for gathering metrics and doing the scaling work based on auto 
> scaling policies, but is Heat/Ceilometer the best choice for Magnum auto 
> scaling? 
> 
> Currently, Magnum only sends CPU and memory metrics to Ceilometer, and Heat 
> can use these to decide the right scaling action. IMO, this approach has some 
> problems; please take a look and give feedback:
> - The AutoScaling Policy and AutoScaling Resource of Heat cannot handle 
> complex scaling policies. For example: 
> If CPU > 80% then scale out
> If Mem < 40% then scale in
> -> What if CPU = 90% and Mem = 30%, the conflict policy will appear.
> There are some WIP patch-set of Heat conditional logic in [1]. But IMO, the 
> conditional logic of Heat also cannot resolve the conflict of scaling 
> policies. For example:
> If CPU > 80% and Mem >70% then scale out
> If CPU < 30% or Mem < 50% then scale in
> -> What if CPU = 90% and Mem = 30%.

What would you like Heat to do in this scenario ? Is it that you would like to 
have a user defined logic option as well as basic conditionals ?

I would expect the same problem to occur in pure Heat scenarios also so a user 
defined scaling policy would probably be of interest there too and avoid code 
duplication.
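
To make the conflict concrete, a minimal sketch using the thresholds from the first example above (how the tie is broken is deliberately left to whatever validator gets written):

    # Minimal sketch of why the two example policies need a conflict rule.
    def decide(cpu, mem):
        scale_out = cpu > 80      # "If CPU > 80% then scale out"
        scale_in = mem < 40       # "If Mem < 40% then scale in"
        if scale_out and scale_in:
            return "conflict: both rules match, the validator has to pick one"
        if scale_out:
            return "scale out"
        if scale_in:
            return "scale in"
        return "no action"

    print(decide(cpu=90, mem=30))   # -> conflict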

Tim

> Thus, I think that we need to implement magnum scaler for validating the 
> policy conflicts.
> - Ceilometer may have troubles if we deploy thousands of COE. 
> 
> I think we need a new design for auto scaling feature, not for Magnum only 
> but also Zun (because the scaling level of container maybe forked to Zun 
> too). Here are some ideas:
> 1. Add a new field enable_monitor to the cluster template (ex baymodel) and show 
> the monitoring URL when cluster (bay) creation completes. For example, we can 
> use Prometheus as monitoring container for each cluster. (Heapster is the 
> best choice for k8s, but not good enough for swarm or mesos).
> 2. Create Magnum scaler manager (maybe a new service):
> - Monitor clusters with monitoring enabled and send metrics to Ceilometer if needed.
> - Manage user-defined scaling policy: not only cpu and memory but also other 
> metrics like network bw, CCU.
> - Validate user-defined scaling policy and trigger heat for scaling actions. 
> (can trigger nova-scheduler for more scaling options)
> - Needs a highly scalable architecture; as a first step we can implement a simple 
> validator method, but in the future there are other approaches such as 
> using fuzzy logic or AI to make an appropriate decision.
> 
> Some use case for operators:
> - I want to create a k8s cluster, and if CCU or network bandwidth is high 
> please scale-out X nodes in other regions.
> - I want to create swarm cluster, and if CPU or memory is too high, please 
> scale-out X nodes to make sure total CPU and memory is about 50%.
> 
> What do you think about these above ideas/problems?
> 
> [1]. https://blueprints.launchpad.net/heat/+spec/support-conditions-function
> 
> Thanks,
> Hieu LE.
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][heat] 2 million requests / sec, 100s of nodes

2016-08-08 Thread Tim Bell

On 08 Aug 2016, at 11:51, Ricardo Rocha wrote:

Hi.

On Mon, Aug 8, 2016 at 1:52 AM, Clint Byrum wrote:
Excerpts from Steve Baker's message of 2016-08-08 10:11:29 +1200:
On 05/08/16 21:48, Ricardo Rocha wrote:
Hi.

Quick update is 1000 nodes and 7 million reqs/sec :) - and the number
of requests should be higher but we had some internal issues. We have
a submission for barcelona to provide a lot more details.

But a couple questions came during the exercise:

1. Do we really need a volume in the VMs? On large clusters this is a
burden, and local storage only should be enough?

2. We observe a significant delay (~10min, which is half the total
time to deploy the cluster) on heat when it seems to be crunching the
kube_minions nested stacks. Once it's done, it still adds new stacks
gradually, so it doesn't look like it precomputed all the info in advance

Anyone tried to scale Heat to stacks this size? We end up with a stack
with:
* 1000 nested stacks (depth 2)
* 22000 resources
* 47008 events

And we already changed most of the timeout/retry values for RPC to get
this working.

This delay is already visible in clusters of 512 nodes, but 40% of the
time in 1000 nodes seems like something we could improve. Any hints on
Heat configuration optimizations for large stacks very welcome.

Yes, we recommend you set the following in /etc/heat/heat.conf [DEFAULT]:
max_resources_per_stack = -1

Enforcing this for large stacks has a very high overhead, we make this
change in the TripleO undercloud too.
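
That is, something along these lines in /etc/heat/heat.conf:

    [DEFAULT]
    # Disable the per-stack resource limit (the default is 1000); enforcing it
    # on very large stacks is what causes the overhead mentioned above.
    max_resources_per_stack = -1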


Wouldn't this necessitate having a private Heat just for Magnum? Not
having a resource limit per stack would leave your Heat engines
vulnerable to being DoS'd by malicious users, since one can create many
many thousands of resources, and thus python objects, in just a couple
of cleverly crafted templates (which is why I added the setting).

This makes perfect sense in the undercloud of TripleO, which is a
private, single tenant OpenStack. But, for Magnum.. now you're talking
about the Heat that users have access to.

We have it already at -1 for these tests. As you say a malicious user
could DoS, right now this is manageable in our environment. But maybe
move it to a per tenant value, or some special policy? The stacks are
created under a separate domain for magnum (for trustees), we could
also use that for separation.


If there was a quota system within Heat for items like stacks and resources, 
this could be
controlled through that.

Looks like https://blueprints.launchpad.net/heat/+spec/add-quota-api-for-heat 
did not make it into upstream though.

Tim

A separate heat instance sounds like an overkill.

Cheers,
Ricardo


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-05 Thread Tim Bell

> On 05 Aug 2016, at 01:02, Jay Pipes  wrote:
> 
> On 08/04/2016 06:40 PM, Clint Byrum wrote:
>> Excerpts from Jay Pipes's message of 2016-08-04 18:14:46 -0400:
>>> On 08/04/2016 05:30 PM, Clint Byrum wrote:
 Excerpts from Fox, Kevin M's message of 2016-08-04 19:20:43 +:
> I disagree. I see glare as a superset of the needs of the image api and 
> one feature I need thats image related was specifically shot down as "the 
> artefact api will solve that".
> 
> You have all the same needs to version/catalog/store images. They are not 
> more special then a versioned/cataloged/stored heat templates, murano 
> apps, tuskar workflows, etc. I've heard multiple times, members of the 
> glance team saying  that once glare is fully mature, they could stub out 
> the v1/v2 glance apis on top of glare. What is the benefit to splitting 
> if the end goal is to recombine/make one project irrelevant?
> 
> This feels like to me, another case of an established, original tent 
> project not wanting to deal with something that needs to be dealt with, 
> and instead pushing it out to another project with the hope that it just 
> goes away. With all the traction non original tent projects have gotten 
> since the big tent was established, that might be an accurate conclusion, 
> but really bad for users/operators of OpenStack.
> 
> I really would like glance/glare to reconsider this stance. OpenStack 
> continuously budding off projects is not a good pattern.
> 
 
 So very this.
>>> 
>>> Honestly, operators need to move past the "oh, not another service to
>>> install/configure" thing.
>>> 
>>> With the whole "microservice the world" movement, that ship has long
>>> since sailed, and frankly, the cost of adding another microservice into
>>> the deployment at this point is tiny -- it should be nothing more than a
>>> few lines in a Puppet manifest, Chef module, Ansible playbook, or Salt
>>> state file.
>>> 
>>> If you're doing deployment right, adding new services to the
>>> microservice architecture that OpenStack projects are being pushed
>>> towards should not be an issue.
>>> 
>>> I find it odd that certain folks are pushing hard for the
>>> shared-nothing, microservice-it-all software architecture and yet
>>> support this mentality that adding another couple (dozen if need be)
>>> lines of configuration data to a deployment script is beyond the pale to
>>> ask of operators.
>>> 
>> 
>> Agreed, deployment isn't that big of a deal. I actually thought Kevin's
>> point was that the lack of focus was the problem. I think the point in
>> bringing up deployment is simply that it isn't free, not that it's the
>> reason to combine the two.
> 
> My above statement was more directed to Kevin and Tim, both of whom indicated 
> that adding another service to the deployment was a major problem.
> 

The difficulty I have with additional projects is that there are often major 
parts missing in order to deploy in production. Packaging, Configuration 
management manifests, Monitoring etc. are not part
of the standard deliverables but are left to other teams. Having had to fill in 
these gaps for 4 OpenStack projects so far already, they are not trivial to do 
and I feel the effort required
for this was not considered as part of the split decision.

 It's clear there's been a disconnect in expectations between the outside
 and inside of development.
 
 The hope from the outside was that we'd end up with a user friendly
 frontend API to artifacts, that included more capability for cataloging
 images.  It sounds like the two teams never actually shared that vision
 and remained two teams, instead of combining into one under a shared
 vision.
 
 Thanks for all your hard work, Glance and Glare teams. I don't think
 any of us can push a vision on you. But, as Kevin says above: consider
 addressing the lack of vision and cooperation head on, rather than
 turning your backs on each-other. The users will sing your praises if
 you can get it done.
>>> 
>>> It's been three years, two pre-big-tent TC graduation reviews (one for a
>>> split out murano app catalog, one for the combined project team being
>>> all things artifact), and over that three years, the original Glance
>>> project has at times crawled to a near total stop from a contribution
>>> perspective and not indicated much desire to incorporate the generic
>>> artifacts API or code. Time for this cooperation came and went with
>>> ample opportunities.
>>> 
>>> The Glare project is moving on.
>> 
>> The point is that this should be reconsidered, and that these internal
>> problems, now surfaced, seem surmountable if there's actually a reason
>> to get past them. Since it seems from the start, Glare and Glance never
>> actually intended to converge on a generic artifacts API, but rather
>> to simply tolerate one 

Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-04 Thread Tim Bell

> On 04 Aug 2016, at 19:34, Erno Kuvaja  wrote:
> 
> On Thu, Aug 4, 2016 at 5:20 PM, Clint Byrum  wrote:
>> Excerpts from Tim Bell's message of 2016-08-04 15:55:48 +:
>>> 
>>> On 04 Aug 2016, at 17:27, Mikhail Fedosin wrote:
 
 Hi all,
>> after 6 months of Glare v1 API development we have decided to continue
 our work in a separate project in the "openstack" namespace with its own
 core team (me, Kairat Kushaev, Darja Shkhray and the original creator -
 Alexander Tivelkov). We want to thank Glance community for their support
 during the incubation period, valuable advice and suggestions - this time
 was really productive for us. I believe that this step will allow the
 Glare project to concentrate on feature development and move forward
 faster. Having the independent service also removes inconsistencies
 in understanding what Glance project is: it seems that a single project
 cannot own two different APIs with partially overlapping functionality. So
 with the separation of Glare into a new project, Glance may continue its
 work on the OpenStack Images API, while Glare will become the reference
 implementation of the new OpenStack Artifacts API.
 
>>> 
>>> I would suggest looking at more than just the development process when
>>> reflecting on this choice.
>>> While it may allow more rapid development, doing this on your own will increase
>>> costs for end users and operators in areas like packaging, configuration,
>>> monitoring, quota … gaining critical mass in production for Glare will
>>> be much more difficult if you are not building on the Glance install base.
>> 
>> I have to agree with Tim here. I respect that it's difficult to build on
>> top of Glance's API, rather than just start fresh. But, for operators,
>> it's more services, more API's to audit, and more complexity. For users,
>> they'll now have two ways to upload software to their clouds, which is
>> likely to result in a large portion just ignoring Glare even when it
>> would be useful for them.
>> 
>> What I'd hoped when Glare and Glance combined, was that there would be
>> a single API that could be used for any software upload and listing. Is
>> there any kind of retrospective or documentation somewhere that explains
>> why that wasn't possible?
>> 
> 
> I was planning to leave this branch on its own, but I have to correct
> something here. This split is not introducing new API, it's moving the
> new Artifact API under it's own project, there was no shared API in
> first place. Glare was to be its own service already within Glance
> project. Also the Artifacts API turned out to be fundamentally
> incompatible with the Images APIs v1 & v2 due to the totally different
> requirements. And even the option was discussed in the community I
> personally think replicating the Images API and carrying the cost of it being
> in two services that are fundamentally different would have been a huge
> mistake we would have paid for over a long time. I'm not saying that it would
> have been impossible, but there is lots of burden in Images APIs that
> Glare really does not need to carry, we just can't get rid of it and
> likely no-one would have been happy to see Images API v3 around the
> time when we are working super hard to get the v1 users moving to v2.
> 
> Packaging glance-api, glance-registry and glare-api from glance repo
> would not change the effort too much compared from 2 repos either.
> Likely it just makes it easier when the logical split it clear from
> the beginning.
> 
> As for Tim's statement, I do not see how Glare in its own
> service with its own API could ride on the Glance install base apart
> from the quite false mental image these two thing being the same and
> based on the same code.
> 

To give a concrete use case, CERN have Glance deployed for images.  We are 
interested in the ecosystem
around Murano and are actively using Heat.  We deploy using RDO with RPM 
packages, Puppet-OpenStack
for configuration, a set of machines serving Glance in an HA set up across 
multiple data centres  and various open source monitoring tools.

The multitude of projects and the day-two maintenance scenarios with 11 
independent projects are a cost, and adding further to this cost for 
production deployments of OpenStack should not be ignored.

By Glare choosing to go their own way, does this mean that

- Can the existing RPM packaging for Glance be used to deploy Glare ? If there 
needs to be new packages defined, this is additional cost for the RDO team (and 
the equivalent .deb teams) or will the Glare team provide this ?
- Can we use our existing templates for Glance for configuration management ? 
If there need to be new ones defined, this is additional work for the Chef and 
Ansible teams or will the Glare team provide this ?
- Log consolidation and parsing using the various OsOps 

Re: [openstack-dev] [Glance][TC][Heat][App-Catalog][Murano][Tacker] Glare as a new Project

2016-08-04 Thread Tim Bell

On 04 Aug 2016, at 17:27, Mikhail Fedosin wrote:

Hi all,
after 6 months of Glare v1 API development we have decided to continue our work 
in a separate project in the "openstack" namespace with its own core team (me, 
Kairat Kushaev, Darja Shkhray and the original creator - Alexander Tivelkov). 
We want to thank Glance community for their support during the incubation 
period, valuable advice and suggestions - this time was really productive for 
us. I believe that this step will allow the Glare project to concentrate on 
feature development and move forward faster. Having the independent service 
also removes inconsistencies in understanding what Glance project is: it seems 
that a single project cannot own two different APIs with partially overlapping 
functionality. So with the separation of Glare into a new project, Glance may 
continue its work on the OpenStack Images API, while Glare will become the 
reference implementation of the new OpenStack Artifacts API.


I would suggest looking at more than just the development process when 
reflecting on this choice.
While it may allow more rapid development, doing this on your own will increase 
costs for end users and operators in areas like packaging, configuration, 
monitoring, quota … gaining critical mass in production for Glare will be much 
more difficult if you are not building on the Glance install base.
Tim
Nevertheless, Glare team would like to continue to collaborate with the Glance 
team in a new - cross-project - format. We still have lots in common, both in 
code and usage scenarios, so we are looking forward for fruitful work with the 
rest of the Glance team. Those of you guys who are interested in Glare and the 
future of Artifacts API are also welcome to join the Glare team: we have a lot 
of really exciting tasks and will always welcome new members.
Meanwhile, despite the fact that my focus will be on the new project, I will 
continue to be part of the Glance team and for sure I'm going to contribute in 
Glance, because I am interested in this project and want to help it be 
successful.

We'll have the formal patches pushed to project-config earlier next week, 
appropriate repositories, wiki and launchpad space will be created soon as 
well.  Our regular weekly IRC meeting remains intact: it is 17:30 UTC Mondays 
in #openstack-meeting-alt, it will just become a Glare project meeting instead 
of a Glare sub-team meeting. Please feel free to join!

Best regards,
Mikhail Fedosin

P.S. For those of you who may be curious on the project name. We'll still be 
called "Glare", but since we are on our own now this acronym becomes recursive: 
GLARE now stands for "GLare Artifact REpository" :)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] persistently single-vendor projects

2016-08-02 Thread Tim Bell

> On 02 Aug 2016, at 17:13, Hayes, Graham  wrote:
> 
> On 02/08/2016 15:42, Flavio Percoco wrote:
>> On 01/08/16 10:19 -0400, Sean Dague wrote:
>>> On 08/01/2016 09:58 AM, Davanum Srinivas wrote:
 Thierry, Ben, Doug,
 
 How can we distinguish between. "Project is doing the right thing, but
 others are not joining" vs "Project is actively trying to keep people
 out"?
>>> 
>>> I think at some level, it's not really that different. If we treat them
>>> as different, everyone will always believe they did all the right
>>> things, but got no results. 3 cycles should be plenty of time to drop
>>> single entity contributions below 90%. That means prioritizing bugs /
>>> patches from outside groups (to drop below 90% on code commits),
>>> mentoring every outside member that provides feedback (to drop below 90%
>>> on reviews), shifting development resources towards mentoring / docs /
>>> on ramp exercises for others in the community (to drop below 90% on core
>>> team).
>>> 
>>> Digging out of a single vendor status is hard, and requires making that
>>> your top priority. If teams aren't interested in putting that ahead of
>>> development work, that's fine, but that doesn't make it a sustainable
>>> OpenStack project.
>> 
>> 
>> ++ to the above! I don't think they are that different either and we might 
>> not
>> need to differentiate them after all.
>> 
>> Flavio
>> 
> 
> I do have one question - how are teams getting out of
> "team:single-vendor" and towards "team:diverse-affiliation" ?
> 
> We have tried to get more people involved with Designate using the ways
> we know how - doing integrations with other projects, pushing designate
> at conferences, helping DNS Server vendors to add drivers, adding
> drivers for DNS Servers and service providers ourselves, adding
> features - the lot.
> 
> We have a lot of user interest (41% of users were interested in using
> us), and are quite widely deployed for a non tc-approved-release
> project (17% - 5% in production). We are actually the most deployed
> non tc-approved-release project.
> 
> We still have 81% of the reviews done by 2 companies, and 83% by 3
> companies.
> 
> I know our project is not "cool", and DNS is probably one of the most
> boring topics, but I honestly believe that it has a place in the
> majority of OpenStack clouds - both public and private. We are a small
> team of people dedicated to making Designate the best we can, but are
> still one company deciding to drop OpenStack / DNS development from
> joining the single-vendor party.
> 
> We are definitely interested in putting community development ahead of
> development work - but what that actual work is seems to difficult to
> nail down. I do feel sometimes that I am flailing in the dark trying to
> improve this.
> 
> If projects could share how they got out of single-vendor or into 
> diverse-affiliation this could really help teams progress in the
> community, and avoid being removed.
> 
> Making grand statements about "work harder on community" without any
> guidance about what we need to work on do not help the community.
> 
> - Graham
> 
> 

Interesting thread… it raises some questions for me

- Some projects in the big tent are inter-related. For example, if we identify 
a need for a project in our production cloud, we contribute a puppet module 
upstream into the openstack-puppet project. If the project is then evicted, 
does this mean that the puppet module would also be removed from the puppet 
openstack project ? Documentation repositories ? 

- Operators considering including a project in their cloud portfolio look at 
various criteria in places like the project navigator. If a project does not 
have diversity, there is a risk that it would not remain in the big tent after 
an 18 month review of diversity. An operator may therefore delay their testing 
and production deployment of that project which makes it more difficult to 
achieve the diversity given lack of adoption.

I think there is a difference between projects which are meeting a specific set 
of needs with the user community but are not needing major support and one 
which is not meeting the 4 opens. We’ve really appreciated projects which solve 
a need for us such as EC2 API and RDO which have been open but also had 
significant support from a vendor. They could have improved their diversity by 
submitting fewer commits to get the percentages better...

Tim

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [glance][nova] Globally disabling hw_qemu_guest_agent support

2016-07-28 Thread Tim Bell
Looking at the number of options for image properties, it would seem that a 
blacklist would be in order. I would be in favour of ‘standard’ images which 
support fsfreeze being able to use the guest agent, and of some of the NUMA 
properties not being available for end-user images but still available for system ones.

How about a list of delegated properties for images which could override the 
default flavor settings ?

Tim

On 20/07/16 00:40, "Daniel Russell"  wrote:

Hi Daniel,

Fair enough.  I don't personally understand your stance against having a 
configuration option to specifically disable guest agent but imagine there 
would be advantages to having a more generic implementation that can handle 
more use-cases (any property instead of just a specific property).  I imagine 
there will need to be a nova scheduler component to it as well (Or we might 
schedule an instance on a hypervisor that is configured not to allow it).

Is there a blueprint or spec for this kind of thing yet?  I can help put one 
together if there is interest but the implementation is probably for more 
seasoned developers.

Regards,
Dan.

-Original Message-
From: Daniel P. Berrange [mailto:berra...@redhat.com] 
Sent: Tuesday, 19 July 2016 6:39 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [glance][nova] Globally disabling 
hw_qemu_guest_agent support

On Tue, Jul 19, 2016 at 12:51:07AM +, Daniel Russell wrote:
> Hi Erno,
> 
> For the size of team I am in I think it would work well but it feels 
> like I am putting the security of Nova in the hands of Glance.

Yep, from an architectural pov it is not very good. Particularly in a 
multi-hypervisor compute deployment you can have the situation where you want 
to allow a property for one type of hypervisor but forbid it for another.

What we really need is the exact same image property security restrictions 
implemented by nova-compute, so we can setup compute nodes to blacklist certain 
properties.

> 
> What I was more after was a setting in Nova that says 'this hypervisor 
> does not allow guest sockets and will ignore any attempt to create 
> them', 'this hypervisor always creates guest sockets regardless of 
> your choice', 'this hypervisor will respect whatever you throw in 
> hw_qemu_guest_agent with a default of no', or 'this hypervisor will 
> respect whatever you throw in hw_qemu_guest_agent with a default of 
> yes'.  It feels like a more appropriate place to control and manage that kind 
> of configuration.

Nope, there's no such facility right now - glance property protection is the 
only real option. I'd be very much against adding a lockdown which was specific 
to the guest agent too - if we did anything it would be to have a generic 
property protection model in nova that mirrors what glance supports.

Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][nova-docker] Retiring nova-docker project

2016-07-08 Thread Tim Bell
Given the inconsistency between the user survey and the apparent 
utilization/maintenance, I’d recommend checking on the openstack-operators list 
to see if this would be a major issue.

Tim

On 08/07/16 13:54, "Amrith Kumar"  wrote:

Did not realize that; I withdraw my request. You are correct; 12 months+ is 
fair warning.

-amrith

P.S. I volunteered (in Ann Arbor) to work with you and contribute to 
nova-docker but I guess that's now moot; it'd have been fun :)

> -Original Message-
> From: Davanum Srinivas [mailto:dava...@gmail.com]
> Sent: Friday, July 08, 2016 7:32 AM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [nova][nova-docker] Retiring nova-docker
> project
> 
> Amrith,
> 
> A year and few months is sufficient notice:
> http://markmail.org/message/geijiljch4yxfcvq
> 
> I really really want this to go away. Every time this comes up,
> example it came up in Austin too, a few people raise their hands and
> then do not show up. (Not saying you will do the same!).
> 
> -- Dims
> 
> On Fri, Jul 8, 2016 at 7:10 AM, Amrith Kumar  wrote:
> > Does it make sense that this conversation about the merits of nova-
> docker be had before the retirement is actually initiated. It seems odd
> that in the face of empirical evidence of actual use (user survey) we
> merely hypothesize that people are likely using their own forks and
> therefore it is fine to retire this project.
> >
> > As ttx indicates there is nothing wrong with a project with low
> activity. That said, if the issue is that nova-docker is not actively
> maintained and broken, then what it needs is contributors not retirement.
> >
> > -amrith
> >
> >> -Original Message-
> >> From: Daniel P. Berrange [mailto:berra...@redhat.com]
> >> Sent: Friday, July 08, 2016 5:03 AM
> >> To: OpenStack Development Mailing List (not for usage questions)
> >> 
> >> Subject: Re: [openstack-dev] [nova][nova-docker] Retiring nova-docker
> >> project
> >>
> >> On Fri, Jul 08, 2016 at 10:11:59AM +0200, Thierry Carrez wrote:
> >> > Matt Riedemann wrote:
> >> > > [...]
> >> > > Expand the numbers to 6 months and you'll see only 13 commits.
> >> > >
> >> > > It's surprisingly high in the user survey (page 39):
> >> > >
> >> > > https://www.openstack.org/assets/survey/April-2016-User-Survey-
> >> Report.pdf
> >> > >
> >> > > So I suspect most users/deployments are just running their own
> forks.
> >> >
> >> > Why ? Is it completely unusable as it stands ? 13 commits in 6 months
> >> sounds
> >> > like enough activity to keep something usable (if it was usable in
> the
> >> first
> >> > place). We have a lot of (official) projects and libraries with less
> >> > activity than that :)
> >> >
> >> > I'm not sure we should be retiring an unofficial project if it's
> usable,
> >> > doesn't have critical security issues and is used by a number of
> >> people...
> >> > Now, if it's unusable and abandoned, that's another story.
> >>
> >> Nova explicitly provides *zero* stable APIs for out of tree drivers to
> >> use. Changes to Nova internals will reliably break out of tree drivers
> >> at least once during a development cycle, often more. So you really do
> >> need someone committed to updating out of tree drivers to cope with the
> >> fact that they're using an explicitly unstable API. We actively intend
> >> to keep breaking out of tree drivers as often as suits Nova's best
> >> interests.
> >>
> >> Regards,
> >> Daniel
> >> --
>> |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
>> |: http://libvirt.org -o- http://virt-manager.org :|
>> |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
>> |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
> >>
> >>
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> --
> Davanum Srinivas :: https://twitter.com/dims
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [Magnum] The Magnum Midcycle

2016-06-09 Thread Tim Bell
If we can confirm the dates and location, there is a reasonable chance we could 
also offer remote conferencing using Vidyo at CERN. While it is not the same as 
an F2F experience, it would provide the possibility for remote participation 
for those who could not make it to Geneva.

We may also be able to organize tours, such as to the antimatter factory and 
superconducting magnet test labs before or afterwards, if anyone is interested…

Tim

From: Spyros Trigazis 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday 8 June 2016 at 16:43
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle

Hi Hongbin.

CERN's location: https://goo.gl/maps/DWbDVjnAvJJ2

Cheers,
Spyros


On 8 June 2016 at 16:01, Hongbin Lu 
> wrote:
Ricardo,

Thanks for the offer. Would I know where is the exact location?

Best regards,
Hongbin

> -Original Message-
> From: Ricardo Rocha 
> [mailto:rocha.po...@gmail.com]
> Sent: June-08-16 5:43 AM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Magnum] The Magnum Midcycle
>
> Hi Hongbin.
>
> Not sure how this fits everyone, but we would be happy to host it at
> CERN. How do people feel about it? We can add a nice tour of the place
> as a bonus :)
>
> Let us know.
>
> Ricardo
>
>
>
> On Tue, Jun 7, 2016 at 10:32 PM, Hongbin Lu 
> >
> wrote:
> > Hi all,
> >
> >
> >
> > Please find the Doodle pool below for selecting the Magnum midcycle
> date.
> > Presumably, it will be a 2 days event. The location is undecided for
> now.
> > The previous midcycles were hosted in bay area so I guess we will
> stay
> > there at this time.
> >
> >
> >
> > http://doodle.com/poll/5tbcyc37yb7ckiec
> >
> >
> >
> > In addition, the Magnum team is finding a host for the midcycle.
> > Please let us know if you interest to host us.
> >
> >
> >
> > Best regards,
> >
> > Hongbin
> >
> >
> >
> __
> >  OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova] Replacing user based policies in Nova

2016-06-03 Thread Tim Bell

With https://review.openstack.org/324068 (thanks ☺), the key parts of user 
based policies as currently deployed would be covered in the short term. 
However, my understanding is that this functionality needs to be replaced with 
something sustainable in the long term and consistent with the approach that 
permissions should be on a per-project basis rather than a per-instance/object.

Looking at the use cases:


-  Shared pools of quota between smaller teams

-  Protection from a VM created by one team being shutdown/deleted/etc 
by another

I think much of this could be handled using nested projects in the future.

Specifically,


-  Given a project ‘long tail’, smaller projects could be created under 
that which would share the total ‘long tail’ quota with other siblings

-  Project ‘higgs’ could be a sub-project of ‘long tail’ and have its 
own role assignments so that the members of the team of sub-project ‘diphoton’ 
could not affect the ‘higgs’ VMs

-  The administrator of the project ‘long tail’ would be responsible 
for setting up the appropriate user<->role mappings for the sub projects and 
not require tickets to central support teams

-  This could potentially be taken to the ‘personal project’ use case 
following the implementation of https://review.openstack.org/#/c/324055 in 
Keystone and implementation in other projects

Does this sound doable ?
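
As a rough sketch of the above (project, user and role names are placeholders, 
assuming Keystone v3 hierarchical projects):

    openstack project create longtail
    openstack project create --parent longtail higgs
    openstack project create --parent longtail diphoton
    openstack role add --project higgs --user higgs-lead Member

so that members of ‘diphoton’ hold no role on ‘higgs’, and the quota of the 
sub-projects would be drawn from the ‘longtail’ allocation once nested quotas 
are available.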

The major missing piece that I would see for the implementation would be the 
nested quotas in Nova/Cinder. The current structure seems to be to try to build 
a solution on top of the delimiter library, but this is early days.

I’d be happy for feedback on the technical viability of this proposal and then 
I can review with those who have raised the need to see if it would work for 
them.

Tim


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling

2016-06-03 Thread Tim Bell
The documentation at http://docs.openstack.org/admin-guide/compute-flavors.html 
is gradually improving. Are there areas which were not covered in your 
clarifications ? If so, we should fix the documentation too since this is a 
complex area to configure and good documentation is a great help.

BTW, there is also an issue around how the RAM for the BIOS is shadowed. I 
can’t find the page from a quick google but we found an imbalance when we used 
2GB pages as the RAM for BIOS shadowing was done by default in the memory space 
for only one of the NUMA spaces.

Having a look at the KVM XML can also help a bit if you are debugging.
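
For reference, the flavor settings under discussion are extra specs along these 
lines (values purely illustrative):

    openstack flavor set m1.large --property hw:mem_page_size=2048
    openstack flavor set m1.large --property hw:numa_nodes=2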

Tim

From: Paul Michali 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Friday 3 June 2016 at 15:18
To: "Daniel P. Berrange" , "OpenStack Development Mailing 
List (not for usage questions)" 
Subject: Re: [openstack-dev] [nova] NUMA, huge pages, and scheduling

See PCM inline...
On Fri, Jun 3, 2016 at 8:44 AM Daniel P. Berrange 
> wrote:
On Fri, Jun 03, 2016 at 12:32:17PM +, Paul Michali wrote:
> Hi!
>
> I've been playing with Liberty code a bit and had some questions that I'm
> hoping Nova folks may be able to provide guidance on...
>
> If I set up a flavor with hw:mem_page_size=2048, and I'm creating (Cirros)
> VMs with size 1024, will the scheduling use the minimum of the number of

1024 what units ? 1024 MB, or 1024 huge pages aka 2048 MB ?

PCM: I was using small flavor, which is 2 GB. So that's 2048 MB and the page 
size is 2048K, so 1024 pages? Hope I have the units right.



> huge pages available and the size requested for the VM, or will it base
> scheduling only on the number of huge pages?
>
> It seems to be doing the latter, where I had 1945 huge pages free, and
> tried to create another VM (1024) and Nova rejected the request with "no
> hosts available".

From this I'm guessing you're meaning 1024 huge pages aka 2 GB earlier.

Anyway, when you request huge pages to be used for a flavour, the
entire guest RAM must be able to be allocated from huge pages.
ie if you have a guest with 2 GB of RAM, you must have 2 GB worth
of huge pages available. It is not possible for a VM to use
1.5 GB of huge pages and 500 MB of normal sized pages.

PCM: Right, so, with 2GB of RAM, I need 1024 huge pages of size 2048K. In this 
case, there are 1945 huge pages available, so I was wondering why it failed. 
Maybe I'm confusing sizes/pages?



> Is this still the same for Mitaka?

Yep, this use of huge pages has not changed.

> Where could I look in the code to see how the scheduling is determined?

Most logic related to huge pages is in nova/virt/hardware.py

> If I use mem_page_size=large (what I originally had), should it evenly
> assign huge pages from the available NUMA nodes (there are two in my case)?
>
> It looks like it was assigning all VMs to the same NUMA node (0) in this
> case. Is the right way to change to 2048, like I did above?

Nova will always avoid spreading your VM across 2 host NUMA nodes,
since that gives bad performance characteristics. IOW, it will always
allocate huge pages from the NUMA node that the guest will run on. If
you explicitly want your VM to spread across 2 host NUMA nodes, then
you must tell nova to create 2 *guest* NUMA nodes for the VM. Nova
will then place each guest NUMA node, on a separate host NUMA node
and allocate huge pages from node to match. This is done using
the hw:numa_nodes=2 parameter on the flavour

PCM: Gotcha, but that was not the issue I'm seeing. With this small flavor (2GB 
= 1024 pages), I had 13107 huge pages initially. As I created VMs, they were 
*all* placed on the same NUMA node (0). As a result, when I got to more than 
half the available pages, Nova failed to allow further VMs, even though I had 
6963 available on one compute node, and 5939 on another.

It seems that all the assignments were to node zero. Someone suggested to me to 
set mem_page_size to 2048, and at that point it started assigning to both NUMA 
nodes evenly.
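
(As an aside, the per-NUMA-node huge page availability can be checked directly 
on the compute node with something like

    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages

which makes this kind of imbalance easy to spot.)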

Thanks for the help!!!


Regards,

PCM


> Again, has this changed at all in Mitaka?

Nope. Well aside from random bug fixes.

Regards,
Daniel
--
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [craton] meeting notes

2016-05-16 Thread Tim Bell
Thanks for the notes.

Is the agreement to work using Craton as a base for the fleet management ?

Tim

From: sean roberts 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday 16 May 2016 at 17:51
To: "openstack-dev@lists.openstack.org" 
Subject: [openstack-dev] [craton] meeting notes

first team meeting notes here 
https://etherpad.openstack.org/p/craton-meeting-2016-05-16

We have two weeks until M1, which is a good date to finalize what features we 
will have in the newton release.

We proposed the project name Craton to stick. Let's propose an alternative here 
quickly if anyone wants.

We agreed to hold meetbot IRC meetings 07:30 PDT. The #openstack-meeting-4 slot 
is available.

We agreed to move all email to 
openstack-dev@lists.openstack.org 
with the subject starting with [craton]

The chat.freenode.net IRC channel #craton is up and 
running, but not logged yet.

We want to target getting together face to face around the time of the yet to 
be scheduled Operators Mid-Cycle. 19 July Watcher mid-cycle in hillsboro, OR 
may be the area and time.

I am happy to patch both the IRC meeting and craton logging patches, as soon as 
we decide.

I would recommend that we decide on these items today or tomorrow, so we can 
get started on important project work.

~ sean
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] How to document 'labels'

2016-05-12 Thread Tim Bell
I’d be in favor of 1.

At the end of the man page or full help text, a URL could be useful for more 
information but since most people using the CLI will have to do a context 
change to access the docs, it is not a simple click but a copy/paste/find the 
browser window which is not so friendly.

Tim

From: Jamie Hannaford 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday 12 May 2016 at 16:04
To: "OpenStack Development Mailing List (not for usage questions)" 

Cc: Qun XK Wang 
Subject: Re: [openstack-dev] [magnum] How to document 'labels'


+1 for 1 and 3.



I'm not sure maintainability should discourage us from exposing information to 
the user through the client - we'll face the same maintenance burden as we 
currently do, and IMO it's our job as a team to ensure our docs are up-to-date. 
Any kind of input which touches the API should also live in the API docs, 
because that's in line with every other OpenStack service.



I don't think I've seen documentation exposed via the API before (#2). I think 
it's a lot of work too, and I don't see what benefit it provides.



Jamie




From: Hongbin Lu 
Sent: 11 May 2016 21:52
To: OpenStack Development Mailing List (not for usage questions)
Cc: Qun XK Wang
Subject: [openstack-dev] [magnum] How to document 'labels'

Hi all,

This is a continued discussion from the last team meeting. For recap, ‘labels’ 
is a property in baymodel and is used by users to input additional key-value 
pairs to configure the bay (an illustrative invocation is sketched after the 
options below). In the last team meeting, we discussed what is the best way to 
document ‘labels’. In general, I heard three options:

1.   Place the documentation in Magnum CLI as help text (as Wangqun 
proposed [1][2]).

2.   Place the documentation in Magnum server and expose them via the REST 
API. Then, have the CLI to load help text of individual properties from Magnum 
server.

3.   Place the documentation in a documentation server (like 
developer.openstack.org/…), and add the doc link to the CLI help text.
For option #1, I think an advantage is that it is close to end-users, thus 
providing a better user experience. In contrast, Tom Cammann pointed out a 
disadvantage that the CLI help text might more easily become out of date. For 
option #2, it should work but incurs a lot of extra work. For option #3, the 
disadvantage is the user experience (since users need to click the link to see 
the documents) but it makes it easier for us to maintain. I am wondering if it 
is possible to have a combination of #1 and #3. Thoughts?
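
For context, whichever option is chosen, the text to document is input of this 
general shape (flag syntax quoted from memory, so treat it as illustrative 
only):

    magnum baymodel-create --name k8s-model --coe kubernetes \
        --image-id fedora-atomic-latest --keypair-id testkey \
        --external-network-id public \
        --labels key1=value1,key2=value2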

[1] https://review.openstack.org/#/c/307631/
[2] https://review.openstack.org/#/c/307642/

Best regards,
Hongbin


Rackspace International GmbH a company registered in the Canton of Zurich, 
Switzerland (company identification number CH-020.4.047.077-1) whose registered 
office is at Pfingstweidstrasse 60, 8005 Zurich, Switzerland. Rackspace 
International GmbH privacy policy can be viewed at 
www.rackspace.co.uk/legal/swiss-privacy-policy - This e-mail message may 
contain confidential or privileged information intended for the recipient. Any 
dissemination, distribution or copying of the enclosed material is prohibited. 
If you receive this transmission in error, please notify us immediately by 
e-mail at ab...@rackspace.com and delete the original message. Your cooperation 
is appreciated.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project][quotas][delimiter] Austin Summit - Design Session Summary

2016-05-12 Thread Tim Bell


On 12/05/16 02:59, "Nikhil Komawar"  wrote:

>Thanks Josh about your reply. It's helpful.
>
>The attempt of this cross project work is to come up with a standard way
>of implementing quota logic that can be used by different services.
>Currently, different projects have their individual implementations and
>there are many learning lessons. The library is supposed to be born out
>of that shared wisdom.
>
>Hence, it needs to be an independent library that can make progress in a
>way, to be successfully adopted and vetted upon by cross project cases;
>but not necessarily enforce cross project standardization for projects
>to adopt it in a particular way.
>
>
>So, not oslo for now at least [1]. BigTent? -- I do not know the
>consequences of it not being in BigTent. We do not need design summit
>slot dedicated for this project, neither do we need to have elections,
>nor it is a big enough project to be coordinated with a specific release
>milestone (Newton, Ocata, etc.). The team, though, does follow the four
>opens [2]. So, we can in future go for either option as needed. As long
>as it lives under openstack/delimiter umbrella, runs the standard gate
>tests, follows the release process of openstack for libraries (but not
>necessarily require intervention of the release team), we are happy.
>

I think it will be really difficult to persuade the mainstream projects to adopt
a library if it is not part of Oslo. Developing a common library for quota
management outside the scope of the common library framework for OpenStack
does not seem to be encouraging the widespread use of delimiter.

What are the issues with being part of oslo ?

Is it that oslo may not want the library or are there constraints that it would 
impose 
on the development ? 

Tim

>
>[1] Personally, I do not care of where it lives after it has been
>adopted by a few different projects. But let's keep the future
>discussions in the Pandora's box for now.
>
>[2] The four opens http://governance.openstack.org/reference/opens.html
>
>On 5/11/16 7:16 PM, Joshua Harlow wrote:
>> So it was under my belief that at its current stage that this library
>> would start off on its own, and not initially start of (just yet) in
>> oslo (as I think the oslo group wants to not be the
>> blocker/requirement for a library being a successful thing + the cost
>> of it being in oslo may not be warranted yet).
>>
>> If in the future we as a community think it is better under oslo (and
>> said membership into oslo will help); then I'm ok with it being
>> there... I just know that others (in the oslo group) have other
>> thoughts here (and hopefully they can chime in).
>>
>> Part of this is also being refined in
>> https://review.openstack.org/#/c/312233/ and that hopefully can be a
>> guideline for new libraries that come along.
>>
>> -Josh
>>
>> Andreas Jaeger wrote:
>>> Since the review [1] to create the repo is up now, I have one question:
>>> This is a cross-project effort, so what is it's governance?
>>>
>>> The review stated it will be an independent project outside of the big
>>> tent - but seeing that this should become a common part for core
>>> projects and specific to OpenStack, I wonder whether that is the right
>>> approach. It fits nicely into Oslo as cross-project library - or it
>>> could be an independent team on its own in the Big Tent.
>>>
>>> But cross-project and outside of Big Tent looks very strange to me,
>>>
>>> Andreas
>>>
>>> [1] https://review.openstack.org/284454
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>-- 
>
>Thanks,
>Nikhil
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-10 Thread Tim Bell
From: Rayson Ho >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>
Date: Tuesday 10 May 2016 at 01:43
To: "OpenStack Development Mailing List (not for usage questions)" 
>
Subject: Re: [openstack-dev] [tc] supporting Go


Go is a production language used by Google, Dropbox, many many web startups, 
and in fact Fortune 500 companies.

Using a package manager won't buy us anything, and like Clint raised, the Linux 
distros are way too slow in picking up new Go releases. In fact, the standard 
way of installing Rust also does not use a package manager:

https://www.rust-lang.org/downloads.html


> I have nothing against golang in particular but I strongly believe that 
> mixing 2 languages within a project is always the wrong decision

It would be nice if we only need to write code in one language. But in the real 
world the "nicer" & "easier" languages like Python & Perl are also the slower 
ones. I used to work for an investment bank, and our system was developed in 
Perl, with performance critical part rewritten in C/C++, so there really is 
nothing wrong with mixing languages. (But if you ask me, I would strongly 
prefer Go than C++.)

Rayson

I hope that the packaging technologies are considered as part of the TC 
evaluation of a new language. While many alternative approaches are available, 
a language which could not be packaged into RPM or DEB would be an additional 
burden for distro builders and deployers.

Does Go present any additional work compared to Python in this area ?

Tim


==
Open Grid Scheduler - The Official Open Source Grid Engine
http://gridscheduler.sourceforge.net/
http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html




>
> If you want to write code in a language that's not Python, go start another 
> project. Don't call it OpenStack. If it ends up being a better implementation 
> than the reference OpenStack Swift implementation, it will win anyways and 
> perhaps Swift will start to look more like the rest of the projects in 
> OpenStack with a standardized API and multiple plugable implementations.
>
> -Ben Swartzlander
>
>
>> Also worth noting, is that go is not a "language runtime" but a compiler
>> (that happens to statically link in a runtime to the binaries it
>> produces...).
>>
>> The point here though, is that the versions of Python that OpenStack
>> has traditionally supported have been directly tied to what the Linux
>> distributions carry in their repositories (case in point, Python 2.6
>> was dropped from most things as soon as RHEL7 was available with Python
>> 2.7). With Go, there might need to be similar restrictions.
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][osc] Use of openstack client for admin commands

2016-05-04 Thread Tim Bell





On 04/05/16 20:54, "Sean Dague"  wrote:

>On 05/04/2016 02:16 PM, Dean Troyer wrote:
>> On Wed, May 4, 2016 at 12:08 PM, Chris Dent > > wrote:
>> 
>> Since then the spec has been updated to reflect using OSC but the
>> question of whether this is in fact the right place for this style
>> of commands remains open. Not just for this situation, but
>> generally.
>> 
>> 
>> This came up again last week, and there is still no real consensus as to
>> whether admin-only stuff should be included in the repo or kept in
>> plugins. The things already in the OSC repo are likely to stay for the
>> forseeable future, new things could go either way.
>> 
>> If you are planning a separate client/lib, it would make sense to do the
>> OSC plugin as part of that lib.  That is also a chance to get a really
>> clean Python API that doesn't have the cruft of novaclient
>
>I think that in the case of the new "placement" API we really want to
>give it a fresh start. It will be a dedicated endpoint from day one, and
>the CLI interaction with it should definitely live outside of
>novaclient. First, because there is no need to take a lot of the gorp
>from novaclient forward. Second, because we want it really clear from
>day one that this effort is going to split from Nova, and you can use
>this thing even without Nova.
>
>This will, at some level, be pretty core infrastructure. Nova, Neutron,
>Cinder (at the least, I'm sure more will over time) will need to talk to
>it programatically, and administrators may need to do some hand tuning
>of resource pools to express things that are not yet automatically
>discovered and advertised (or that never really make sense to be).
>
>Given these constraints it was as much of an ask as anything else. Can
>OSC handle this? Should it handle it from a best practices perspective?
>How are commands exposed / hidden based on user permissions? The fact
>that we're going to need a service library mean that a dedicated admin
>API might be appropriate?
>
>   -Sean

As we implement nested projects, the ‘admin’ activities become much more
difficult to define. Typical use case would be if I was the project 
administrator
for the ATLAS project. I would want to be able to define new projects such
as “ATLAS Higgs” with appropriate membership and quota within the limits
defined by the cloud administrator.

My understanding from this transition is that the majority of the project
commands would be ‘standard’, and therefore OSC support is needed if
the universal client CLI goal is to be achieved.

Tim
>
>-- 
>Sean Dague
>http://dague.net
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project][quotas][delimiter] Austin Summit - Design Session Summary

2016-05-04 Thread Tim Bell


On 04/05/16 20:35, "Nikhil Komawar" <nik.koma...@gmail.com> wrote:

>
>
>On 5/4/16 2:09 PM, Tim Bell wrote:
>>
>> On 04/05/16 19:41, "Nikhil Komawar" <nik.koma...@gmail.com> wrote:
>>
>>> Thanks for the summary and taking care of the setup, Vilobh!
>>>
>>> Pretty meticulously written on what was agreed at the session. Kudos!
>>>
>>> I wanted to add some points that were asked during the Glance
>>> contributors' meetup:
>>>
>>> * The quota limits will be set on the tables in the service that
>>> maintains the resources for each individual resource (not in keystone).
>>> The default value is what is picked from the config. I think over time
>>> we will come up with implementation detail on how the hierarchical
>>> default value should be set.
>>>
>>> * The quota allocation calculation will be based on the project
>>> hierarchy in consideration (given that driver is being used in such
>>> deployment scenario) and the usage for that resource recorded in the
>>> resource's quota table in that service. So, this will involve
>>> interaction with keystone and within the quota table in the project.
>>>
>>> * We will be working on a migration story separately (outside of the
>>> library). Delimiter does not own the quota limits and usage data so it
>>> will not deal with migrations.
>> Given that Glance does not currently have a quota, it may be possible to use 
>> this as the initial implementation. This would also avoid a later migration 
>> effort.
>>
>> Tim
>>
>>
>
>Thanks for the response Tim. I am of the same opinion but haven't really
>internalized this migration path as I've not been on the deploy-er side
>of managing quota yet. Intuitively migration for quotas seems like a
>terrible experience, possibly going over days?
>
>I presume that nested quota support, if built in from get go, when the
>tables are set is probably the best idea of hierarchical quota support.
>Hoping that's not overly pessimistic assumption.


With Glance not having quotas (and a reasonable set of defaults), I think we 
can make the use of the delimiter functionality optional for Glance at the 
beginning, and thus it is a much easier use case than those who have to convert 
the existing combinations of user/project/nested project quotas for production 
clouds. A release cycle with Glance quotas in production would also encourage 
further adoption going forward.

I would like nested quotas not to be a special case as we deploy delimiter, 
just business as usual. There are enough varied use cases, and enough 
flexibility for those who do not need it, to make it the standard deployment 
for delimiter. Clearly, existing installs would need to find a migration path 
but generally, these would not be nested deployments.

A single oslo library gives a good chance to get a consistent implementation 
across multiple OpenStack components. This would massively 
simplify the operator and resource manager use cases.

Tim


>
>>>
>>> On 5/4/16 1:23 PM, Vilobh Meshram wrote:
>>>> Hi All,
>>>>
>>>> For people who missed the design summit session on Delimiter - Cross
>>>> project Quota enforcement library here is a gist of what we discussed.
>>>> Etherpad [1] captures the details. 
>>>>
>>>> 1. Delimiter will not be responsible for rate-limiting.
>>>> 2. Delimiter will not maintain data for the projects.
>>>> 3. Delimiter will not have the concept of reservations.
>>>> 4. Delimiter will fetch information for project quotas from respective
>>>> projects.
>>>> 5. Delimiter will consolidate utility code for quota related issues at
>>>> common place. For example X, Y, Z companies might have different
>>>> scripts to fix quota issues. Delimiter can be a single place for it
>>>> and the scripts can be more generalized to suit everyones needs.
>>>> 6. The details of project hierarchy is maintained in Keystone but
>>>> Delimiter while making calculations for available/free resource will
>>>> take into consideration whether the project has flat or nested hierarchy.
>>>> 7. Delimiter will rely on the concept of generation-id to guarantee
>>>> sequencing. Generation-id gives a point in time view of resource usage
>>>> in a project. Project consuming delimiter will need to provide this
>>>> information while checking or consuming quota. At present Nova [3] has
>>>> the concept of generation-id.
>>>> 8. Spec [5] will be modified based on the design summit discussion.

Re: [openstack-dev] [cross-project][quotas][delimiter] Austin Summit - Design Session Summary

2016-05-04 Thread Tim Bell


On 04/05/16 19:41, "Nikhil Komawar"  wrote:

>Thanks for the summary and taking care of the setup, Vilobh!
>
>Pretty meticulously written on what was agreed at the session. Kudos!
>
>I wanted to add some points that were asked during the Glance
>contributors' meetup:
>
>* The quota limits will be set on the tables in the service that
>maintains the resources for each individual resource (not in keystone).
>The default value is what is picked from the config. I think over time
>we will come up with implementation detail on how the hierarchical
>default value should be set.
>
>* The quota allocation calculation will be based on the project
>hierarchy in consideration (given that driver is being used in such
>deployment scenario) and the usage for that resource recorded in the
>resource's quota table in that service. So, this will involve
>interaction with keystone and within the quota table in the project.
>
>* We will be working on a migration story separately (outside of the
>library). Delimiter does not own the quota limits and usage data so it
>will not deal with migrations.

Given that Glance does not currently have a quota, it may be possible to use 
this as the initial implementation. This would also avoid a later migration 
effort.

Tim



>
>
>On 5/4/16 1:23 PM, Vilobh Meshram wrote:
>> Hi All,
>>
>> For people who missed the design summit session on Delimiter - Cross
>> project Quota enforcement library here is a gist of what we discussed.
>> Etherpad [1] captures the details. 
>>
>> 1. Delimiter will not be responsible for rate-limiting.
>> 2. Delimiter will not maintain data for the projects.
>> 3. Delimiter will not have the concept of reservations.
>> 4. Delimiter will fetch information for project quotas from respective
>> projects.
>> 5. Delimiter will consolidate utility code for quota related issues at
>> common place. For example X, Y, Z companies might have different
>> scripts to fix quota issues. Delimiter can be a single place for it
>> and the scripts can be more generalized to suit everyones needs.
>> 6. The details of project hierarchy is maintained in Keystone but
>> Delimiter while making calculations for available/free resource will
>> take into consideration whether the project has flat or nested hierarchy.
>> 7. Delimiter will rely on the concept of generation-id to guarantee
>> sequencing. Generation-id gives a point in time view of resource usage
>> in a project. Project consuming delimiter will need to provide this
>> information while checking or consuming quota. At present Nova [3] has
>> the concept of generation-id.
>> 8. Spec [5] will be modified based on the design summit discussion.
>>
>> If you want to contribute to Delimiter, please join *#openstack-quota. *
>>
>> We have *meetings every Tuesday at 17:00 UTC. *Please join us !
>> *
>> *
>> I am in the process of setting up a new repo for Delimiter. The
>> launchpad page[4] is up.
>>
>>
>> Thanks!
>>
>> -Vilobh
>>
>> [1] Etherpad : https://etherpad.openstack.org/p/newton-quota-library
>> [2] Slides
>> : 
>> http://www.slideshare.net/vilobh/delimiter-openstack-cross-project-quota-library-proposal
>>  
>> [3] https://review.openstack.org/#/c/283253/
>> [4] https://launchpad.net/delimiter
>> [5] Spec : https://review.openstack.org/#/c/284454
>>
>>
>>  
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>-- 
>
>Thanks,
>Nikhil
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Distributed Database

2016-05-04 Thread Tim Bell

On 04/05/16 19:00, "Edward Leafe"  wrote:

>On May 3, 2016, at 7:05 PM, Mark Doffman  wrote:
>
>> This thread has been a depressing read.
>> 
>> I understand that the content is supposed to be distributed databases but 
>> for me it has become an inquisition of cellsV2.
>
>Thanks for bringing this up, and I feel a lot of the responsibility for this 
>direction. To make how I see things clearer, I wrote a follow-up blog post 
>[0], but for those who aren’t inclined to read it, I think that Cells V2 is a 
>great idea and could be very helpful for many deployments. My only concern was 
>the choice of fragmenting the data. I would hope that any further discussion 
>focuses on that.
>
>[0] http://blog.leafe.com/index.php/2016/05/04/mea-culpa-and-clarification/
>
>
>-- Ed Leafe
>
>

From the perspective of an operator who is running more than 30 cells with over 
6,000 hypervisors, we are strongly supporting the Cells V2 work, including 
investing significant effort with the CERN collaboration with BARC in Mumbai. 
The roadmap seems very concrete which is key for us as a production cloud at 
scale.  In particular, inclusion of blocking tests in the gate will be key to 
address current functional limitations such as flavors, server groups and 
security groups with cells v1.

OpenStack provides lots of opportunities for investigating alternative 
approaches, but we do need to be careful not to end up with a Horizon Effect 
(https://en.wikipedia.org/wiki/Horizon_effect) where potential new solutions 
cause the postponing of a solid production solution at scale, which is needed 
by an increasingly large number of deployments.

Tim
>
>
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread Tim Bell
John,

How would Oslo-like functionality be included ? Would the aim be to produce 
equivalent libraries ?

Tim




On 03/05/16 17:58, "John Dickinson"  wrote:

>TC,
>
>In reference to 
>http://lists.openstack.org/pipermail/openstack-dev/2016-May/093680.html and 
>Thierry's reply, I'm currently drafting a TC resolution to update 
>http://governance.openstack.org/resolutions/20150901-programming-languages.html
> to include Go as a supported language in OpenStack projects.
>
>As a starting point, what would you like to see addressed in the document I'm 
>drafting?
>
>--John
>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Summit Core Party after Austin

2016-04-21 Thread Tim Bell

On 21/04/16 19:40, "Doug Hellmann"  wrote:

>Excerpts from Thierry Carrez's message of 2016-04-21 18:22:53 +0200:
>> Michael Krotscheck wrote:
>>
>> 
>> So.. while I understand the need for calmer parties during the week, I 
>> think the general trends is to have less parties and more small group 
>> dinners. I would be fine with HPE sponsoring more project team dinners 
>> instead :)
>
>That fits my vision of the new event, which is less focused on big
>glitzy events and more on small socializing opportunities.

At OSCON, I remember some very useful discussions where tables had signs 
showing the topics for socializing. While I have appreciated the core reviewers 
(and others) events, I think there are better formats given the massive 
expansion of the projects and ecosystem, which reduces the chances for informal 
discussions.

I remember at the OpenStack Boston summit there was a table marked ‘Puppet’, 
which led to one of the most productive discussions I have had at an OpenStack 
summit (Thanks Dan :-)

Tim

>
>Doug
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][app-catalog][all] Build unified abstraction for all COEs

2016-04-21 Thread Tim Bell


On 21/04/16 17:38, "Hongbin Lu"  wrote:

>
>
>> -Original Message-
>> From: Adrian Otto [mailto:adrian.o...@rackspace.com]
>> Sent: April-21-16 10:32 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [magnum][app-catalog][all] Build unified
>> abstraction for all COEs
>> 
>> 
>> > On Apr 20, 2016, at 2:49 PM, Joshua Harlow 
>> wrote:
>> >
>> > Thierry Carrez wrote:
>> >> Adrian Otto wrote:
>> >>> This pursuit is a trap. Magnum should focus on making native
>> >>> container APIs available. We should not wrap APIs with leaky
>> >>> abstractions. The lowest common denominator of all COEs is an
>> >>> remarkably low value API that adds considerable complexity to
>> Magnum
>> >>> that will not strategically advance OpenStack. If we instead focus
>> >>> our effort on making the COEs work better on OpenStack, that would
>> >>> be a winning strategy. Support and compliment our various COE
>> ecosystems.
>> >
>> > So I'm all for avoiding 'wrap APIs with leaky abstractions' and
>> > 'making COEs work better on OpenStack' but I do dislike the part
>> about COEs (plural) because it is once again the old non-opinionated
>> problem that we (as a community) suffer from.
>> >
>> > Just my 2 cents, but I'd almost rather we pick one COE and integrate
>> > that deeply/tightly with openstack, and yes if this causes some part
>> > of the openstack community to be annoyed, meh, to bad. Sadly I have a
>> > feeling we are hurting ourselves by continuing to try to be
>> everything
>> > and not picking anything (it's a general thing we, as a group, seem
>> to
>> > be good at, lol). I mean I get the reason to just support all the
>> > things, but it feels like we as a community could just pick something,
>> > work together on figuring out how to pick one, using all these bright
>> > leaders we have to help make that possible (and yes this might piss
>> > some people off, to bad). Then work toward making that something
>> great
>> > and move on…
>> 
>> The key issue preventing the selection of only one COE is that this
>> area is moving very quickly. If we would have decided what to pick at
>> the time the Magnum idea was created, we would have selected Docker. If
>> you look at it today, you might pick something else. A few months down
>> the road, there may be yet another choice that is more compelling. The
>> fact that a cloud operator can integrate services with OpenStack, and
>> have the freedom to offer support for a selection of COE’s is a form of
>> insurance against the risk of picking the wrong one. Our compute
>> service offers a choice of hypervisors, our block storage service
>> offers a choice of storage hardware drivers, our networking service
>> allows a choice of network drivers. Magnum is following the same
>> pattern of choice that has made OpenStack compelling for a very diverse
>> community. That design consideration was intentional.
>> 
>> Over time, we can focus the majority of our effort on deep integration
>> with COEs that users select the most. I’m convinced it’s still too
>> early to bet the farm on just one choice.
>
>If Magnum wants to avoid the risk of picking the wrong COE, that means the risk 
>is propagated to all our users. They might pick a COE and explore its 
>complexities. Then they find out another COE is more compelling and their 
>integration work is wasted. I wonder if we can do better by taking the risk 
>and providing insurance for our users? I am trying to understand the rationale 
>that prevents us from improving the integration between COEs and OpenStack. 
>Personally, I don't like to end up with a situation that "this is the pain 
>from our users, but we cannot do anything".

We’re running Magnum and have requests from our user communities for 
Kubernetes, Docker Swarm and Mesos. The use cases are significantly different 
and can justify the selection of different technologies. We’re offering 
Kubernetes and Docker Swarm now and adding Mesos. If I was only to offer one, 
they’d build their own at considerable cost to them and the IT department.

Magnum allows me to make them all available under the single umbrella of quota, 
capacity planning, identity and resource lifecycle. As experience is gained, we 
may make a recommendation for those who do not have a strong need but I am 
pleased to be able to offer all of them under the single framework.

Since we’re building on the native APIs for the COEs, the effort from the 
operator side to add new engines is really very small (compared to trying to 
explain to the user that they’re wrong in choosing something different from the 
IT department).

BTW, our users also really appreciate using the native APIs.

Some more details at 
http://superuser.openstack.org/articles/openstack-magnum-on-the-cern-production-cloud
 and we’ll give more under the hood details in a further blog.

Tim

>
>> 
>> Adrian
>> 
>> >> I'm with Adrian on that one. 

Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)

2016-04-11 Thread Tim Bell

As we’ve deployed more OpenStack components in production, something we have 
really appreciated is the set of common areas

- Single pane of glass for Horizon
- Single accounting infrastructure
- Single resource management, quota and admin roles
- Single storage pools with Cinder
- (not quite yet but) common CLI

Building on this, our workflows have simplified

- Lifecycle management (cleaning up when users leave)
- Onboarding (registering for access to the resources and mapping to the 
appropriate projects)
- Capacity planning (shifting resources, e.g. containers becoming popular 
needing more capacity)

Getting consistent APIs and CLIs is really needed though since the “one 
platform” message is not so easy to explain given the historical decisions, 
such as project vs tenant.

As Subbu has said, the cloud software is one part but there are so many others…

Tim



On 11/04/16 18:08, "Fox, Kevin M"  wrote:

>The more I've used Containers in production the more I've come to the 
>conclusion they are much different beasts then Nova Instances. Nova's 
>abstraction lets Physical hardware and VM's share one common API, and it makes 
>a lot of sense to unify them.
>
>Oh. To be explicit, I'm talking about docker style lightweight containers, not 
>heavy weight containers like LXC ones. The heavy weight ones do work well with 
>Nova. For the rest of the conversation container = lightweight container.
>
>Trove can make use of containers provided there is a standard api in OpenStack 
>for provisioning them. Right now, Magnum provides a way to get Kubernetes 
>orchestrated clusters, for example, but doesn't have good integration with it 
>to hook it into keystone so that Trusts can be used with it on the users 
>behalf for advanced services like Trove. So some pieces are missing. Heat 
>should have a way to have Kubernetes Yaml resources too.
>
>I think the recent request to rescope Kuryr to include non network features is 
>a good step in solving some of the issues.
>
>Unfortunately, it will probably take some time to get Magnum to the point 
>where it can be used by other OpenStack advanced services. Maybe these sorts 
>of issues should be written down and discussed at the upcoming summit between 
>the Magnum and Kuryr teams?
>
>Thanks,
>Kevin
>
>
>
>From: Amrith Kumar [amr...@tesora.com]
>Sent: Monday, April 11, 2016 8:47 AM
>To: OpenStack Development Mailing List (not for usage questions); Allison 
>Randal; Davanum Srinivas; foundat...@lists.openstack.org
>Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One 
>Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
>
>Monty, Dims,
>
>I read the notes and was similarly intrigued about the idea. In particular, 
>from the perspective of projects like Trove, having a common Compute API is 
>very valuable. It would allow the projects to have a single view of 
>provisioning compute, as we can today with Nova and get the benefit of bare 
>metal through Ironic, VM's through Nova VM's, and containers through 
>nova-docker.
>
>With this in place, a project like Trove can offer database-as-a-service on a 
>spectrum of compute infrastructures as any end-user would expect. Databases 
>don't always make sense in VM's, and while containers are great for quick and 
>dirty prototyping, and VM's are great for much more, there are databases that 
>will in production only be meaningful on bare-metal.
>
>Therefore, if there is a move towards offering a common API for VM's, 
>bare-metal and containers, that would be huge.
>
>Without such a mechanism, consuming containers in Trove adds considerable 
>complexity and leads to a very sub-optimal architecture (IMHO). FWIW, a 
>working prototype of Trove leveraging Ironic, VM's, and nova-docker to 
>provision databases is something I worked on a while ago, and have not 
>revisited it since then (once the direction appeared to be Magnum for 
>containers).
>
>With all that said, I don't want to downplay the value in a container specific 
>API. I'm merely observing that from the perspective of a consumer of computing 
>services, a common abstraction is incredibly valuable.
>
>Thanks,
>
>-amrith
>
>> -Original Message-
>> From: Monty Taylor [mailto:mord...@inaugust.com]
>> Sent: Monday, April 11, 2016 11:31 AM
>> To: Allison Randal ; Davanum Srinivas
>> ; foundat...@lists.openstack.org
>> Cc: OpenStack Development Mailing List (not for usage questions)
>> 
>> Subject: Re: [openstack-dev] [OpenStack Foundation] [board][tc][all] One
>> Platform – Containers/Bare Metal? (Re: Board of Directors Meeting)
>>
>> On 04/11/2016 09:43 AM, Allison Randal wrote:
>> >> On Wed, Apr 6, 2016 at 1:11 PM, Davanum Srinivas 
>> wrote:
>> >>> Reading unofficial notes [1], i found one topic very interesting:
>> >>> One Platform – How do we truly support containers and 

Re: [openstack-dev] [all][elections] Results of the TC Election

2016-04-08 Thread Tim Bell

On 08/04/16 20:08, "gordon chung"  wrote:

>
>
>On 08/04/2016 9:14 AM, Thierry Carrez wrote:
>> Eoghan Glynn wrote:
 However, the turnout continues to slide, dipping below 20% for
 the first time:
>>>
>>>Election | Electorate (delta %) | Votes | Turnout (delta %)
>>>==========================================================
>>>Oct '13  | 1106                 | 342   | 30.92
>>>Apr '14  | 1510 (+36.52)        | 448   | 29.69 (-4.05)
>>>Oct '14  | 1893 (+25.35)        | 506   | 26.73 (-9.91)
>>>Apr '15  | 2169 (+14.58)        | 548   | 25.27 (-5.48)
>>>Oct '15  | 2759 (+27.20)        | 619   | 22.44 (-11.20)
>>>Apr '16  | 3284 (+19.03)        | 652   | 19.85 (-11.51)
>>>

 This ongoing trend of a decreasing proportion of the electorate
 participating in TC elections is a concern.
>>
>> One way to look at it is that every cycle (mostly due to the habit of
>> giving summit passes to recent contributors) we have more and more
>> one-patch contributors (more than 600 in Mitaka), and those usually are
>> not really interested in voting... So the electorate number is a bit
>> inflated, resulting in an apparent drop in turnout.
>>
>> It would be interesting to run the same analysis but taking only >=3
>> patch contributors as "expected voters" and see if the turnout still
>> drops as much.
>>
>> Long term I'd like to remove the summit pass perk (or no longer link it
>> to "one commit"). It will likely result in a drop in contributors
>> numbers (gasp), but a saner electorate.
>>
>
>just for reference, while only affecting a subset of the electorate, if 
>you look at the PTL elections, they all had over 40% turnout (even the 
>older and larger projects).
>
>it may be because of those with "one commit", but if that were the case, 
>you would think the turnout would be inline/similar to the PTL elections.

It could also be that the projects with the lower hanging fruit were 
uncontested.

BTW, I don’t feel that a 1-commit person should be ignored in the voting. Many of us
have roles where we do not spend all our time committing, but when we see something
wrong in the docs we spend a few hours to go through the process. I certainly favor
those PTLs/TC members in my voting who still consider the sum of this contribution
to be significant.

As we progress further with the UC-Recognition activity, there will be further
discussion on this, so I feel we should wait for that work before making a proposal
on how those people could also contribute to setting the technical direction.

Tim

>
>cheers,
>-- 
>gord
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] [all] FYI: Removing default flavors from nova

2016-04-06 Thread Tim Bell

On 06/04/16 19:28, "Fox, Kevin M"  wrote:

>+1
>
>From: Neil Jerram [neil.jer...@metaswitch.com]
>Sent: Wednesday, April 06, 2016 10:15 AM
>To: OpenStack Development Mailing List (not for usage questions)
>Subject: Re: [openstack-dev] [nova] [all] FYI: Removing default flavors from 
>nova
>
>I hesitate to write this, even now, but I do think that OpenStack has a
>problem with casual incompatibilities, such as this appears to be.  But,
>frankly, I've been slapped down for expressing my opinion in the past
>(on the pointless 'tenant' to 'project' change), so I just quietly
>despaired when I saw that ops thread, rather than saying anything.
>
>I haven't researched this particular case in detail, so I could be
>misunderstanding its implications.  But in general my impression, from
>the conversations that occur when these topics are raised, is that many
>prominent OpenStack developers do not care enough about
>release-to-release compatibility.  The rule for incompatible changes
>should be "Just Don't", and I believe that if everyone internalized
>that, they could easily find alternative approaches without breaking
>compatibility.
>
>When an incompatible change like this is made, imagine the 1000s of
>operators and users around the world, with complex automation around
>OpenStack, who see their deployment or testing failing, spend a couple
>of hours debugging, and eventually discover 'oh, they removed m1.small'
>or 'oh, they changed the glance command line'.  Given that hassle and
>bad feeling, is the benefit that developers get from the incompatibility
>still worth it?

I have rarely seen the operator community so much in agreement as on the impact of
this change. Over the past 4 years, there have been lots of changes which were
debated with major impacts on end users (EC2, nova-network, …). However, I do not
believe that this is one of those:

This change

- does not break existing clouds
- has a simple five-line shell script to cover the new cloud install (see the
sketch below) and can be applied before opening the cloud to the end users
- raises a fundamental compatibility question to be solved by the community
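
For reference, recreating the old defaults on a fresh cloud before opening it up
is roughly the following (a sketch, run with admin credentials; the RAM/disk/vCPU
values are the ones Nova used to ship, as far as I recall):

  # recreate the former default flavors on a new cloud
  openstack flavor create --id 1 --ram 512   --disk 1   --vcpus 1 m1.tiny
  openstack flavor create --id 2 --ram 2048  --disk 20  --vcpus 1 m1.small
  openstack flavor create --id 3 --ram 4096  --disk 40  --vcpus 2 m1.medium
  openstack flavor create --id 4 --ram 8192  --disk 80  --vcpus 4 m1.large
  openstack flavor create --id 5 --ram 16384 --disk 160 --vcpus 8 m1.xlarge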

What I’d like to replace it with is a generic query along the lines of

give me a flavor with X GB RAM, Y cores, Z system disk and the metadata flags so I
get a GPU and ideally huge pages
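
Until something like that exists server-side, the closest approximation is to
filter client-side; a rough sketch with the plain CLI (thresholds purely
illustrative, and it ignores the metadata/extra_specs part):

  # pick the first flavor offering at least 4 GB RAM and 2 vCPUs
  # (assumes flavor names without embedded spaces)
  for f in $(openstack flavor list -f value -c Name); do
      ram=$(openstack flavor show "$f" -f value -c ram)
      vcpus=$(openstack flavor show "$f" -f value -c vcpus)
      if [ "$ram" -ge 4096 ] && [ "$vcpus" -ge 2 ]; then
          echo "$f"
          break
      fi
  done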

There is a major difference from an option being dropped or major functionality
being deprecated, as I can hide this change from my end users with a few flavor
definitions which make sense for my cloud.

Incompatible changes for existing production deployments, e.g. CLIs,  should be 
handled very
carefully. Cleaning up some past choices for new clouds with appropriate 
documentation
and workarounds to keep the old behaviour seems reasonable.

>
>I would guess there are many others like me, who generally don't say
>anything because they've already observed that the prevailing sentiment
>is not sufficiently on the side of compatibility.

We have a production cloud with 2,200 users who feel the pain of incompatible 
change (and
pass that on to the support teams :-) I feel there is a strong distinction 
between 
incompatible change (i.e. you cannot hide this from your end users) vs change 
with a workaround
(where you can do some work for some projects to emulate the prior environment, 
but new projects
can be working with the future only, not accidentally selecting the legacy 
options).

I do feel that people should be able to raise their concerns; each environment is
different and there is no single scenario. Thus, a debate such as this one is
valuable to find the balance between the need to move forward and the risks.

Tim

>
>Neil
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] FYI: Removing default flavors from nova

2016-04-06 Thread Tim Bell

On 06/04/16 18:36, "Daniel P. Berrange"  wrote:

>On Wed, Apr 06, 2016 at 04:29:00PM +, Fox, Kevin M wrote:
>> It feels kind of like a defcore issue though. Its harder for app
>> developers to create stuff like heat templates intended for cross
>> cloud that recommend a size, m1.small, without a common reference.
>
>Even with Nova defining these default flavours, it didn't do anything
>to help solve this problem as all the public cloud operators were
>just deleting these flavours & creating their own. So it just gave
>people a false sense of standardization where none actually existed.
>

The problem is when the clouds move to m2.*, m3.* etc. and deprecate
old hardware on m1.*.

I think Heat needs more of a query engine along the lines of “give me a
flavor with at least X cores and Y GB RAM” rather than hard-coding m1.large.
Core performance is another parameter that would be interesting to select,
e.g. “give me a core with at least 5 bogomips”.

I don’t see how flavor names could be standardised in the long term.

Tim

>
>Regards,
>Daniel
>-- 
>|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
>|: http://libvirt.org  -o- http://virt-manager.org :|
>|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
>|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ec2-api] EC2 API Future

2016-03-21 Thread Tim Bell

On 21/03/16 17:23, "Doug Hellmann" <d...@doughellmann.com> wrote:

>
>
>> On Mar 20, 2016, at 3:26 PM, Tim Bell <tim.b...@cern.ch> wrote:
>> 
>> 
>> Doug,
>> 
>> Given that the EC2 functionality is currently in use by at least 1/6th of 
>> production clouds 
>> (https://www.openstack.org/assets/survey/Public-User-Survey-Report.pdf page 
>> 34), this is a worrying situation.
>
>I completely agree. 
>
>> 
>> The EC2 functionality was recently deprecated from Nova on the grounds that 
>> the EC2 API project was the correct way to proceed. With the proposal now to 
>> not have an EC2 API project at all, this will leave many in the community 
>> confused.
>
>That wasn't the proposal. We have lots of unofficial projects. My suggestion 
>was that if the EC2 team wasn't participating in the community governance 
>process, we should not list them as official. That doesn't mean disbanding the 
>project, just updating our reference materials to reflect reality and clearly 
>communicat expectations. It sounds like that was a misunderstanding which has 
>been cleared up, though, so I think we're all set to continue considering it 
>an official project. 

There is actually quite a lot of activity going on to get the EC2 API to an
easily deployable state. CERN has been involved in the puppet-ec2api and RDO
packaging, which currently does not count as participation in the EC2 API
project given the split of repositories. However, it is critical for the
deployment of a project that it can be installed and configured.

Tim

>
>Doug
>
>> 
>> Tim
>> 
>> 
>> 
>> 
>>> On 20/03/16 17:48, "Doug Hellmann" <d...@doughellmann.com> wrote:
>>> 
>>> ...
>>> 
>>> The EC2-API project doesn't appear to be very actively worked on.
>>> There is one very recent commit from an Oslo team member, another
>>> couple from a few days before, and then the next one is almost a
>>> month old. Given the lack of activity, if no team member has
>>> volunteered to be PTL I think we should remove the project from the
>>> official list for lack of interest.
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ec2-api] EC2 API Future

2016-03-20 Thread Tim Bell

Doug,

Given that the EC2 functionality is currently in use by at least 1/6th of 
production clouds 
(https://www.openstack.org/assets/survey/Public-User-Survey-Report.pdf page 
34), this is a worrying situation.

The EC2 functionality was recently deprecated from Nova on the grounds that the 
EC2 API project was the correct way to proceed. With the proposal now to not 
have an EC2 API project at all, this will leave many in the community confused.

Tim




On 20/03/16 17:48, "Doug Hellmann"  wrote:

>...
>
>The EC2-API project doesn't appear to be very actively worked on.
>There is one very recent commit from an Oslo team member, another
>couple from a few days before, and then the next one is almost a
>month old. Given the lack of activity, if no team member has
>volunteered to be PTL I think we should remove the project from the
>official list for lack of interest.
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] Wishlist bugs == (trivial) blueprint?

2016-03-19 Thread Tim Bell


On 17/03/16 18:29, "Sean Dague"  wrote:

>On 03/17/2016 11:57 AM, Markus Zoeller wrote:
>
>> Suggested action items:
>> 
>> 1. I close the open wish list items older than 6 months (=138 reports)
>>and explain in the closing comment that they are outdated and the 
>>ML should be used for future RFEs (as described above).
>> 2. I post on the openstack-ops ML to explain why we do this
>> 3. I change the Nova bug report template to explain this to avoid more
>>RFEs in the bug report list in the future.
>> 4. In 6 months I double-check the rest of the open wishlist bugs
>>if they found developers, if not I'll close them too.
>> 5. Continously double-check if wishlist bug reports get created
>>
>> Doubts? Thoughts? Concerns? Agreements?
>
>This sounds like a very reasonable plan to me. Thanks for summarizing
>all the concerns and coming up with a pretty balanced plan here. +1.
>
>   -Sean

I’d recommend running it by the -ops* list along with the RFE proposal. I think
many of the cases had been raised because people did not have the skills or
know-how to proceed.

Engaging with the ops list would also bring in the product working group who 
could potentially
help out on the next step (i.e. identifying the best places to invest for RFEs) 
and the other
topical working groups (e.g. Telco, scientific) who could help with 
prioritisation/triage.

I don’t think that a launchpad account on its own is a big problem. Thus, I could
also see an approach where a blueprint was created in launchpad with some
reasonably structured set of chapters. My personal experience was that the
challenges came later, in trying to get the review matched up with the right
blueprint directories.

There is a big benefit to good visibility in the -ops community for RFEs though.
Quite often, the features are already implemented but people did not know how to
find them in the docs (or maybe it’s a doc bug).
Equally, the OSops scripts repo can give people workarounds while the requested 
feature is in the
priority queue.

It would be a very interesting topic to kick off in the ops list and then have 
a further review in
Austin to agree how to proceed.

Tim
>
>-- 
>Sean Dague
>http://dague.net
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum] High Availability

2016-03-19 Thread Tim Bell

From: Hongbin Lu 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
Date: Saturday 19 March 2016 at 04:52
To: "OpenStack Development Mailing List (not for usage questions)" 
Subject: Re: [openstack-dev] [magnum] High Availability

...
If you disagree, I would request you to justify why this approach works for 
Heat but not for Magnum. Also, I also wonder if Heat has a plan to set a hard 
dependency on Barbican for just protecting the hidden parameters.


There is a risk that we use decisions made by other projects to justify how 
Magnum is implemented. Heat was created 3 years ago according to 
https://www.openstack.org/software/project-navigator/ and Barbican only 2 years 
ago, thus Barbican may not have been an option (or a high risk one).

Barbican has demonstrated that the project has corporate diversity and good 
stability 
(https://www.openstack.org/software/releases/liberty/components/barbican). 
There are some areas that could be improved (packaging and puppet modules often
need some more investment).

I think it is worth a go to try it out and have concrete areas to improve if 
there are problems.
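
For anyone wanting to kick the tyres, storing and listing a secret is a one-liner
each with the barbican plugin for the openstack client (a sketch; the secret name
and payload here are made up):

  # store a credential in Barbican and list what is stored
  openstack secret store --name magnum-demo-secret --payload "not-a-real-secret"
  openstack secret list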

Tim

If you don’t like code duplication between Magnum and Heat, I would suggest to 
move the implementation to a oslo library to make it DRY. Thoughts?

[1] 
https://specs.openstack.org/openstack/heat-specs/specs/juno/encrypt-hidden-parameters.html

Best regards,
Hongbin

From: David Stanek [mailto:dsta...@dstanek.com]
Sent: March-18-16 4:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability


On Fri, Mar 18, 2016 at 4:03 PM Douglas Mendizábal wrote:
[snip]
>
> Regarding the Keystone solution, I'd like to hear the Keystone team's 
> feadback on that.  It definitely sounds to me like you're trying to put a 
> square peg in a round hole.
>

I believe that using Keystone for this is a mistake. As mentioned in the 
blueprint, Keystone is not encrypting the data so magnum would be on the hook 
to do it. So that means that if security is a requirement you'd have to 
duplicate more than just code. magnum would start having a larger security 
burden. Since we have a system designed to securely store data I think that's 
the best place for data that needs to be secure.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cross-project] [all] Quotas -- service vs. library

2016-03-16 Thread Tim Bell
On 16/03/16 07:25, "Nikhil Komawar"  wrote:



>Hello everyone,
>
>tl;dr;
>I'm writing to request some feedback on whether the cross project Quotas
>work should move ahead as a service or a library or going to a far
>extent I'd ask should this even be in a common repository, would
>projects prefer to implement everything from scratch in-tree? Should we
>limit it to a guideline spec?
>
>But before I ask anymore, I want to specifically thank Doug Hellmann,
>Joshua Harlow, Davanum Srinivas, Sean Dague, Sean McGinnis and  Andrew
>Laski for the early feedback that has helped provide some good shape to
>the already discussions.
>
>Some more context on what the happenings:
>We've this in progress spec [1] up for providing context and platform
>for such discussions. I will rephrase it to say that we plan to
>introduce a new 'entity' in the Openstack realm that may be a library or
>a service. Both concepts have trade-offs and the WG wanted to get more
>ideas around such trade-offs from the larger community.
>
>Service:
>This would entail creating a new project and will introduce managing
>tables for quotas for all the projects that will use this service. For
>example if Nova, Glance, and Cinder decide to use it, this 'entity' will
>be responsible for handling the enforcement, management and DB upgrades
>of the quotas logic for all resources for all three projects. This means
>less pain for projects during the implementation and maintenance phase,
>holistic view of the cloud and almost a guarantee of best practices
>followed (no clutter or guessing around what different projects are
>doing). However, it results into a big dependency; all projects rely on
>this one service for right enforcement, avoiding races (if do not
>incline on implementing some of that in-tree) and DB
>migrations/upgrades. It will be at the core of the cloud and prone to
>attack vectors, bugs and margin of error.

This has been proposed a number of times in the past with projects such as Boson
(https://wiki.openstack.org/wiki/Boson) and an extended discussion at one of the
summits (I think it was San Diego).

Then, there were major reservations from the PTLs about the impacts in terms of
latency, ability to reconcile and loss of control (transactions are difficult,
transactions across services more so).

>Library:
>A library could be thought of in two different ways:
>1) Something that does not deal with backed DB models, provides a
>generic enforcement and management engine. To think ahead a little bit
>it may be a ABC or even a few standard implementation vectors that can
>be imported into a project space. The project will have it's own API for
>quotas and the drivers will enforce different types of logic; per se
>flat quota driver or hierarchical quota driver with custom/project
>specific logic in project tree. Project maintains it's own DB and
>upgrades thereof.
>2) A library that has models for DB tables that the project can import
>from. Thus the individual projects will have a handy outline of what the
>tables should look like, implicitly considering the right table values,
>arguments, etc. Project has it's own API and implements drivers in-tree
>by importing this semi-defined structure. Project maintains it's own
>upgrades but will be somewhat influenced by the common repo.
>
>Library would keep things simple for the common repository and sourcing
>of code can be done asynchronously as per project plans and priorities
>without having a strong dependency. On the other hand, there is a
>likelihood of re-implementing similar patterns in different projects
>with individual projects taking responsibility to keep things up to
>date. Attack vectors, bugs and margin of error are project responsibilities
>
>Third option is to avoid all of this and simply give guidelines, best
>practices, right packages to each projects to implement quotas in-house.
>Somewhat undesirable at this point, I'd say. But we're all ears!

I would favor a library, at least initially. If we cannot agree on a library, it
is unlikely that we can get a service adopted (even if it is desirable).

A library (along the lines of 1 or 2 above) would allow a consistent implementation
of nested quotas and user quotas. Nested quotas are currently only implemented
in Cinder, and user quota implementations vary between projects, which is
confusing.
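
The client side is already unified, e.g. something like the following works today
(project name made up), but what happens underneath differs per service, which is
exactly what a common library could make consistent:

  # view and adjust per-project limits across services from one CLI
  openstack quota show myproject
  openstack quota set --instances 20 --cores 40 --volumes 50 myproject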

Now that we have Oslo (there was no similar structure when it was first 
discussed),
we have the possibility to implement these concepts in a consistent way across
OpenStack and give a better user experience as a result.

Tim

>
>Thank you for reading and I anticipate more feedback.
>
>[1] https://review.openstack.org/#/c/284454/
>
>-- 
>
>Thanks,
>Nikhil
>
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [nova] Wishlist bugs == (trivial) blueprint?

2016-03-15 Thread Tim Bell

On 15/03/16 16:21, "Markus Zoeller"  wrote:

>Sean Dague  wrote on 03/15/2016 02:52:47 PM:
>
>> From: Sean Dague 
>> To: openstack-dev@lists.openstack.org
>> Date: 03/15/2016 02:53 PM
>> Subject: Re: [openstack-dev] [nova] Wishlist bugs == (trivial) 
>blueprint?
>> 
>> On 03/15/2016 09:37 AM, Chris Dent wrote:
>> > On Tue, 15 Mar 2016, Markus Zoeller wrote:
>> > 
>> >> Long story short, I'm in favor of abandoning the use of "wishlist"
>> >> as an importance in bug reports to track requests for enhancements.
>> > 
>> > While I'm very much in favor of limiting the amount of time issues
>> > (of any sort) linger in launchpad[1] I worry that if we stop making
>> > "wishlist" available as an option then people who are not well
>> > informed about the complex system for achieving features in Nova
>> > will have no medium to get their ideas into the system. We want
>> > users to sometime be able to walk up, drop an idea and move on without
>> > having to be responsible for actually doing that work. If we insist
>> > that such ideas must go through the blueprint process then most
>> > ideas will be left unstated.
>> 
>> I believe that 0% of such drive by wishlist items ever get implemented.
>> I also think they mostly don't even get ACKed until 6 or 12 months after
>> submission. So It's not really a useful feedback channel.
>> 
>> So I'm pro just closing Wishlist items as Opinion and moving on.
>> Probably with some boiler plate around it about submission guidelines
>> for making a lightweight blueprint.
>
>A few more specific numbers which could help to make a decission.
>Open wishlist bug reports older than:
>>   now: 146
>>  6 months: 141
>> 12 months: 122
>
>Wishlist bug reports in progress: 9
>Wishlist bug reports implemented:
>46 (last 24 months)
>25 (last 18 months)
>19 (last 12 months)
> 5 (last  6 months)
>
>Based on that it seems to me that it is not a very successful 
>channel to get ideas implemented(!). The dropping is easy though.
>
>Based on that I'm very much in favor of the agressive choice to close 
>the remaining 146 wishlist bugs with a comment which explains how to
>go on from there (using backlog specs).


The bug process was very lightweight for an operator who found something they
would like enhanced. It could be done through the web and did not require
git/gerrit knowledge. I went through the process for a change:

- Reported a bug for the need to add an L2 cache size option for QEMU 
(https://bugs.launchpad.net/nova/+bug/1509304) closed as invalid since this was 
a feature request
- When this was closed, I followed the process and submitted a spec 
(https://blueprints.launchpad.net/nova/+spec/qcow2-l2-cache-size-configuration)

It was not clear to me how to proceed from here.

The risk I see is that we are missing input to the development process in view 
of the complexity of submitting those requirements. Clearly, setting the bar 
too low means that there is no clear requirement statement etc. However, I 
think the combination of tools and assumption of knowledge of the process means 
that we are missing the opportunity for good quality input.

Many of these are low-hanging-fruit improvements which could be used to bring
developers into the community if we can find a good way to get the input and
match it with the resources to implement it.

Tim

>
>> > What I think we need to do instead is fix this problem:
>> > 
>> >> * we don't have a process to transform wishlist bugs to blueprints
>> > 
>> > such that we do have a process of some kind where a wishlist idea
>> > either gets an owner who starts the blueprint process (because it is
>> > just that cool) or dies from lack of attention.
>> > 
>> > It's clear, though, that we already have a huge debt in bug/issue
>> > management so adding yet another task is hard to contemplate.
>> > 
>> > I think we can address some of that by more quickly expiring bugs
>> > that have had no recent activity or attention, on the assumption
>> > that:
>> > 
>> > * They will come back up again if they are good ideas or real bugs.
>> > * Lack of attention is a truthy signal of either lack of resources or 
>lack
>> >   of importance.
>> > 
>> > What needs to happen is that fewer things which are not actionable
>> > or nobody is interested in show up when traversing the bugs looking
>> > for something to work on.
>> > 
>> > I'm happy to help some of this become true, in part because of [1]
>> > below.
>> > 
>> > [1] I've recently spent a bit of time chasing bugs tagged
>> > "scheduler" and far too many of them are so old that it's impossible
>> > to tell whether they matter any more, and many of them are confused
>> > by patches and people who have gone in and out of existence. It's
>> > challenging to tease out what can be done and the information has
>> > very little archival value. It should go off the radar. Having a
>> > bunch of stuff that looks like it needs to be done but 

Re: [openstack-dev] [all][zaqar][cloudkitty] Default ports list

2016-03-10 Thread Tim Bell


From: Sylvain Bauza 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
Date: Thursday 10 March 2016 at 10:04
To: "OpenStack Development Mailing List (not for usage questions)" 
Subject: Re: [openstack-dev] [all][zaqar][cloudkitty] Default ports list



On 09/03/2016 23:41, Matt Fischer wrote:
This is not the first time. Monasca and Murano had a collision too[1]. When 
this happens the changes trickle down into automation tools also and 
complicates things.

[1] https://bugs.launchpad.net/murano/+bug/1505785


IMHO, all that info has to be standardized in the Service Catalog. That's where 
endpoint informations can be found for a specific service type and that's the 
basement for cross-project communication.

FWIW, there is one cross-project spec trying to clean-up the per-project bits 
that are not common 
https://github.com/openstack/openstack-specs/blob/master/specs/service-catalog.rst

I'm torn between 2 opinions :
 - either we consider that all those endpoints are (or should be - for those 
which aren't) manageable thru config options, and thus that's not a problem we 
should solve. Any operator can then modify the ports to make sure that two 
conflicting big-tent projects can work together.
 - or, we say that it can be a concern for interoperability, and then we should 
somehow ensure that all projects can work together. Then, a documentation link 
isn't enough IMHO, we should rather test that.


If we can make it so that there are reasonable default port commonalities between
OpenStack clouds, this would be good. Clearly, the service catalog is the master,
so I don’t think there is an interoperability concern, but having each of the
projects use a distinct default port would simplify some of the smaller
configurations with multiple services on a single box.

This does assume that there are fewer big tent projects than available TCP/IP
ports :-)
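
As the catalog is authoritative, nothing client-side should need to hard-code a
port; checking what a given deployment actually exposes is just the following
(assuming Zaqar is registered under the usual 'messaging' service type):

  # show where the deployment actually serves the service, whatever port it chose
  openstack catalog show messaging
  openstack endpoint list --service messaging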

Tim





On Wed, Mar 9, 2016 at 3:30 PM, Xav Paice wrote:
From an ops point of view, this would be extremely helpful information to share 
with various teams around an organization.  Even a simple wiki page would be 
great.

On 10 March 2016 at 10:35, Fei Long Wang wrote:
Hi all,

Yesterday I just found cloudkitty is using the same default port () which 
is used by Zaqar now. So I'm wondering if there is any rule/policy for those 
new services need to be aware. I googled but can't find anything about this. 
The only link I can find is 
http://docs.openstack.org/liberty/config-reference/content/firewalls-default-ports.html.
 So my question is should we document the default ports list on an official 
place given the big tent mode? Thanks.

--
Cheers & Best regards,
Fei Long Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone] Using multiple token formats in a one openstack cloud

2016-03-08 Thread Tim Bell

From: Matt Fischer 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
Date: Tuesday 8 March 2016 at 20:35
To: "OpenStack Development Mailing List (not for usage questions)" 
Subject: Re: [openstack-dev] [keystone] Using multiple token formats in a one 
openstack cloud

I don't think your example is right: "PKI will validate that token without 
going to any keystone server". How would it track revoked tokens? I'm pretty 
sure that they still get validated, they are stored in the DB even.

I also disagree that there are different use cases. Just switch to fernet and 
save yourself what's going to be weeks of pain with probably no improvement in 
anything with this idea.

Are there any details on how to switch to Fernet for a running cloud? I can see
a migration path where the cloud is stopped, the token format changed and the
cloud restarted.

It seems more complex (and maybe insane, as Adam would say) to do this for a
running cloud without disturbing the users of the cloud.
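
For reference, the stop-the-world variant is at least short on the keystone side;
a sketch (exact flags from memory, check the keystone docs for your release):

  # in keystone.conf on every keystone node:
  #   [token]
  #   provider = fernet
  keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
  # copy /etc/keystone/fernet-keys/ from this node to the other keystone nodes,
  # then restart keystone; previously issued UUID tokens no longer validate, so
  # users and services have to re-authenticate
  keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone
  # (fernet_rotate is then run periodically on one node and the keys re-synced)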

Tim

On Tue, Mar 8, 2016 at 9:56 AM, rezroo wrote:
The basic idea is to let the openstack clients decide what sort of token 
optimization to use - for example, while a normal client uses uuid tokens, some 
services like heat or magnum may opt for pki tokens for their operations. A 
service like nova, configured for PKI will validate that token without going to 
any keystone server, but if it gets a uuid token then validates it with a 
keystone endpoint. I'm under the impression that the different token formats 
have different use-cases, so am wondering if there is a conceptual reason why 
multiple token formats are an either/or scenario.


On 3/8/2016 8:06 AM, Matt Fischer wrote:
This would be complicated to setup. How would the Openstack services validate 
the token? Which keystone node would they use? A better question is why would 
you want to do this?

On Tue, Mar 8, 2016 at 8:45 AM, rezroo wrote:
Keystone supports both tokens and ec2 credentials simultaneously, but as far as 
I can tell, will only do a single token format (uuid, pki/z, fernet) at a time. 
Is it possible or advisable to configure keystone to issue multiple token 
formats? For example, I could configure two keystone servers, each using a 
different token format, so depending on endpoint used, I could get a uuid or 
pki token. Each service can use either token format, so is there a conceptual 
or implementation issue with this setup?
Thanks,
Reza

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] config options help text improvement: current status

2016-03-02 Thread Tim Bell

Great. Does this additional improved text also get into the configuration guide
documentation somehow?
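
For anyone wanting the full set locally rather than waiting for the published
sample in [6], it can be regenerated from a nova checkout (assuming tox is
available and the genconfig environment is still wired up as it is today):

  # writes etc/nova/nova.conf.sample with the improved help texts
  tox -e genconfig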



Tim

On 02/03/16 18:45, "Markus Zoeller"  wrote:

>TL;DR: From ~600 nova specific config options are:
>~140 at a central location with an improved help text
>~220 options in open reviews (currently on hold)
>~240 options todo
>
>
>Background
>==
>Nova has a lot of config options. Most of them weren't well
>documented and without looking in the code you probably don't
>understand what they do. That's fine for us developers but the ops
>had more problems with the interface we provide for them [1]. After
>the Mitaka summit we came to the conclusion that this should be 
>improved, which is currently in progress with blueprint [2].
>
>
>Current Status
>==
>After asking on the ML for help [3] the progress improved a lot. 
>The goal is clear now and we know how to achieve it. The organization 
>is done via [4] which also has a section of "odd config options". 
>This section is important for a later step when we want do deprecate 
>config options to get rid of unnecessary ones. 
>
>As we reached the Mitaka-3 milestone we decided to put the effort [5] 
>on hold to stabilize the project and focus the review effort on bug 
>fixes. When the Newton cycle opens, we can continue the work. The 
>current result can be seen in the sample "nova.conf" file generated 
>after each commit [6]. The appendix at the end of this post shows an
>example.
>
>All options we have will be treated that way and moved to a central
>location at "nova/conf/". That's the central location which hosts
>now the interface to the ops. It's easier to get an overview now.
>The appendix shows how the config options were spread at the beginning
>and how they are located now.
>
>I initially thought that we have around 800 config options in Nova
>but I learned meanwhile that we import a lot from other libs, for 
>example from "oslo.db" and expose them as Nova options. We have around
>600 Nova specific config options, and ~140 are already treaded like
>described above and ca. 220 are in the pipeline of open reviews.
>Which leaves us ~240 which are not looked at yet.
>
>
>Outlook
>===
>The numbers of the beginning of this ML post make me believe that we
>can finish the work in the upcoming Newton cycle. "Finished" means
>here: 
>* all config options we provide to our ops have proper and usable docs
>* we have an understanding which options don't make sense anymore
>* we know which options should get stronger validation to reduce errors
>
>I'm looking forward to it :)
>
>
>Thanks
>==
>I'd like to thank all the people who are working on this and making
>this possible. A special thanks goes to Ed Leafe, Esra Celik and
>Stephen Finucane. They put a tremendous amount of work in it.
>
>
>References:
>===
>[1] 
>http://lists.openstack.org/pipermail/openstack-operators/2016-January/009301.html
>[2] https://blueprints.launchpad.net/nova/+spec/centralize-config-options
>[3] 
>http://lists.openstack.org/pipermail/openstack-dev/2015-December/081271.html
>[4] https://etherpad.openstack.org/p/config-options
>[5] Gerrit reviews for this topic: 
>https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/centralize-config-options
>[6] The sample config file which gets generated after each commit:
>http://docs.openstack.org/developer/nova/sample_config.html
>
>
>Appendix
>
>
>Example of the help text improvement
>---
>As an example, compare the previous documentation of the scheduler 
>option "scheduler_tracks_instance_changes". 
>Before we started:
>
># Determines if the Scheduler tracks changes to instances to help 
># with its filtering decisions. (boolean value)
>#scheduler_tracks_instance_changes = true
>
>After the improvement:
>
># The scheduler may need information about the instances on a host 
># in order to evaluate its filters and weighers. The most common 
># need for this information is for the (anti-)affinity filters, 
># which need to choose a host based on the instances already running
># on a host.
>#
># If the configured filters and weighers do not need this information,
># disabling this option will improve performance. It may also be 
># disabled when the tracking overhead proves too heavy, although 
># this will cause classes requiring host usage data to query the 
># database on each request instead.
>#
># This option is only used by the FilterScheduler and its subclasses;
># if you use a different scheduler, this option has no effect.
>#
># * Services that use this:
>#
># ``nova-scheduler``
>#
># * Related options:
>#
># None
>#  (boolean value)
>#scheduler_tracks_instance_changes = true
>
>
>The spread of config options in the tree
