[openstack-dev] [neutron] functional job broken by eventlet 0.20.1

2017-03-20 Thread Ihar Hrachyshka
Hi all,

FYI we were broken by the new eventlet. The fix is at:
https://review.openstack.org/#/c/447817/

Hopefully we can find reviewers in EMEA/APAC time zones to groom and land it.

Thanks,
Ihar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer]Can't find meter anywhere with ceilometer post REST API

2017-03-20 Thread Hui Xiang
Thanks gordon for your info.

The reason we are not using gnocchi in Mitaka is that we are using
collectd-ceilometer-plugin [1] to post samples to ceilometer through
ceilometer-api; after Mitaka, yes, we will all move to gnocchi.


"""
when posting samples to ceilometer-api, the data goes through
pipeline before being stored. therefore, you need notification-agent
enabled AND you need to make sure the pipeline.yaml accepts the meter.
"""
As the samples posted don't have an event_type, I guess you mean I don't
need to edit event_pipeline.yaml, but do need to edit pipeline.yaml so it
accepts the meter. Could you kindly check whether the simple example below
makes sense for accepting the meter? Does the source name need to match the
source field in the sample, or can it be defined as anything?

> [{"counter_name": "interface.if_errors",
>   "user_id": "5457b977c25e4498a31a3c1c78829631",
>   "resource_id": "localhost-ovs-system",
>   "timestamp": "2017-03-17T02:26:46",
>   "resource_metadata": {},
>   "source": *"5b1525a8eb2d4739a83b296682aed023:collectd*",
>   "counter_unit": "Errors/s",
>   "counter_volume": 0.0,
>   "project_id": "5b1525a8eb2d4739a83b296682aed023",
>   "message_id": "2b4ce294-0ab9-11e7-8058-026ea687824d",
>   "counter_type": "delta"},
>


sources:
- name: meter_source
  interval: 60
  meters:
  - "interface.if_errors"
  sinks:
  - meter_sink

sinks:
- name: meter_sink
  transformers:
  publishers:
  - notifier://
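
For reference, a minimal sketch of how such a sample can be posted with
python-requests (the endpoint URL and token here are placeholders, not our
actual deployment values):

    import requests

    # Assumed ceilometer-api endpoint and a valid keystone token.
    url = 'http://controller:8777/v2/meters/interface.if_errors'
    headers = {'X-Auth-Token': '<token>', 'Content-Type': 'application/json'}

    # The body is a list of samples, matching the fields shown above.
    sample = [{
        'counter_name': 'interface.if_errors',
        'counter_type': 'delta',
        'counter_unit': 'Errors/s',
        'counter_volume': 0.0,
        'resource_id': 'localhost-ovs-system',
        'resource_metadata': {},
    }]

    resp = requests.post(url, json=sample, headers=headers)
    print(resp.status_code)  # expect 201 on success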



[1]. https://github.com/openstack/collectd-ceilometer-plugin


Thanks.
Hui.


On Tue, Mar 21, 2017 at 4:21 AM, gordon chung  wrote:

>
>
> On 18/03/17 04:54 AM, Hui Xiang wrote:
> > Hi folks,
> >
> >   I am trying to post samples from third part software to ceilometer via
> > the REST API as below with Mitaka version. I can see ceilometer-api has
> > received this post, and seems forwarded to ceilometer notification agent
> > through RMQ.
> >
>
> first and most importantly, the ceilometer-api is deprecated and not
> supported upstream anymore. please use gnocchi for proper time series
> storage (or whatever storage solution you feel comfortable with)
>
> >
> > 2. LOG
> > 56:17] "*POST /v2/meters/interface.if_packets HTTP/1.1*" 201 -
> > 2017-03-17 16:56:17.378 52955 DEBUG oslo_messaging._drivers.amqpdriver
> > [req-1c4ea84d-ea53-4518-81ea-6c0bffa9745d
> > 5457b977c25e4498a31a3c1c78829631 5b1525a8eb2d4739a83b296682aed023 - - -]
> > CAST unique_id: 64a6bae3bbcc4b7dab4dceb13cf7f81b NOTIFY exchange
> > 'ceilometer' topic 'notifications.sample' _send
> > /usr/lib/python2.7/site-packages/oslo_messaging/_
> drivers/amqpdriver.py:438
> > 2017-03-17 16:56:17.382 52955 INFO werkzeug
> > [req-1c4ea84d-ea53-4518-81ea-6c0bffa9745d
> > 5457b977c25e4498a31a3c1c78829631 5b1525a8eb2d4739a83b296682aed023 - - -]
> > 192.168.0.3 - - [17/Mar/2017
> >
> >
> > 3. REST API return result
> > [{"counter_name": "interface.if_errors",
> >   "user_id": "5457b977c25e4498a31a3c1c78829631",
> >   "resource_id": "localhost-ovs-system",
> >   "timestamp": "2017-03-17T02:26:46",
> >   "resource_metadata": {},
> >   "source": "5b1525a8eb2d4739a83b296682aed023:collectd",
> >   "counter_unit": "Errors/s",
> >   "counter_volume": 0.0,
> >   "project_id": "5b1525a8eb2d4739a83b296682aed023",
> >   "message_id": "2b4ce294-0ab9-11e7-8058-026ea687824d",
> >   "counter_type": "delta"},
> >
>
> when posting samples to ceilometer-api, the data goes through pipeline
> before being stored. therefore, you need notification-agent enabled AND
> you need to make sure the pipeline.yaml accepts the meter.
>
> --
> gord
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaasv2] Migrate LBaaS instance

2017-03-20 Thread Michael Johnson
Hi Saverio,

First, please note, in the future the best tag for load balancing is
[octavia] as it is no longer part of the neutron project.

I am sorry that you are so anxious and confused about the current state of
load balancing for OpenStack.  Let me clarify a few things:

1. LBaaSv2 is not going away and is not deprecated.  The neutron-lbaas code
base is going into deprecation in favor of the octavia code base.  I will
highlight two things, among others, we are doing to ease this transition for
operators:
a. For some time into the future you will be able to continue to use LBaaSv2
via neutron using the proxy driver in neutron-lbaas.
b. There will be migration procedures and scripts that will move, in place,
load balancers from neutron-lbaas into octavia.
2. Deprecation means we will not continue to develop features for
neutron-lbaas, but it will remain in the code base for at least two more
releases and continue to receive bug fixes.  It's a formal way of saying,
hey, in the future we are going to remove this.
3. New features will be added to the octavia code base.  It is only
neutron-lbaas that will be going into feature freeze for new feature
development due to the transition.
4. Any tools written against the neutron endpoint for neutron-lbaas using
the LBaaSv2 API will work with Octavia by updating the endpoint you are
pointing to from neutron to octavia (see the sketch after this list).
5. We are not making any changes to stable/liberty, stable/mitaka,
stable/newton, or stable/ocata releases.  I will note, per OpenStack stable
release policy, liberty is EOL, mitaka will be EOL next month, and we are not
allowed to add new features to any previous releases.  Please see the
OpenStack stable policy here:
https://docs.openstack.org/project-team-guide/stable-branches.html
6. Octavia was available and, in fact, the reference load balancing driver
in Liberty.
7. Multiple operators are represented on the core review team for octavia.
We try really hard to listen to feedback we get and to do what is best for
folks using load balancing in OpenStack.  It is unfortunate our
presentations at the Barcelona summit were denied and we did not get an
opportunity to share our plan with the community and get feedback.  If you
have concerns I encourage you to reach out to us via our weekly IRC
meetings, our channel on IRC #openstack-lbaas, or via the mailing list with
the [octavia] tag.  As you know, I have been responding to your emails with
load balancing questions.
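
To illustrate point 4, a rough sketch (the hosts, ports, and token here are
just examples, not a definitive reference):

    import requests

    headers = {'X-Auth-Token': '<token>'}

    # Same LBaaS v2 resource path; only the endpoint changes.
    # Via the neutron endpoint (proxy driver):
    requests.get('http://controller:9696/v2.0/lbaas/loadbalancers',
                 headers=headers)

    # Via the octavia endpoint:
    requests.get('http://controller:9876/v2.0/lbaas/loadbalancers',
                 headers=headers)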

To answer Zhi's e-mail:

This is correct, if you are using the legacy haproxy namespace driver, and
not the octavia driver, there is currently no easy method to migrate the
ownership of a load balancer from one agent to another.
The legacy haproxy namespace driver is/was not intended for high
availability.  If you want a highly available open source load balancing
option, I highly recommend you use the octavia driver instead of the haproxy
namespace driver.  It was designed to provide scale and availability.  You
would not have the issue you are describing with the octavia driver.
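
For anyone experimenting anyway, a purely hypothetical sketch of how such an
agent notification could be cast with oslo.messaging (the topic, method name,
and wiring below are made up; this is not an existing neutron-lbaas API):

    import oslo_messaging
    from oslo_config import cfg

    transport = oslo_messaging.get_transport(cfg.CONF)
    # Hypothetical agent topic; neutron-lbaas does not define this today.
    target = oslo_messaging.Target(topic='lbaasv2-agent', version='1.0')
    client = oslo_messaging.RPCClient(transport, target)

    def notify_loadbalancer_removed(context, agent_host, loadbalancer_id):
        # Direct the cast at the specific agent's host.
        cctxt = client.prepare(server=agent_host)
        cctxt.cast(context, 'loadbalancer_removed',
                   loadbalancer_id=loadbalancer_id)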

That said, if you want to continue to develop new features for the haproxy
namespace driver, we should start planning to do so in the octavia code
base.
We will be starting work on a port of the haproxy namespace driver into
octavia soon.  We are however discussing what the future should be for this
driver given its limitations.  I think the best plan will be to port it over
into a standalone driver that folks can contribute to if they have a need
for it and we can deprecate it if there is no longer support for it.

Michael


-Original Message-
From: Saverio Proto [mailto:saverio.pr...@switch.ch] 
Sent: Friday, March 17, 2017 4:55 AM
To: OpenStack Development Mailing List (not for usage questions)

Subject: Re: [openstack-dev] [neutron][lbaasv2] Migrate LBaaS instance

Hello there,

I am just back from the Ops Midcycle where Heidi Joy Tretheway reported some
data from the user survey.

So if we look at deployments with more than 100 servers NO ONE IS USING
NEWTON yet. And I scream this loud. Everyone is still in Liberty or Mitaka.

I am just struggling to upgrade to LBaaSv2, only to hear that it is already
going into deprecation. The feature Zhi is proposing is also important for me
once I go to production.

I would encourage devs to listen more to operators' feedback. Also, you devs
can't just ignore that users are still running Liberty/Mitaka, so you need to
change something in this way of working or all the users will run away.

thank you

Saverio


On 16/03/17 16:26, Kosnik, Lubosz wrote:
> Hello Zhi,
> Just one small piece of information: yesterday at the Octavia weekly
> meeting we decided that we're gonna add new features to LBaaSv2 till
> Pike-1, so the window is very small.
> This decision was made as LBaaSv2 is currently an Octavia delivery, not
> Neutron anymore, and this project is going into the deprecation stage.
> 
> Cheers,
> Lubosz
> 
>> On Mar 16, 2017, at 5:39 AM, zhi 

Re: [openstack-dev] questions about Openstack Ocata replacement API

2017-03-20 Thread Matt Riedemann

On 3/19/2017 6:03 AM, Chris Dent wrote:

On Sun, 19 Mar 2017, zhihao wang wrote:


I am trying the new version, OpenStack Ocata, on Ubuntu 16.04

But I got some problems with the nova placement API; there is nothing in
the OpenStack Ubuntu installation doc


The docs for that are being updated to include the necessary
information. If you look at https://review.openstack.org/#/c/438328/
you'll see the new information there. A rendered version will be at
http://docs-draft.openstack.org/28/438328/12/check/gate-openstack-manuals-tox-doc-publish-checkbuild/846ac33//publish-docs/draft/install-guide-ubuntu/nova.html



I already have the endpoint, but it always said there is no placement
API endpoint, please see below


As one of the responses on ask.openstack says, depending on which
version of the packages you have you may need to add to your apache2
config something like:


 Require all granted


I believe this has been resolved in newer versions of the packaging.
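
For reference, the full stanza usually looks something like this (the
Directory path depends on the packaging; /usr/bin is a common location for
the placement WSGI script, but treat it as an assumption):

    <Directory /usr/bin>
        Require all granted
    </Directory>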




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Wally,

I believe we got you sorted out in the nova IRC channel today, correct? 
If so, can you please highlight here the major issues with the 
deployment and what you needed to change to get things working? I know 
the keystone_authtoken config in nova.conf was causing some issues with 
getting the placement service user to get a token from the nova services.


And you updated your httpd placement config based on the latest from the 
ubuntu packages.


You did some debugging with curl too.

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Qiming Teng
On Mon, Mar 20, 2017 at 03:35:18PM -0400, Jay Pipes wrote:
> On 03/20/2017 03:08 PM, Adrian Otto wrote:
> >Team,
> >
> >Stephen Watson has been working on a magnum feature to add magnum commands 
> >to the openstack client by implementing a plugin:
> >
> >https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc
> >
> >In review of this work, a question has resurfaced, as to what the client 
> >command name should be for magnum related commands. Naturally, we’d like to 
> >have the name “cluster” but that word is already in use by Senlin.
> 
> Unfortunately, the Senlin API uses a whole bunch of generic terms as
> top-level REST resources, including "cluster", "event", "action",
> "profile", "policy", and "node". :( I've warned before that use of
> these generic terms in OpenStack APIs without a central group
> responsible for curating the API would lead to problems like this.
> This is why, IMHO, we need the API working group to be ultimately
> responsible for preventing this type of thing from happening.
> Otherwise, there ends up being a whole bunch of duplication and same
> terms being used for entirely different things.
> 

Well, I believe the name and namespaces used by Senlin are very clean.
Please see the following output. All commands are contained in the
cluster namespace to avoid conflicts with any other projects.

On the other hand, is there any document stating that Magnum is about
providing a clustering service? Why does Magnum care so much about the
top-level noun if it is not its business?


$ openstack --help | grep cluster

  --os-clustering-api-version 

  cluster action list  List actions.
  cluster action show  Show detailed info about the specified action.
  cluster build info  Retrieve build information.
  cluster check  Check the cluster(s).
  cluster collect  Collect attributes across a cluster.
  cluster create  Create the cluster.
  cluster delete  Delete the cluster(s).
  cluster event list  List events.
  cluster event show  Describe the event.
  cluster expand  Scale out a cluster by the specified number of nodes.
  cluster list   List the user's clusters.
  cluster members add  Add specified nodes to cluster.
  cluster members del  Delete specified nodes from cluster.
  cluster members list  List nodes from cluster.
  cluster members replace  Replace the nodes in a cluster with
  specified nodes.
  cluster node check  Check the node(s).
  cluster node create  Create the node.
  cluster node delete  Delete the node(s).
  cluster node list  Show list of nodes.
  cluster node recover  Recover the node(s).
  cluster node show  Show detailed info about the specified node.
  cluster node update  Update the node.
  cluster policy attach  Attach policy to cluster.
  cluster policy binding list  List policies from cluster.
  cluster policy binding show  Show a specific policy that is bound to
  the specified cluster.
  cluster policy binding update  Update a policy's properties on a
  cluster.
  cluster policy create  Create a policy.
  cluster policy delete  Delete policy(s).
  cluster policy detach  Detach policy from cluster.
  cluster policy list  List policies that meet the criteria.
  cluster policy show  Show the policy details.
  cluster policy type list  List the available policy types.
  cluster policy type show  Get the details about a policy type.
  cluster policy update  Update a policy.
  cluster policy validate  Validate a policy.
  cluster profile create  Create a profile.
  cluster profile delete  Delete profile(s).
  cluster profile list  List profiles that meet the criteria.
  cluster profile show  Show profile details.
  cluster profile type list  List the available profile types.
  cluster profile type show  Show the details about a profile type.
  cluster profile update  Update a profile.
  cluster profile validate  Validate a profile.
  cluster receiver create  Create a receiver.
  cluster receiver delete  Delete receiver(s).
  cluster receiver list  List receivers that meet the criteria.
  cluster receiver show  Show the receiver details.
  cluster recover  Recover the cluster(s).
  cluster resize  Resize a cluster.
  cluster run   Run scripts on cluster.
  cluster show   Show details of the cluster.
  cluster shrink  Scale in a cluster by the specified number of nodes.
  cluster template list  List Cluster Templates.
  cluster update  Update the cluster.

- Qiming

> >Stephen opened a discussion with Dean Troyer about this, and found
> that “infra” might be a suitable name and began using that, and
> multiple team members are not satisfied with it.
> 
> Yeah, not sure about "infra". That is both too generic and not an
> actual "thing" that Magnum provides.
> 
> > The name “magnum” was excluded from consideration because OSC aims
> to be project name agnostic. We know that no matter what word we
> pick, it’s not going to be ideal. I’ve added an agenda on our
> upcoming team meeting to judge community 

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Dean Troyer
On Mon, Mar 20, 2017 at 5:52 PM, Monty Taylor  wrote:
>> [Hongbin Lu]
>> I think the style would be more consistent if all the resources are 
>> qualified or un-qualified, not the mix of both.

> So - swift got here first, it wins, it gets container. The fine folks in
> barbican, rather than calling a thing a container and then needing to
> call it a secret-container - maybe could call their thing a vault or a
> locker or a safe or a lockbox or an oubliette. (for instance)

Right, there _were_ only 5 projects when we started this and we
re-used most of the original project-specific names.  Swift is a
particularly fun one because both 'container' and 'object' are
extrement useful in that context, but both are also extremely generic,
and 'object container', well, what is that?

> I do not have any suggestions for things that actually return a resource
> that are a single "linux container" - since swift called their thing a
> container before docker was written and popularized the word to mean
> something different. We might just get to be fun and different - sort of
> like how Emacs calls cut/paste "kill" and "yank" (if you're not an Emacs
> user, you "kill" text into the kill ring and then you "yank" from the
ring into the current document.)

Monty, grab your Tardis and follow me around the Austin summit and
listen to the opinions I get for doing things like this :)

> OTOH, I think Dean has talked about more verbose terms and then aliases
> for backwards compat. So maybe a swift container is always an
> "object_container" - but because of history it gets to also be
> unqualified "container" - but then we could have "object container" and
> "secret container" and "linux container" ... similarly we could have
> "server flavor" and "volume flavor" ... etc.

Yes, we do have plans to go back and qualify some of these resource
names to be consistent, but the current names will probably never
change, we'll just have the qualified names for those who prefer to
use them.

Flavor is my favorite example of this as we add network flavor, and
others.  It also illustrates the 'it isn't a namespace' point, as it will
become 'server flavor' rather than 'compute flavor'.

dt

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [dib] diskimage-builder v2 RC1 release; request for test

2017-03-20 Thread Ian Wienand

On 03/21/2017 03:10 AM, Mikhail Medvedev wrote:

On Fri, Mar 17, 2017 at 3:23 PM, Andre Florath  wrote:
Submitted the bug https://bugs.launchpad.net/diskimage-builder/+bug/1674402


Thanks; some updates there.


Would adding a third-party CI job help? I can put together a
functional job on ppc64. I assume we want a job based on
gate-dib-dsvm-functests-*?


As discussed in #openstack-dib we have this reporting on a group of
the functional tests.  My only concern is biting off more than we can
chew initially and essentially training people that the results are
unreliable.  Once we get over this initial hurdle we can look at
expanding it and voting.

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] vitrage Resource API

2017-03-20 Thread dong.wenjuan
No, in the implementation of these APIs.


see 
https://github.com/openstack/vitrage/blob/master/vitrage/api/controllers/v1/resource.py#L47

Original Mail

Sender: <trinath.soman...@nxp.com>
To: <openstack-dev@lists.openstack.org>
Date: 2017/03/21 00:50
Subject: Re: [openstack-dev] [vitrage] vitrage Resource API

In tests?

Get Outlook for iOS

From: dong.wenj...@zte.com.cn <dong.wenj...@zte.com.cn>
Sent: Monday, March 20, 2017 2:49:57 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [vitrage] vitrage Resource API

Hi All,

I noticed that the APIs of `resource list` and `resource show` were mocked.
Is there any background for the mock, or is the API not necessary?

BR,
dwj
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][lbaasv2] Migrate LBaaS instance

2017-03-20 Thread German Eichberger
Hi Saverio,

We completely understand that operators are still on older versions of
OpenStack. We are executing against the OpenStack release series [1] and we
adhere to the support phases as outlined in [2]. So any new features will
only be added to Pike, whereas bug fixes will be carried back; right now
Ocata and Newton will only see security patches. If this doesn’t meet your
needs I would encourage you to get those guidelines changed. We also
frequently seek input from operators on new features and/or pain points. I am
curious what the use case is for a non-scalable/non-HA load balancer like the
namespace/haproxy driver in production.

Us deprecating LBaaS V2 won’t relieve us of our obligation to provide bug
fixes and security fixes to the previous versions. Furthermore, we learned
from the LBaaS V1 migration, and we are committed to providing migration
scripts for Octavia and potentially the namespace driver. We might need to do
a better job of articulating our migration strategy.

Thanks,
German

[1] https://releases.openstack.org
[2] https://docs.openstack.org/project-team-guide/stable-branches.html

On 3/17/17, 7:54 AM, "Saverio Proto"  wrote:

Hello there,

I am just back from the Ops Midcycle where Heidi Joy Tretheway reported
some data from the user survey.

So if we look at deployments with more than 100 servers NO ONE IS USING
NEWTON yet. And I scream this loud. Everyone is still in Liberty or Mitaka.

I am just struggling to upgrade to LBaaSv2, only to hear that it is already
going into deprecation. The feature Zhi is proposing is also important for me
once I go to production.

I would encourage devs to listen more to operators' feedback. Also, you devs
can't just ignore that users are still running Liberty/Mitaka, so you need to
change something in this way of working or all the users will run away.

thank you

Saverio


On 16/03/17 16:26, Kosnik, Lubosz wrote:
> Hello Zhi,
> Just one small piece of information: yesterday at the Octavia weekly
> meeting we decided that we’re gonna add new features to LBaaSv2 till
> Pike-1, so the window is very small.
> This decision was made as LBaaSv2 is currently an Octavia delivery, not
> Neutron anymore, and this project is going into the deprecation stage.
> 
> Cheers,
> Lubosz
> 
>> On Mar 16, 2017, at 5:39 AM, zhi wrote:
>>
>> Hi, all
>> Currently, LBaaS v2 doesn't support migration. Just like router
>> instances, we can remove a router instance from one L3 agent and add
>> it to another L3 agent.
>>
>> So, there is a single point failure in LBaaS agent. As far as I know,
>> LBaaS supports " allow_automatic_lbaas_agent_failover ". But in many
>> cases, we want to migrate LBaaS instances manually. Do we plan to do 
this?
>>
>> I'm doing this right now, but I have a question. I define a function
>> in agent_scheduler.py like this:
>>
>> def remove_loadbalancer_from_lbaas_agent(self, context, agent_id,
>>                                          loadbalancer_id):
>>     self._unschedule_loadbalancer(context, loadbalancer_id, agent_id)
>>
>> The question is, how do I notify the LBaaS agent?
>>
>> Hope for your reply.
>>
>>
>>
>> Thanks
>> Zhi Chang
>> 
__
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org
>> ?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
SWITCH
Saverio Proto, Peta Solutions
Werdstrasse 2, P.O. Box, 8021 Zurich, Switzerland
phone +41 44 268 15 15, direct +41 44 268 1573
saverio.pr...@switch.ch, http://www.switch.ch

http://www.switch.ch/stories

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Monty Taylor
On 03/20/2017 05:39 PM, Hongbin Lu wrote:
> 
> 
>> -Original Message-
>> From: Dean Troyer [mailto:dtro...@gmail.com]
>> Sent: March-20-17 5:19 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [magnum][osc] What name to use for magnum
>> commands in osc?
>>
>> On Mon, Mar 20, 2017 at 3:37 PM, Adrian Otto 
>> wrote:
>>> the  argument is actually the service name, such as “ec2”.
>> This is the same way the openstack cli works. Perhaps there is another
>> tool that you are referring to. Have I misunderstood something?
>>
>> I am going to jump in here and clarify one thing.  OSC does not do
>> project namespacing, or any other sort of namespacing for its resource
>> names.  It uses qualified resource names (fully-qualified even?).  In
>> some cases this results in something that looks a lot like namespacing,
>> but it isn't. The Volume API commands are one example of this, nearly
>> every resource there includes the word 'volume' but not because that is
>> the API name, it is because that is the correct name for those
>> resources ('volume backup', etc).
> 
> [Hongbin Lu] I might provide a minority point of view here. What confused me
> is the inconsistent style of the resource names. For example, there is a
> "container" resource for a swift container, and there is a "secret container"
> resource for a barbican container. I just found it odd to have both an
> un-qualified resource (i.e. container) and a qualified resource name (i.e.
> secret container) in the same CLI. It appears to me that some resources are
> namespaced and others are not, and this kind of style provides a suboptimal
> user experience from my point of view.
> 
> I think the style would be more consistent if all the resources are qualified 
> or un-qualified, not the mix of both.

Yes - if we had been more forward thinking a while back, I think we
could do that. However, some things are already done and changing them
would be an incredible amount of churn.

In my happy world, we would all consider the resource names that exist
across the openstack projects before we make new ones.

So - swift got here first, it wins, it gets container. The fine folks in
barbican, rather than calling a thing a container and then needing to
call it a secret-container - maybe could call their thing a vault or a
locker or a safe or a lockbox or an oubliette. (for instance)

I do not have any suggestions for things that actually return a resource
that is a single "linux container" - since swift called their thing a
container before docker was written and popularized the word to mean
something different. We might just get to be fun and different - sort of
like how Emacs calls cut/paste "kill" and "yank" (if you're not an Emacs
user, you "kill" text into the kill ring and then you "yank" from the
ring into the current document).

OTOH, I think Dean has talked about more verbose terms and then aliases
for backwards compat. So maybe a swift container is always an
"object_container" - but because of history it gets to also be
unqualified "container" - but then we could have "object container" and
"secret container" and "linux container" ... similarly we could have
"server flavor" and "volume flavor" ... etc.

(fwiw, shade just picks winners - so "create_container" gets you a swift
container. No clue what we'll do when we add barbican or zun yet ...
maybe the same thing?)
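
To make that concrete, a small sketch of current shade behaviour (assuming a
cloud named 'mycloud' configured in clouds.yaml):

    import shade

    cloud = shade.openstack_cloud(cloud='mycloud')
    # Today this unambiguously creates a *swift* container.
    cloud.create_container('logs')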
>>
>>> We could do the same thing and use the text “container_infra”, but we
>> felt that might be burdensome for interactive use and wanted to find
>> something shorter that would still make sense.
>>
>> Naming resources is hard to get right.  Here's my thought process:
>>
>> For OSC, start with how to describe the specific 'thing' being
>> manipulated.  In this case, it is some kind of cluster.  In the list
>> you posted in the first email, 'coe cluster' seems to be the best
>> option.  I think 'coe' is acceptable as an abbreviation (we usually do
>> not use them) because that is a specific term used in the field and
>> satisfies the 'what kind of cluster?' question.  No underscores please,
>> and in fact no dash here, resource names have spaces in them.
>>
>> dt
>>
>> --
>>
>> Dean Troyer
>> dtro...@gmail.com
>>
>> ___
>> ___
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-
>> requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


__
OpenStack Development Mailing List (not for usage questions)

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Hongbin Lu


> -Original Message-
> From: Dean Troyer [mailto:dtro...@gmail.com]
> Sent: March-20-17 5:19 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum][osc] What name to use for magnum
> commands in osc?
> 
> On Mon, Mar 20, 2017 at 3:37 PM, Adrian Otto 
> wrote:
> > the <command> argument is actually the service name, such as “ec2”.
> This is the same way the openstack cli works. Perhaps there is another
> tool that you are referring to. Have I misunderstood something?
> 
> I am going to jump in here and clarify one thing.  OSC does not do
> project namespacing, or any other sort of namespacing for its resource
> names.  It uses qualified resource names (fully-qualified even?).  In
> some cases this results in something that looks a lot like namespacing,
> but it isn't. The Volume API commands are one example of this, nearly
> every resource there includes the word 'volume' but not because that is
> the API name, it is because that is the correct name for those
> resources ('volume backup', etc).

[Hongbin Lu] I might provide a minority point of view here. What confused me is
the inconsistent style of the resource names. For example, there is a "container"
resource for a swift container, and there is a "secret container" resource for a
barbican container. I just found it odd to have both an un-qualified resource
(i.e. container) and a qualified resource name (i.e. secret container) in the
same CLI. It appears to me that some resources are namespaced and others are
not, and this kind of style provides a suboptimal user experience from my
point of view.

I think the style would be more consistent if all the resources are qualified 
or un-qualified, not the mix of both.

> 
> > We could do the same thing and use the text “container_infra”, but we
> felt that might be burdensome for interactive use and wanted to find
> something shorter that would still make sense.
> 
> Naming resources is hard to get right.  Here's my thought process:
> 
> For OSC, start with how to describe the specific 'thing' being
> manipulated.  In this case, it is some kind of cluster.  In the list
> you posted in the first email, 'coe cluster' seems to be the best
> option.  I think 'coe' is acceptable as an abbreviation (we usually do
> not use them) because that is a specific term used in the field and
> satisfies the 'what kind of cluster?' question.  No underscores please,
> and in fact no dash here, resource names have spaces in them.
> 
> dt
> 
> --
> 
> Dean Troyer
> dtro...@gmail.com
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Clint Byrum
Excerpts from Adrian Otto's message of 2017-03-20 22:19:14 +:
> I was unsure, so I found him on IRC to clarify, and he pointed me to the 
> openstack/service-types-authority repository, where I submitted patch 445694 
> for review. We have three distinct identifiers in play:
> 
> 1) Our existing service catalog entry name: container-infra
> 2) Our openstack client noun: TBD, decision expected from our team tomorrow. 
> My suggestion: "coe cluster”.
> 3) Our (proposed) service type: coe-cluster
> 
> Each identifier has respective guidelines and limits, so they differ.
> 
> Adrian

Oh neat, I didn't even know that repository existed. TIL.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Adrian Otto
Clint,

On Mar 20, 2017, at 3:02 PM, Clint Byrum 
> wrote:

Excerpts from Adrian Otto's message of 2017-03-20 21:16:09 +:
Jay,

On Mar 20, 2017, at 12:35 PM, Jay Pipes 
> 
wrote:

On 03/20/2017 03:08 PM, Adrian Otto wrote:
Team,

Stephen Watson has been working on a magnum feature to add magnum commands to 
the openstack client by implementing a plugin:

https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc

In review of this work, a question has resurfaced, as to what the client 
command name should be for magnum related commands. Naturally, we’d like to 
have the name “cluster” but that word is already in use by Senlin.

Unfortunately, the Senlin API uses a whole bunch of generic terms as top-level 
REST resources, including "cluster", "event", "action", "profile", "policy", 
and "node". :( I've warned before that use of these generic terms in OpenStack 
APIs without a central group responsible for curating the API would lead to 
problems like this. This is why, IMHO, we need the API working group to be 
ultimately responsible for preventing this type of thing from happening. 
Otherwise, there ends up being a whole bunch of duplication and same terms 
being used for entirely different things.

Stephen opened a discussion with Dean Troyer about this, and found that “infra” 
might be a suitable name and began using that, and multiple team members are 
not satisfied with it.

Yeah, not sure about "infra". That is both too generic and not an actual 
"thing" that Magnum provides.

The name “magnum” was excluded from consideration because OSC aims to be 
project name agnostic. We know that no matter what word we pick, it’s not going 
to be ideal. I’ve added an agenda on our upcoming team meeting to judge 
community consensus about which alternative we should select:

https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2017-03-21_1600_UTC

Current choices on the table are:

* c_cluster (possible abbreviation alias for container_infra_cluster)
* coe_cluster
* mcluster
* infra

For example, our selected name would appear in “openstack …” commands. Such as:

$ openstack c_cluster create …

If you have input to share, I encourage you to reply to this thread, or come to 
the team meeting so we can consider your input before the team makes a 
selection.

What is Magnum's service-types-authority service_type?

I propose "coe-cluster” for that, but that should be discussed further, as it’s 
impossible for magnum to conform with all the requirements for service types 
because they fundamentally conflict with each other:

https://review.openstack.org/447694

In the past we referred to this type as a “bay” but found it burdensome for 
users and operators to use that term when literally bay == cluster. We just 
needed to call it what it is because there’s a prevailing name for that 
concept, and everyone expects that’s what it’s called.

I Think Jay was asking for Magnum's name in the catalog:

Which is 'container-infra' according to this:

https://github.com/openstack/python-magnumclient/blob/master/magnumclient/v1/client.py#L34

I was unsure, so I found him on IRC to clarify, and he pointed me to the 
openstack/service-types-authority repository, where I submitted patch 445694 
for review. We have three distinct identifiers in play:

1) Our existing service catalog entry name: container-infra
2) Our openstack client noun: TBD, decision expected from our team tomorrow. My 
suggestion: "coe cluster”.
3) Our (proposed) service type: coe-cluster

Each identifier has respective guidelines and limits, so they differ.

Adrian
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Clint Byrum
Excerpts from Adrian Otto's message of 2017-03-20 21:16:09 +:
> Jay,
> 
> On Mar 20, 2017, at 12:35 PM, Jay Pipes 
> > wrote:
> 
> On 03/20/2017 03:08 PM, Adrian Otto wrote:
> Team,
> 
> Stephen Watson has been working on a magnum feature to add magnum commands 
> to the openstack client by implementing a plugin:
> 
> https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc
> 
> In review of this work, a question has resurfaced, as to what the client 
> command name should be for magnum related commands. Naturally, we’d like to 
> have the name “cluster” but that word is already in use by Senlin.
> 
> Unfortunately, the Senlin API uses a whole bunch of generic terms as 
> top-level REST resources, including "cluster", "event", "action", "profile", 
> "policy", and "node". :( I've warned before that use of these generic terms 
> in OpenStack APIs without a central group responsible for curating the API 
> would lead to problems like this. This is why, IMHO, we need the API working 
> group to be ultimately responsible for preventing this type of thing from 
> happening. Otherwise, there ends up being a whole bunch of duplication and 
> same terms being used for entirely different things.
> 
> >Stephen opened a discussion with Dean Troyer about this, and found that 
> >“infra” might be a suitable name and began using that, and multiple team 
> >members are not satisfied with it.
> 
> Yeah, not sure about "infra". That is both too generic and not an actual 
> "thing" that Magnum provides.
> 
> > The name “magnum” was excluded from consideration because OSC aims to be 
> > project name agnostic. We know that no matter what word we pick, it’s not 
> > going to be ideal. I’ve added an agenda on our upcoming team meeting to 
> > judge community consensus about which alternative we should select:
> 
> https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2017-03-21_1600_UTC
> 
> Current choices on the table are:
> 
>  * c_cluster (possible abbreviation alias for container_infra_cluster)
>  * coe_cluster
>  * mcluster
>  * infra
> 
> For example, our selected name would appear in “openstack …” commands. Such 
> as:
> 
> $ openstack c_cluster create …
> 
> If you have input to share, I encourage you to reply to this thread, or come 
> to the team meeting so we can consider your input before the team makes a 
> selection.
> 
> What is Magnum's service-types-authority service_type?
> 
> I propose "coe-cluster” for that, but that should be discussed further, as 
> it’s impossible for magnum to conform with all the requirements for service 
> types because they fundamentally conflict with each other:
> 
> https://review.openstack.org/447694
> 
> In the past we referred to this type as a “bay” but found it burdensome for 
> users and operators to use that term when literally bay == cluster. We just 
> needed to call it what it is because there’s a prevailing name for that 
> concept, and everyone expects that’s what it’s called.

I Think Jay was asking for Magnum's name in the catalog:

Which is 'container-infra' according to this:

https://github.com/openstack/python-magnumclient/blob/master/magnumclient/v1/client.py#L34

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ironic] Kubernetes-based long running processes

2017-03-20 Thread Lingxian Kong
I thought somebody already asked in the Mistral IRC channel. IMO, Mistral is a
good candidate in this case; TripleO [1] already uses Mistral for running its
own tasks. For those who don't know Mistral well, Mistral is a workflow
engine that can run workflows you design in a stable, scalable way. For
your use case, the only thing you need to consider is whether you need to
provide your own customized actions (like TripleO does). Feel free to jump
into #openstack-mistral if you have more questions.

[1]: https://github.com/openstack/tripleo-common
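
To give a feel for the DSL, a minimal illustrative Mistral v2 workflow (the
names and tasks here are made up, not taken from TripleO):

    ---
    version: '2.0'

    manage_long_process:
      description: Illustrative only.
      input:
        - node_id
      tasks:
        announce:
          action: std.echo output="handling node <% $.node_id %>"
          on-success:
            - pause
        pause:
          action: std.sleep seconds=60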


Cheers,
Lingxian Kong (Larry)

On Thu, Mar 16, 2017 at 11:28 AM, Taryma, Joanna 
wrote:

> Hi all,
>
>
>
> There was an idea of using Kubernetes to handle long running processes for
> Ironic [0]. It could be useful for example for graphical and serial
> consoles or improving scalability (and possibly for other long-running
> processes in the future). Kubernetes would be used as a backend for running
> processes (as containers).
>
> However, the complexity of adding this to ironic would be too laborious,
> considering the use case. At the PTG it was decided not to implement it
> within ironic, but in the future ironic may adopt such a solution if it’s
> common.
>
>
>
> I’m reaching out to you to ask if you’re aware of any other use cases that
> could leverage such a solution. If there’s a need for it in other projects, it
> may be a good idea to implement this in some sort of common place.
>
>
>
> Kind regards,
>
> Joanna
>
>
>
> [0] https://review.openstack.org/#/c/431605/
>
>
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [User-committee] Boston Forum - Formal Submission Now Open!

2017-03-20 Thread Emilien Macchi
+openstack-dev mailing-list.

On Mon, Mar 20, 2017 at 3:55 PM, Melvin Hillsman  wrote:
> Hey everyone!
>
> We have made it to the next stage of the topic selection process for the
> Forum in Boston.
>
> Starting today, our submission tool is open for you to submit abstracts for
> the most popular sessions that came out of your brainstorming. Please note
> that the etherpads are not being pulled into the submission tool and
> discussion around which sessions to submit are encouraged.
>
> We are asking all session leaders to submit their abstracts at:
>
> http://forumtopics.openstack.org/
>
> before 11:59PM UTC on Sunday April 2nd!
>
> We are looking for a good mix of project-specific, cross-project or
> strategic/whole-of-community discussions, and sessions that emphasize
> collaboration between users and developers are most welcome!
>
> We assume that anything submitted to the system has achieved a good amount
> of discussion and consensus that it is a worthwhile topic. After submissions
> close, a team of representatives from the User Committee, the Technical
> Committee, and Foundation staff will take the sessions proposed by the
> community and fill out the schedule.
>
> You can expect the draft schedule to be released on April 10th.
>
> Further details about the Forum can be found at:
> https://wiki.openstack.org/wiki/Forum
>
> Regards,
>
> OpenStack User Committee
>
>
> ___
> User-committee mailing list
> user-commit...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Dean Troyer
On Mon, Mar 20, 2017 at 4:36 PM, Adrian Otto  wrote:
> So, to be clear, this would result in the following command for what we 
> currently use “magnum cluster create” for:
>
> openstack coe cluster create …
>
> Is this right?

Yes.

dt

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Adrian Otto
Dean,

Thanks for your reply.

> On Mar 20, 2017, at 2:18 PM, Dean Troyer  wrote:
> 
> On Mon, Mar 20, 2017 at 3:37 PM, Adrian Otto  
> wrote:
>> the <command> argument is actually the service name, such as “ec2”. This is 
>> the same way the openstack cli works. Perhaps there is another tool that you 
>> are referring to. Have I misunderstood something?
> 
> I am going to jump in here and clarify one thing.  OSC does not do
> project namespacing, or any other sort of namespacing for its resource
> names.  It uses qualified resource names (fully-qualified even?).  In
> some cases this results in something that looks a lot like
> namespacing, but it isn't. The Volume API commands are one example of
> this, nearly every resource there includes the word 'volume' but not
> because that is the API name, it is because that is the correct name
> for those resources ('volume backup', etc).

Okay, that makes sense, thanks.

>> We could do the same thing and use the text “container_infra”, but we felt 
>> that might be burdensome for interactive use and wanted to find something 
>> shorter that would still make sense.
> 
> Naming resources is hard to get right.  Here's my thought process:
> 
> For OSC, start with how to describe the specific 'thing' being
> manipulated.  In this case, it is some kind of cluster.  In the list
> you posted in the first email, 'coe cluster' seems to be the best
> option.  I think 'coe' is acceptable as an abbreviation (we usually do
> not use them) because that is a specific term used in the field and
> satisfies the 'what kind of cluster?' question.  No underscores
> please, and in fact no dash here, resource names have spaces in them.

So, to be clear, this would result in the following command for what we 
currently use “magnum cluster create” for:

openstack coe cluster create …

Is this right?

Adrian

> 
> dt
> 
> -- 
> 
> Dean Troyer
> dtro...@gmail.com
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Dean Troyer
On Mon, Mar 20, 2017 at 3:37 PM, Adrian Otto  wrote:
> the <command> argument is actually the service name, such as “ec2”. This is 
> the same way the openstack cli works. Perhaps there is another tool that you 
> are referring to. Have I misunderstood something?

I am going to jump in here and clarify one thing.  OSC does not do
project namespacing, or any other sort of namespacing for its resource
names.  It uses qualified resource names (fully-qualified even?).  In
some cases this results in something that looks a lot like
namespacing, but it isn't. The Volume API commands are one example of
this, nearly every resource there includes the word 'volume' but not
because that is the API name, it is because that is the correct name
for those resources ('volume backup', etc).

> We could do the same thing and use the text “container_infra”, but we felt 
> that might be burdensome for interactive use and wanted to find something 
> shorter that would still make sense.

Naming resources is hard to get right.  Here's my thought process:

For OSC, start with how to describe the specific 'thing' being
manipulated.  In this case, it is some kind of cluster.  In the list
you posted in the first email, 'coe cluster' seems to be the best
option.  I think 'coe' is acceptable as an abbreviation (we usually do
not use them) because that is a specific term used in the field and
satisfies the 'what kind of cluster?' question.  No underscores
please, and in fact no dash here, resource names have spaces in them.

dt

-- 

Dean Troyer
dtro...@gmail.com

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Adrian Otto
Jay,

On Mar 20, 2017, at 12:35 PM, Jay Pipes 
> wrote:

On 03/20/2017 03:08 PM, Adrian Otto wrote:
Team,

Stephen Watson has been working on a magnum feature to add magnum commands to 
the openstack client by implementing a plugin:

https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc

In review of this work, a question has resurfaced, as to what the client 
command name should be for magnum related commands. Naturally, we’d like to 
have the name “cluster” but that word is already in use by Senlin.

Unfortunately, the Senlin API uses a whole bunch of generic terms as top-level 
REST resources, including "cluster", "event", "action", "profile", "policy", 
and "node". :( I've warned before that use of these generic terms in OpenStack 
APIs without a central group responsible for curating the API would lead to 
problems like this. This is why, IMHO, we need the API working group to be 
ultimately responsible for preventing this type of thing from happening. 
Otherwise, there ends up being a whole bunch of duplication and same terms 
being used for entirely different things.

>Stephen opened a discussion with Dean Troyer about this, and found that 
>“infra” might be a suitable name and began using that, and multiple team 
>members are not satisfied with it.

Yeah, not sure about "infra". That is both too generic and not an actual 
"thing" that Magnum provides.

> The name “magnum” was excluded from consideration because OSC aims to be 
> project name agnostic. We know that no matter what word we pick, it’s not 
> going to be ideal. I’ve added an agenda on our upcoming team meeting to judge 
> community consensus about which alternative we should select:

https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2017-03-21_1600_UTC

Current choices on the table are:

 * c_cluster (possible abbreviation alias for container_infra_cluster)
 * coe_cluster
 * mcluster
 * infra

For example, our selected name would appear in “openstack …” commands. Such as:

$ openstack c_cluster create …

If you have input to share, I encourage you to reply to this thread, or come to 
the team meeting so we can consider your input before the team makes a 
selection.

What is Magnum's service-types-authority service_type?

I propose "coe-cluster” for that, but that should be discussed further, as it’s 
impossible for magnum to conform with all the requirements for service types 
because they fundamentally conflict with each other:

https://review.openstack.org/447694

In the past we referred to this type as a “bay” but found it burdensome for 
users and operators to use that term when literally bay == cluster. We just 
needed to call it what it is because there’s a prevailing name for that 
concept, and everyone expects that’s what it’s called.

Adrian


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Adrian Otto
Hongbin,

> On Mar 20, 2017, at 1:10 PM, Hongbin Lu  wrote:
> 
> Zun had a similar issue of colliding on the keyword "container", and we chose 
> to use an alternative term "appcontainer" that is not perfect but acceptable. 
> IMHO, this kind of top-level name collision issue would be better resolved by 
> introducing namespaces per project, which is the approach adopted by AWS.

Can you explain this further, please? My understanding is that the AWS cli tool 
has a single global namespace for commands in the form:

aws [options] <command> <subcommand> [parameters]

the <command> argument is actually the service name, such as “ec2”. This is the 
same way the openstack cli works. Perhaps there is another tool that you are 
referring to. Have I misunderstood something?

We could do the same thing and use the text “container_infra”, but we felt that 
might be burdensome for interactive use and wanted to find something shorter 
that would still make sense.
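
For comparison, illustrative invocations of the two styles (the second line
is the proposal under discussion, not final syntax):

    aws ec2 describe-instances
    openstack coe cluster list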

Thanks,

Adrian

> 
> Best regards,
> Hongbin
> 
>> -Original Message-
>> From: Jay Pipes [mailto:jaypi...@gmail.com]
>> Sent: March-20-17 3:35 PM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [magnum][osc] What name to use for magnum
>> commands in osc?
>> 
>> On 03/20/2017 03:08 PM, Adrian Otto wrote:
>>> Team,
>>> 
>>> Stephen Watson has been working on a magnum feature to add magnum
>> commands to the openstack client by implementing a plugin:
>>> 
>>> 
>> https://review.openstack.org/#/q/status:open+project:openstack/python-
>>> magnumclient+osc
>>> 
>>> In review of this work, a question has resurfaced, as to what the
>> client command name should be for magnum related commands. Naturally,
>> we’d like to have the name “cluster” but that word is already in use by
>> Senlin.
>> 
>> Unfortunately, the Senlin API uses a whole bunch of generic terms as
>> top-level REST resources, including "cluster", "event", "action",
>> "profile", "policy", and "node". :( I've warned before that use of
>> these generic terms in OpenStack APIs without a central group
>> responsible for curating the API would lead to problems like this. This
>> is why, IMHO, we need the API working group to be ultimately
>> responsible for preventing this type of thing from happening. Otherwise,
>> there ends up being a whole bunch of duplication and same terms being
>> used for entirely different things.
>> 
>>> Stephen opened a discussion with Dean Troyer about this, and found
>> that “infra” might be a suitable name and began using that, and
>> multiple team members are not satisfied with it.
>> 
>> Yeah, not sure about "infra". That is both too generic and not an
>> actual "thing" that Magnum provides.
>> 
>>> The name “magnum” was excluded from consideration because OSC aims
>> to be project name agnostic. We know that no matter what word we pick,
>> it’s not going to be ideal. I’ve added an agenda on our upcoming team
>> meeting to judge community consensus about which alternative we should
>> select:
>>> 
>>> https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2017-
>> 03
>>> -21_1600_UTC
>>> 
>>> Current choices on the table are:
>>> 
>>>  * c_cluster (possible abbreviation alias for
>> container_infra_cluster)
>>>  * coe_cluster
>>>  * mcluster
>>>  * infra
>>> 
>>> For example, our selected name would appear in “openstack …” commands.
>> Such as:
>>> 
>>> $ openstack c_cluster create …
>>> 
>>> If you have input to share, I encourage you to reply to this thread,
>> or come to the team meeting so we can consider your input before the
>> team makes a selection.
>> 
>> What is Magnum's service-types-authority service_type?
>> 
>> Best,
>> -jay
>> 
>> ___
>> ___
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-
>> requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [charms] Charms IRC meetings

2017-03-20 Thread Alex Kavanagh
Hi

This is just a quick reminder that there are two meetings at different
times for folks interested in discussing the OpenStack charms.  This is so
we can get more geographic coverage:

 - ODD weeks (next on Monday 27th March at 10:00 UTC)
 - EVEN weeks (next on Monday 3rd April at 17:00 UTC)

The agenda and previous minutes can be found at:
https://etherpad.openstack.org/p/openstack-charms-weekly-meeting

Full details of the meetings are available at:
https://docs.openstack.org/developer/charm-guide/meetings.html

Look forward to meeting you in the future!
Kind regards
Alex.

-- 
Alex Kavanagh - Software Engineer
Cloud Dev Ops - Solutions & Product Engineering - Canonical Ltd
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer]Can't find meter anywhere with ceilometer post REST API

2017-03-20 Thread gordon chung


On 18/03/17 04:54 AM, Hui Xiang wrote:
> Hi folks,
>
>   I am trying to post samples from third party software to ceilometer via
> the REST API as below with Mitaka version. I can see ceilometer-api has
> received this post, and seems forwarded to ceilometer notification agent
> through RMQ.
>

first and most importantly, the ceilometer-api is deprecated and not 
supported upstream anymore. please use gnocchi for proper time series 
storage (or whatever storage solution you feel comfortable with)

>
> 2. LOG
> 56:17] "*POST /v2/meters/interface.if_packets HTTP/1.1*" 201 -
> 2017-03-17 16:56:17.378 52955 DEBUG oslo_messaging._drivers.amqpdriver
> [req-1c4ea84d-ea53-4518-81ea-6c0bffa9745d
> 5457b977c25e4498a31a3c1c78829631 5b1525a8eb2d4739a83b296682aed023 - - -]
> CAST unique_id: 64a6bae3bbcc4b7dab4dceb13cf7f81b NOTIFY exchange
> 'ceilometer' topic 'notifications.sample' _send
> /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:438
> 2017-03-17 16:56:17.382 52955 INFO werkzeug
> [req-1c4ea84d-ea53-4518-81ea-6c0bffa9745d
> 5457b977c25e4498a31a3c1c78829631 5b1525a8eb2d4739a83b296682aed023 - - -]
> 192.168.0.3 - - [17/Mar/2017
>
>
> 3. REST API return result
> [{"counter_name": "interface.if_errors",
>   "user_id": "5457b977c25e4498a31a3c1c78829631",
>   "resource_id": "localhost-ovs-system",
>   "timestamp": "2017-03-17T02:26:46",
>   "resource_metadata": {},
>   "source": "5b1525a8eb2d4739a83b296682aed023:collectd",
>   "counter_unit": "Errors/s",
>   "counter_volume": 0.0,
>   "project_id": "5b1525a8eb2d4739a83b296682aed023",
>   "message_id": "2b4ce294-0ab9-11e7-8058-026ea687824d",
>   "counter_type": "delta"},
>

when posting samples to ceilometer-api, the data goes through pipeline 
before being stored. therefore, you need notification-agent enabled AND 
you need to make sure the pipeline.yaml accepts the meter.

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] bug deputy report (Mar 14 - Mar 20)

2017-03-20 Thread Devale, Sindhu
Hi all,

I served as the bug deputy for the week of March 14th – March 20th.

There were around 25 bugs and three new RFEs.

There was some discussion/confusion regarding this bug: 
https://bugs.launchpad.net/neutron/+bug/1672629 on IRC.
Reviews of the fix patch (https://review.openstack.org/#/c/445345/) or comments 
on the bug would be helpful.

There were a couple of gate-related bugs, but they were addressed immediately 
and fixes released.

A few of the open ones which need attention are:
https://bugs.launchpad.net/neutron/+bug/1673124
https://bugs.launchpad.net/neutron/+bug/1674443

RFEs:
https://bugs.launchpad.net/neutron/+bug/1672852
https://bugs.launchpad.net/neutron/+bug/1673142
https://bugs.launchpad.net/neutron/+bug/1674349


Thank you,
Sindhu (irc: sindhu)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Hongbin Lu
Zun had a similar issue of colliding on the keyword "container", and we chose 
to use an alternative term "appcontainer" that is not perfect but acceptable. 
IMHO, this kind of top-level name collision issue would be better resolved by 
introducing a namespace per project, which is the approach adopted by AWS.

Best regards,
Hongbin

> -Original Message-
> From: Jay Pipes [mailto:jaypi...@gmail.com]
> Sent: March-20-17 3:35 PM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [magnum][osc] What name to use for magnum
> commands in osc?
> 
> On 03/20/2017 03:08 PM, Adrian Otto wrote:
> > Team,
> >
> > Stephen Watson has been working on a magnum feature to add magnum
> commands to the openstack client by implementing a plugin:
> >
> >
> https://review.openstack.org/#/q/status:open+project:openstack/python-
> > magnumclient+osc
> >
> > In review of this work, a question has resurfaced, as to what the
> client command name should be for magnum related commands. Naturally,
> we’d like to have the name “cluster” but that word is already in use by
> Senlin.
> 
> Unfortunately, the Senlin API uses a whole bunch of generic terms as
> top-level REST resources, including "cluster", "event", "action",
> "profile", "policy", and "node". :( I've warned before that use of
> these generic terms in OpenStack APIs without a central group
> responsible for curating the API would lead to problems like this. This
> is why, IMHO, we need the API working group to be ultimately
> responsible for preventing this type of thing from happening. Otherwise,
> there ends up being a whole bunch of duplication and same terms being
> used for entirely different things.
> 
>  >Stephen opened a discussion with Dean Troyer about this, and found
> that “infra” might be a suitable name and began using that, and
> multiple team members are not satisfied with it.
> 
> Yeah, not sure about "infra". That is both too generic and not an
> actual "thing" that Magnum provides.
> 
>  > The name “magnum” was excluded from consideration because OSC aims
> to be project name agnostic. We know that no matter what word we pick,
> it’s not going to be ideal. I’ve added an agenda on our upcoming team
> meeting to judge community consensus about which alternative we should
> select:
> >
> > https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2017-
> 03
> > -21_1600_UTC
> >
> > Current choices on the table are:
> >
> >   * c_cluster (possible abbreviation alias for
> container_infra_cluster)
> >   * coe_cluster
> >   * mcluster
> >   * infra
> >
> > For example, our selected name would appear in “openstack …” commands.
> Such as:
> >
> > $ openstack c_cluster create …
> >
> > If you have input to share, I encourage you to reply to this thread,
> or come to the team meeting so we can consider your input before the
> team makes a selection.
> 
> What is Magnum's service-types-authority service_type?
> 
> Best,
> -jay
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [os-upstream-institute] First meeting is starting :)

2017-03-20 Thread Ildiko Vancsa
Hi All,

Quick reminder, we have our first meeting now! :)

Thanks,
Ildikó


> On 2017. Mar 19., at 14:14, Ildiko Vancsa  wrote:
> 
> Hi All,
> 
> Based on the results of the Doodle poll I sent out earlier the most favorite 
> slot for the meeting is __Mondays, 2000 UTC__.
> 
> In order to get progress with the training preparation for Boston we will 
> hold our first meeting on __March 20, at 2000 UTC__. The meeting channel is 
> __#openstack-meeting-3__. You can find and extend the agenda on the meetings 
> etherpad [2].
> 
> I uploaded a patch for review [1] to register the meeting slot as a permanent 
> meeting on this channel.
> 
> For those of you for whom this slot unfortunately does not work, we will look 
> into alternatives to keep you involved and up to date.
> 
> Please let me know if you have any questions or comments.
> 
> Thanks and Best Regards,
> Ildikó
> IRC: ildikov
> 
> 
> [1] https://review.openstack.org/447291 
> [2] https://etherpad.openstack.org/p/openstack-upstream-institute-meetings 
>  

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] this week's priorities and subteam reports

2017-03-20 Thread Yeleswarapu, Ramamani
Hi,

We are happy to present this week's priorities and subteam report for Ironic. 
As usual, this is pulled directly from the Ironic whiteboard[0] and formatted.

This Week's Priorities (as of the weekly ironic meeting)

1. update/review next BFV patch: https://review.openstack.org/#/c/355625/
2. update/review next rescue patches: https://review.openstack.org/#/c/350831/ 
and https://review.openstack.org/#/c/353156/
3. redfish driver: spec update https://review.openstack.org/#/c/445478/
4. review e-tags spec: https://review.openstack.org/#/c/381991/
5. next driver comp client patch: https://review.openstack.org/#/c/419274/


Bugs (dtantsur, mjturek)

- Stats (diff between 13 Mar 2017 and 20 Mar 2017)
  - Ironic: 238 bugs (+2) + 245 wishlist items (-1). 15 new (+1), 190 in 
progress (-6), 0 critical, 28 high (-1) and 29 incomplete (-1)
  - Inspector: 16 bugs + 29 wishlist items (+1). 4 new (+1), 14 in progress 
(-1), 0 critical, 1 high and 4 incomplete
  - Nova bugs with Ironic tag: 13. 2 new, 0 critical, 0 high

Essential Priorities


CI refactoring and missing test coverage

- Standalone CI tests (vsaienk0)
- patch on review https://review.openstack.org/#/c/423556/ MERGED
- next patch to be reviewed: https://review.openstack.org/#/c/437549
- Missing test coverage (all)
- portgroups and attach/detach tempest tests: 
https://review.openstack.org/382476

Generic boot-from-volume (TheJulia, dtantsur)
---------------------------------------------
* trello: https://trello.com/c/UttNjDB7/13-generic-boot-from-volume
- status as of most recent weekly meeting:
- Joanna has been taking on updating/rebasing patches:
- Patch/note tracking etherpad: https://etherpad.openstack.org/p/Ironic-BFV
Ironic Patches:
https://review.openstack.org/#/c/355625/ - Has feedback that needs 
to be addressed
https://review.openstack.org/#/c/366197/ - Has feedback that needs 
to be addressed
https://review.openstack.org/#/c/406290
https://review.openstack.org/#/c/413324 - Has Feedback that needs 
to be addressed
https://review.openstack.org/#/c/214586/ - Volume Connection 
Information Rest API Change - Needs Rebase
Additional patches exist, for python-ironicclient and one for nova.  
Links in the patch/note tracking etherpad.

Rolling upgrades and grenade-partial (rloo, jlvillal)
-----------------------------------------------------
* trello: 
https://trello.com/c/GAlhSzLm/2-rolling-upgrades-and-grenade-with-multi-node
- status as of most recent weekly meeting:
- patches are available, but rloo wants to test so might be best to hold 
off on reviewing because there may be changes
- Testing work:
- 20-Mar-2017: Job running as a non-voting job.
- All that is left to do is after 1-2 weeks to make it a voting job.

Reference architecture guide (jroll)

- no progress this week

Python 3.5 compatibility (JayF, hurricanerix)
---------------------------------------------
- (jlvillal) Proposed a patch: https://review.openstack.org/445636

Deploying with Apache and WSGI in CI (vsaienk0)
-----------------------------------------------
- seems like we can deploy with WSGI, but it still uses a fixed port, instead 
of sub-path
- next one is https://review.openstack.org/#/c/444337/

Driver composition (dtantsur, jroll)

* trello: https://trello.com/c/fTya14y6/14-driver-composition
- gerrit topic: https://review.openstack.org/#/q/status:open+topic:bug/1524745
- status as of most recent weekly meeting:
- TODO as of 20 Mar 2017
- install guide / admin guide docs
- client changes:
- driver commands update: https://review.openstack.org/419274
- node-update update: https://review.openstack.org/#/c/431542/
- new hardware types:
- ilo: https://review.openstack.org/#/c/439404/
- contentious topics:
- what to do about driver properties API and dynamic drivers?
- rloo and dtantsur started brainstorming: 
https://etherpad.openstack.org/p/ironic-driver-properties-reform

Feature parity between two CLIs (rloo, dtantsur)

- OSC driver-properties spec is work in progress: 
https://review.openstack.org/#/c/439907/
- we don't have an API to show driver properties for dynamic drivers (we show 
hardware type + default interfaces): 
https://bugs.launchpad.net/ironic/+bug/1671549. This should not be a blocker 
for the missing OSC commands but since this will also need OSC support, it 
might have an impact on the OSC commands we eventually decide on.

OSC default API version change (mariojv, dtantsur)
--------------------------------------------------
- 3/20 update
- https://review.openstack.org/#/c/442153/ 

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Jay Pipes

On 03/20/2017 03:08 PM, Adrian Otto wrote:

Team,

Stephen Watson has been working on a magnum feature to add magnum commands to 
the openstack client by implementing a plugin:

https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc

In review of this work, a question has resurfaced, as to what the client 
command name should be for magnum related commands. Naturally, we’d like to 
have the name “cluster” but that word is already in use by Senlin.


Unfortunately, the Senlin API uses a whole bunch of generic terms as 
top-level REST resources, including "cluster", "event", "action", 
"profile", "policy", and "node". :( I've warned before that use of these 
generic terms in OpenStack APIs without a central group responsible for 
curating the API would lead to problems like this. This is why, IMHO, we 
need the API working group to be ultimately responsible for preventing 
this type of thing from happening. Otherwise, there ends up being a 
whole bunch of duplication and same terms being used for entirely 
different things.


>Stephen opened a discussion with Dean Troyer about this, and found 
that “infra” might be a suitable name and began using that, and multiple 
team members are not satisfied with it.


Yeah, not sure about "infra". That is both too generic and not an actual 
"thing" that Magnum provides.


> The name “magnum” was excluded from consideration because OSC aims to 
be project name agnostic. We know that no matter what word we pick, it’s 
not going to be ideal. I’ve added an agenda on our upcoming team meeting 
to judge community consensus about which alternative we should select:


https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2017-03-21_1600_UTC

Current choices on the table are:

  * c_cluster (possible abbreviation alias for container_infra_cluster)
  * coe_cluster
  * mcluster
  * infra

For example, our selected name would appear in “openstack …” commands. Such as:

$ openstack c_cluster create …

If you have input to share, I encourage you to reply to this thread, or come to 
the team meeting so we can consider your input before the team makes a 
selection.


What is Magnum's service-types-authority service_type?

Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Adrian Otto
Kevin,

I added that to the list for consideration. Feel free to add others to the list 
on the team agenda using our Wiki page.

Adrian

> On Mar 20, 2017, at 12:27 PM, Fox, Kevin M  wrote:
> 
> What about coe?
> 
> Thanks,
> Kevin
> 
> From: Adrian Otto [adrian.o...@rackspace.com]
> Sent: Monday, March 20, 2017 12:08 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: [openstack-dev] [magnum][osc] What name to use for magnum commands
> in osc?
> 
> Team,
> 
> Stephen Watson has been working on a magnum feature to add magnum commands 
> to the openstack client by implementing a plugin:
> 
> https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc
> 
> In review of this work, a question has resurfaced, as to what the client 
> command name should be for magnum related commands. Naturally, we’d like to 
> have the name “cluster” but that word is already in use by Senlin. Stephen 
> opened a discussion with Dean Troyer about this, and found that “infra” might 
> be a suitable name and began using that, and multiple team members are not 
> satisfied with it. The name “magnum” was excluded from consideration because 
> OSC aims to be project name agnostic. We know that no matter what word we 
> pick, it’s not going to be ideal. I’ve added an agenda on our upcoming team 
> meeting to judge community consensus about which alternative we should select:
> 
> https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2017-03-21_1600_UTC
> 
> Current choices on the table are:
> 
>  * c_cluster (possible abbreviation alias for container_infra_cluster)
>  * coe_cluster
>  * mcluster
>  * infra
> 
> For example, our selected name would appear in “openstack …” commands. Such 
> as:
> 
> $ openstack c_cluster create …
> 
> If you have input to share, I encourage you to reply to this thread, or come 
> to the team meeting so we can consider your input before the team makes a 
> selection.
> 
> Thanks,
> 
> Adrian
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][barbican][castellan] Proposal to rename Castellan to oslo.keymanager

2017-03-20 Thread Morgan Fainberg
On Mon, Mar 20, 2017 at 12:23 PM, Dave McCowan (dmccowan)
 wrote:
> +1 from me.  That looks easy to implement and maintain.
>
> On 3/20/17, 2:49 PM, "Davanum Srinivas"  wrote:
>
>>Dave,
>>
>>Here's the precendent from oslo.policy:
>>https://review.openstack.org/#/admin/groups/556,members
>>
>>The reason for setting it up this way with individuals + oslo core +
>>keystone core is to make sure both core teams are involved in the
>>review process and any future contributors who are not part of either
>>team can be give core rights in oslo.policy.
>>
>>Is it ok to continue this model?
>>
>>Thanks,
>>Dims
>>
>>On Mon, Mar 20, 2017 at 9:20 AM, Dave McCowan (dmccowan)
>> wrote:
>>> This sounds good to me.  I see it as a "promotion" for Castellan into
>>>the
>>> core of OpenStack.  I think a good first step in this direction is to
>>> create a castellan-drivers team in Launchpad and a castellan-core team
>>>in
>>> Gerrit.  We can seed the list with Barbican core reviewers and any Oslo
>>> volunteers.
>>>
>>> The Barbican/Castellan weekly IRC meeting is today at 2000UTC in
>>> #openstack-meeting-alt, if anyone want to join to discuss.
>>>
>>> Thanks!
>>> dave-mccowan
>>>
>>> On 3/16/17, 12:43 PM, "Davanum Srinivas"  wrote:
>>>
+1 from me to bring castellan under Oslo governance with folks from
both oslo and Barbican as reviewers without a project rename. Let's
see if that helps get more adoption of castellan

Thanks,
Dims

On Thu, Mar 16, 2017 at 12:25 PM, Farr, Kaitlin M.
 wrote:
> This thread has generated quite the discussion, so I will try to
> address a few points in this email, echoing a lot of what Dave said.
>
> Clint originally explained what we are trying to solve very well. The
>hope was
> that the rename would emphasize that Castellan is just a basic
> interface that supports operations common between key managers
> (the existing Barbican back end and other back ends that may exist
> in the future), much like oslo.db supports the common operations
> between PostgreSQL and MySQL. The thought was that renaming to have
> oslo part of the name would help reinforce that it's just an
>interface,
> rather than a standalone key manager. Right now, the only Castellan
> back end that would work in DevStack is Barbican. There has been talk
> in the past for creating other Castellan back ends (Vault or Tang),
>but
> no one has committed to writing the code for those yet.
>
> The intended proposal was to rename the project, maintain the current
> review team (which is only a handful of Barbican people), and bring on
> a few Oslo folks, if any were available and interested, to give advice
> about (and +2s for) OpenStack library best practices. However, perhaps
> pulling it under oslo's umbrella without a rename is blessing it
>enough.
>
> In response to Julien's proposal to make Castellan "the way you can do
> key management in Python" -- it would be great if Castellan were that
> abstract, but in practice it is pretty OpenStack-specific. Currently,
> the Barbican team is great at working on key management projects
> (including both Barbican and Castellan), but a lot of our focus now is
> how we can maintain and grow integration with the rest of the
>OpenStack
> projects, for which having the name and expertise of oslo would be a
> great help.
>
> Thanks,
>
> Kaitlin
>
>___
>__
>_
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Davanum Srinivas :: https://twitter.com/dims


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>_
>>>_
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>--
>>Davanum Srinivas :: https://twitter.com/dims
>>
>>__
>>OpenStack Development Mailing List (not for usage questions)
>>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> 

Re: [openstack-dev] [infra][tripleo] initial discussion for a new periodic pipeline

2017-03-20 Thread Paul Belanger
On Sun, Mar 19, 2017 at 06:54:27PM +0200, Sagi Shnaidman wrote:
> Hi, Paul
> I would say that a real worthwhile try starts from "normal" priority, because
> we want to run promotion jobs more *often*, not more *rarely*, which happens
> with low priority.
> In addition, the initial idea in the first mail was running them almost one
> after another, not once a day as happens now or with "low" priority.
> 
As I've said, my main reluctance is how the gate will react if we create a
new pipeline with the same priority as our check pipeline.  I would much rather
err on the side of caution, default to 'low', see how things react for a day / week /
month, then see what it would look like as normal.  I want us to be cautious about
adding a new pipeline, as it dynamically changes how our existing pipelines
function.

Furthermore, this is actually a capacity issue for tripleo-test-cloud-rh1:
there are currently too many jobs running for the amount of hardware. If these jobs
were running on our donated clouds, we could get away with a low priority
periodic pipeline.

Now, allow me to propose another solution.

The RDO project has its own version of zuul, which has the ability to do periodic
pipelines.  Since tripleo-test-cloud-rh2 is still around, and has OVB ability, I
would suggest configuring this promotion pipeline within RDO, so as not to affect
the capacity of tripleo-test-cloud-rh1.  This now means you can continuously
enqueue jobs at a rate of 4 hours; priority shouldn't matter, as yours are the only
jobs running on tripleo-test-cloud-rh2, resulting in faster promotions.

This also makes sense, as packaging is done in RDO, and you are triggering Centos
CI things as a result.
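
For anyone following along, the pipeline definition itself is small either way; 
a rough sketch of what it could look like in a zuul v2 layout.yaml (the pipeline 
name and cron schedule are illustrative assumptions, not a concrete proposal):

pipelines:
  - name: periodic-tripleo-promote
    description: Periodic jobs for TripleO repository promotion.
    manager: IndependentPipelineManager
    precedence: low
    trigger:
      timer:
        - time: '0 */4 * * *'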

> Thanks
> 
> On Wed, Mar 15, 2017 at 11:16 PM, Paul Belanger 
> wrote:
> 
> > On Wed, Mar 15, 2017 at 03:42:32PM -0500, Ben Nemec wrote:
> > >
> > >
> > > On 03/13/2017 02:29 PM, Sagi Shnaidman wrote:
> > > > Hi, all
> > > >
> > > > I submitted a change: https://review.openstack.org/#/c/443964/
> > > > but seems like it reached a point which requires an additional
> > discussion.
> > > >
> > > > I had a few proposals, it's increasing period to 12 hours instead of 4
> > > > for start, and to leave it in regular periodic *low* precedence.
> > > > I think we can start from 12 hours period to see how it goes, although
> > I
> > > > don't think that 4 only jobs will increase load on OVB cloud, it's
> > > > completely negligible comparing to current OVB capacity and load.
> > > > But making its precedence as "low" IMHO completely removes any sense
> > > > from this pipeline to be, because we already run experimental-tripleo
> > > > pipeline which this priority and it could reach timeouts like 7-14
> > > > hours. So let's assume we ran periodic job, it's queued to run now 12 +
> > > > "low queue length" - about 20 and more hours. It's even worse than
> > usual
> > > > periodic job and definitely makes this change useless.
> > > > I'd like to notice as well that those periodic jobs unlike "usual"
> > > > periodic are used for repository promotion and their value are equal or
> > > > higher than check jobs, so it needs to run with "normal" or even "high"
> > > > precedence.
> > >
> > > Yeah, it makes no sense from an OVB perspective to add these as low
> > priority
> > > jobs.  Once in a while we've managed to chew through the entire
> > experimental
> > > queue during the day, but with the containers job added it's very
> > unlikely
> > > that's going to happen anymore.  Right now we have a 4.5 hour wait time
> > just
> > > for the check queue, then there's two hours of experimental jobs queued
> > up
> > > behind that.  All of which means if we started a low priority periodic
> > job
> > > right now it probably wouldn't run until about midnight my time, which I
> > > think is when the regular periodic jobs run now.
> > >
> > Lets just give it a try? A 12 hour periodic job with low priority. There is
> > nothing saying we cannot iterate on this after a few days / weeks / months.
> >
> > > >
> > > > Thanks
> > > >
> > > >
> > > > On Thu, Mar 9, 2017 at 10:06 PM, Wesley Hayutin wrote:
> > > >
> > > >
> > > >
> > > > On Wed, Mar 8, 2017 at 1:29 PM, Jeremy Stanley wrote:
> > > >
> > > > On 2017-03-07 10:12:58 -0500 (-0500), Wesley Hayutin wrote:
> > > > > The TripleO team would like to initiate a conversation about
> > the
> > > > > possibility of creating a new pipeline in Openstack Infra to
> > allow
> > > > > a set of jobs to run periodically every four hours
> > > > [...]
> > > >
> > > > The request doesn't strike me as contentious/controversial.
> > Why not
> > > > just propose your addition to the zuul/layout.yaml file in the
> > > > openstack-infra/project-config repo and hash out any resulting
> > > > concerns via code review?
> > > > --
> 

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Fox, Kevin M
What about coe?

Thanks,
Kevin

From: Adrian Otto [adrian.o...@rackspace.com]
Sent: Monday, March 20, 2017 12:08 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [magnum][osc] What name to use for magnum commands 
in osc?

Team,

Stephen Watson has been working on a magnum feature to add magnum commands to 
the openstack client by implementing a plugin:

https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc

In review of this work, a question has resurfaced, as to what the client 
command name should be for magnum related commands. Naturally, we’d like to 
have the name “cluster” but that word is already in use by Senlin. Stephen 
opened a discussion with Dean Troyer about this, and found that “infra” might 
be a suitable name and began using that, and multiple team members are not 
satisfied with it. The name “magnum” was excluded from consideration because 
OSC aims to be project name agnostic. We know that no matter what word we pick, 
it’s not going to be ideal. I’ve added an agenda on our upcoming team meeting 
to judge community consensus about which alternative we should select:

https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2017-03-21_1600_UTC

Current choices on the table are:

  * c_cluster (possible abbreviation alias for container_infra_cluster)
  * coe_cluster
  * mcluster
  * infra

For example, our selected name would appear in “openstack …” commands. Such as:

$ openstack c_cluster create …

If you have input to share, I encourage you to reply to this thread, or come to 
the team meeting so we can consider your input before the team makes a 
selection.

Thanks,

Adrian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][barbican][castellan] Proposal to rename Castellan to oslo.keymanager

2017-03-20 Thread Dave McCowan (dmccowan)
+1 from me.  That looks easy to implement and maintain.

On 3/20/17, 2:49 PM, "Davanum Srinivas"  wrote:

>Dave,
>
>Here's the precendent from oslo.policy:
>https://review.openstack.org/#/admin/groups/556,members
>
>The reason for setting it up this way with individuals + oslo core +
>keystone core is to make sure both core teams are involved in the
>review process and any future contributors who are not part of either
>team can be give core rights in oslo.policy.
>
>Is it ok to continue this model?
>
>Thanks,
>Dims
>
>On Mon, Mar 20, 2017 at 9:20 AM, Dave McCowan (dmccowan)
> wrote:
>> This sounds good to me.  I see it as a "promotion" for Castellan into
>>the
>> core of OpenStack.  I think a good first step in this direction is to
>> create a castellan-drivers team in Launchpad and a castellan-core team
>>in
>> Gerrit.  We can seed the list with Barbican core reviewers and any Oslo
>> volunteers.
>>
>> The Barbican/Castellan weekly IRC meeting is today at 2000UTC in
>> #openstack-meeting-alt, if anyone want to join to discuss.
>>
>> Thanks!
>> dave-mccowan
>>
>> On 3/16/17, 12:43 PM, "Davanum Srinivas"  wrote:
>>
>>>+1 from me to bring castellan under Oslo governance with folks from
>>>both oslo and Barbican as reviewers without a project rename. Let's
>>>see if that helps get more adoption of castellan
>>>
>>>Thanks,
>>>Dims
>>>
>>>On Thu, Mar 16, 2017 at 12:25 PM, Farr, Kaitlin M.
>>> wrote:
 This thread has generated quite the discussion, so I will try to
 address a few points in this email, echoing a lot of what Dave said.

 Clint originally explained what we are trying to solve very well. The
hope was
 that the rename would emphasize that Castellan is just a basic
 interface that supports operations common between key managers
 (the existing Barbican back end and other back ends that may exist
 in the future), much like oslo.db supports the common operations
 between PostgreSQL and MySQL. The thought was that renaming to have
 oslo part of the name would help reinforce that it's just an
interface,
 rather than a standalone key manager. Right now, the only Castellan
 back end that would work in DevStack is Barbican. There has been talk
 in the past for creating other Castellan back ends (Vault or Tang),
but
 no one has committed to writing the code for those yet.

 The intended proposal was to rename the project, maintain the current
 review team (which is only a handful of Barbican people), and bring on
 a few Oslo folks, if any were available and interested, to give advice
 about (and +2s for) OpenStack library best practices. However, perhaps
 pulling it under oslo's umbrella without a rename is blessing it
enough.

 In response to Julien's proposal to make Castellan "the way you can do
 key management in Python" -- it would be great if Castellan were that
 abstract, but in practice it is pretty OpenStack-specific. Currently,
 the Barbican team is great at working on key management projects
 (including both Barbican and Castellan), but a lot of our focus now is
 how we can maintain and grow integration with the rest of the
OpenStack
 projects, for which having the name and expertise of oslo would be a
 great help.

 Thanks,

 Kaitlin

___
__
_
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>>--
>>>Davanum Srinivas :: https://twitter.com/dims
>>>
>>>
>>>__
>>>OpenStack Development Mailing List (not for usage questions)
>>>Unsubscribe: 
>>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> 
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>-- 
>Davanum Srinivas :: https://twitter.com/dims
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

[openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-20 Thread Adrian Otto
Team,

Stephen Watson has been working on a magnum feature to add magnum commands to 
the openstack client by implementing a plugin:

https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc

In review of this work, a question has resurfaced, as to what the client 
command name should be for magnum related commands. Naturally, we’d like to 
have the name “cluster” but that word is already in use by Senlin. Stephen 
opened a discussion with Dean Troyer about this, and found that “infra” might 
be a suitable name and began using that, and multiple team members are not 
satisfied with it. The name “magnum” was excluded from consideration because 
OSC aims to be project name agnostic. We know that no matter what word we pick, 
it’s not going to be ideal. I’ve added an agenda on our upcoming team meeting 
to judge community consensus about which alternative we should select:

https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2017-03-21_1600_UTC

Current choices on the table are:

  * c_cluster (possible abbreviation alias for container_infra_cluster)
  * coe_cluster
  * mcluster
  * infra

For example, our selected name would appear in “openstack …” commands. Such as:

$ openstack c_cluster create …
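
Mechanically, whichever name we pick is just the OSC plugin registration in 
python-magnumclient's setup.cfg; a rough sketch of the mechanism (the module 
paths and class names below are illustrative assumptions):

[entry_points]
openstack.cli.extension =
    container_infra = magnumclient.osc.plugin

openstack.container_infra.v1 =
    coe_cluster_create = magnumclient.osc.v1.cluster:CreateCluster
    coe_cluster_list = magnumclient.osc.v1.cluster:ListCluster

Underscores in an entry point name map to spaces on the command line, so only 
the words themselves are up for debate.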

If you have input to share, I encourage you to reply to this thread, or come to 
the team meeting so we can consider your input before the team makes a 
selection.

Thanks,

Adrian

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][barbican][castellan] Proposal to rename Castellan to oslo.keymanager

2017-03-20 Thread Davanum Srinivas
Dave,

Here's the precendent from oslo.policy:
https://review.openstack.org/#/admin/groups/556,members

The reason for setting it up this way with individuals + oslo core +
keystone core is to make sure both core teams are involved in the
review process and any future contributors who are not part of either
team can be give core rights in oslo.policy.

Is it ok to continue this model?

Thanks,
Dims

On Mon, Mar 20, 2017 at 9:20 AM, Dave McCowan (dmccowan)
 wrote:
> This sounds good to me.  I see it as a "promotion" for Castellan into the
> core of OpenStack.  I think a good first step in this direction is to
> create a castellan-drivers team in Launchpad and a castellan-core team in
> Gerrit.  We can seed the list with Barbican core reviewers and any Oslo
> volunteers.
>
> The Barbican/Castellan weekly IRC meeting is today at 2000UTC in
> #openstack-meeting-alt, if anyone want to join to discuss.
>
> Thanks!
> dave-mccowan
>
> On 3/16/17, 12:43 PM, "Davanum Srinivas"  wrote:
>
>>+1 from me to bring castellan under Oslo governance with folks from
>>both oslo and Barbican as reviewers without a project rename. Let's
>>see if that helps get more adoption of castellan
>>
>>Thanks,
>>Dims
>>
>>On Thu, Mar 16, 2017 at 12:25 PM, Farr, Kaitlin M.
>> wrote:
>>> This thread has generated quite the discussion, so I will try to
>>> address a few points in this email, echoing a lot of what Dave said.
>>>
>>> Clint originally explained what we are trying to solve very well. The
>>>hope was
>>> that the rename would emphasize that Castellan is just a basic
>>> interface that supports operations common between key managers
>>> (the existing Barbican back end and other back ends that may exist
>>> in the future), much like oslo.db supports the common operations
>>> between PostgreSQL and MySQL. The thought was that renaming to have
>>> oslo part of the name would help reinforce that it's just an interface,
>>> rather than a standalone key manager. Right now, the only Castellan
>>> back end that would work in DevStack is Barbican. There has been talk
>>> in the past for creating other Castellan back ends (Vault or Tang), but
>>> no one has committed to writing the code for those yet.
>>>
>>> The intended proposal was to rename the project, maintain the current
>>> review team (which is only a handful of Barbican people), and bring on
>>> a few Oslo folks, if any were available and interested, to give advice
>>> about (and +2s for) OpenStack library best practices. However, perhaps
>>> pulling it under oslo's umbrella without a rename is blessing it enough.
>>>
>>> In response to Julien's proposal to make Castellan "the way you can do
>>> key management in Python" -- it would be great if Castellan were that
>>> abstract, but in practice it is pretty OpenStack-specific. Currently,
>>> the Barbican team is great at working on key management projects
>>> (including both Barbican and Castellan), but a lot of our focus now is
>>> how we can maintain and grow integration with the rest of the OpenStack
>>> projects, for which having the name and expertise of oslo would be a
>>> great help.
>>>
>>> Thanks,
>>>
>>> Kaitlin
>>>
>>>_
>>>_
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>>--
>>Davanum Srinivas :: https://twitter.com/dims
>>
>>__
>>OpenStack Development Mailing List (not for usage questions)
>>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][tripleo] initial discussion for a new periodic pipeline

2017-03-20 Thread Ben Nemec
As another data point, from Wednesday to Friday last week rh1 ran at 
full capacity pretty much round the clock.  There were experimental jobs 
that queued for at least 18 hours.


Granted, this is a symptom of a capacity problem we've exacerbated by 
adding the containers OVB job, but even without that the experimental 
jobs generally weren't getting run until after US working hours, at 
which point we're only a few hours from the regular periodic pipeline run.


On 03/19/2017 11:54 AM, Sagi Shnaidman wrote:

Hi, Paul
I would say that a real worthwhile try starts from "normal" priority,
because we want to run promotion jobs more *often*, not more *rarely*,
which happens with low priority.
In addition, the initial idea in the first mail was running them almost
one after another, not once a day as happens now or with "low"
priority.

Thanks

On Wed, Mar 15, 2017 at 11:16 PM, Paul Belanger wrote:

On Wed, Mar 15, 2017 at 03:42:32PM -0500, Ben Nemec wrote:
>
>
> On 03/13/2017 02:29 PM, Sagi Shnaidman wrote:
> > Hi, all
> >
> > I submitted a change: https://review.openstack.org/#/c/443964/

> > but seems like it reached a point which requires an additional
discussion.
> >
> > I had a few proposals, it's increasing period to 12 hours
instead of 4
> > for start, and to leave it in regular periodic *low* precedence.
> > I think we can start from 12 hours period to see how it goes,
although I
> > don't think that 4 only jobs will increase load on OVB cloud, it's
> > completely negligible comparing to current OVB capacity and load.
> > But making its precedence as "low" IMHO completely removes any sense
> > from this pipeline to be, because we already run
experimental-tripleo
> > pipeline which this priority and it could reach timeouts like 7-14
> > hours. So let's assume we ran periodic job, it's queued to run
now 12 +
> > "low queue length" - about 20 and more hours. It's even worse
than usual
> > periodic job and definitely makes this change useless.
> > I'd like to notice as well that those periodic jobs unlike "usual"
> > periodic are used for repository promotion and their value are
equal or
> > higher than check jobs, so it needs to run with "normal" or even
"high"
> > precedence.
>
> Yeah, it makes no sense from an OVB perspective to add these as
low priority
> jobs.  Once in a while we've managed to chew through the entire
experimental
> queue during the day, but with the containers job added it's very
unlikely
> that's going to happen anymore.  Right now we have a 4.5 hour wait
time just
> for the check queue, then there's two hours of experimental jobs
queued up
> behind that.  All of which means if we started a low priority
periodic job
> right now it probably wouldn't run until about midnight my time,
which I
> think is when the regular periodic jobs run now.
>
Lets just give it a try? A 12 hour periodic job with low priority.
There is
nothing saying we cannot iterate on this after a few days / weeks /
months.

> >
> > Thanks
> >
> >
> > On Thu, Mar 9, 2017 at 10:06 PM, Wesley Hayutin wrote:
> >
> >
> >
> > On Wed, Mar 8, 2017 at 1:29 PM, Jeremy Stanley wrote:
> >
> > On 2017-03-07 10:12:58 -0500 (-0500), Wesley Hayutin wrote:
> > > The TripleO team would like to initiate a conversation
about the
> > > possibility of creating a new pipeline in Openstack
Infra to allow
> > > a set of jobs to run periodically every four hours
> > [...]
> >
> > The request doesn't strike me as
contentious/controversial. Why not
> > just propose your addition to the zuul/layout.yaml file
in the
> > openstack-infra/project-config repo and hash out any
resulting
> > concerns via code review?
> > --
> > Jeremy Stanley
> >
> >
> > Sounds good to me.
> > We thought it would be nice to walk through it in an email
first :)
> >
> > Thanks
> >
> >
> >
 __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> >
 openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

> >
 

Re: [openstack-dev] [kolla] rabbitmq cluster_partition_handling config in kolla-ansible

2017-03-20 Thread Sam Yaple
Hello Nikita,

There is no technical reason this cannot be made variable. I don't think
anyone could come up with a valid reason to block such a patch.

However, I would ask what you plan to gain from _not_ having it 'autoheal'?
The other options for partition handling are basically "let it partition
and do nothing" and "quarantine the partitioned node". Each of those
require an operator to take action. I have not personally known a single
OpenStack operator to ever go and recover a message from a partitioned
rabbitmq node and reinject it into the cluster. In fact, I do not know if
that would even be an advisable action given the retries that exist within
OpenStack. Not to mention the times when the resource was, say, a new port
in Neutron and you reinject the message after the VM consuming that port
was deleted.

With the reasons above, it is hard to justify anything but 'autoheal' for
OpenStack specifically. I certainly don't see any advantages.

Now that the ask has been made though, a variable would be 2 lines of code
in total, so I say go for it.
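
Roughly, and assuming the usual kolla-ansible role layout (the variable name 
below is my own invention), that would be one line in 
ansible/roles/rabbitmq/defaults/main.yml:

    rabbitmq_cluster_partition_handling: "autoheal"

and one in the rabbitmq.config template, replacing the hardcoded value:

    {cluster_partition_handling, {{ rabbitmq_cluster_partition_handling }}},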

Thanks,
SamYaple

Sam Yaple

On Mon, Mar 20, 2017 at 2:43 PM, Nikita Gerasimov <
nikita.gerasi...@oracle.com> wrote:

> Hi,
>
> Since [1] kolla-ansible has had the rabbitmq cluster_partition_handling option
> hard-coded to 'autoheal'. According to [2] it's not the best mode for 3+ node
> clusters with a reliable network.
> Is it reasonable to make this option changeable by the user, or even to place
> some logic to pick the mode based on cluster structure?
> Or we have a reason to keep it hard-coded?
>
>
> [1] https://github.com/openstack/kolla-ansible/commit/0c6594c258
> 64d0c90cd0009726cee84967fe65dc
> [2] https://www.rabbitmq.com/partitions.html
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] rabbitmq cluster_partition_handling config in kolla-ansible

2017-03-20 Thread Michał Jastrzębski
Hello,

It would be extremely hard to detect a "reliable" network... if it even exists :)
We use autohealing because we also use the rabbitmq clusterer plugin,
which simplifies recovery from partition. As for the hardcoding, one thing
I noticed is that these files indeed aren't overridable. I published
this bug [1] to fix it.

[1] https://bugs.launchpad.net/kolla-ansible/+bug/1674446

On 20 March 2017 at 07:43, Nikita Gerasimov  wrote:
> Hi,
>
> Since [1] kolla-ansible has had the rabbitmq cluster_partition_handling option
> hard-coded to 'autoheal'. According to [2] it's not the best mode for 3+ node
> clusters with a reliable network.
> Is it reasonable to make this option changeable by the user, or even to place
> some logic to pick the mode based on cluster structure?
> Or we have a reason to keep it hard-coded?
>
>
> [1]
> https://github.com/openstack/kolla-ansible/commit/0c6594c25864d0c90cd0009726cee84967fe65dc
> [2] https://www.rabbitmq.com/partitions.html
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-sfc] About insertion modes and SFC Encapsulation

2017-03-20 Thread Cathy Zhang
Hi Igor,

Moving the correlation from port-pair to port-pair-group makes sense. In the 
future I think we should add all new attributes for an SF to 
port-pair-group-param.

But I think L2/L3 is different from encap type NSH or MPLS. An L3 type SF can 
support either NSH or MPLS. I would suggest the following:

port-pair-group (port-pair-group-params):
    insertion-mode:
        - L2
        - L3 (default)
    correlation:
        - MPLS
        - NSH
    tap-enabled:
        - False (default)
        - True

Thanks,
Cathy

From: Duarte Cardoso, Igor [mailto:igor.duarte.card...@intel.com]
Sent: Monday, March 20, 2017 8:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [networking-sfc] About insertion modes and SFC 
Encapsulation

Hi networking-sfc,

At the latest IRC meeting [1] it was agreed to split TAP from the possible 
insertion modes (initial spec version [2]).

I took the ARs to propose coexistence of insertion modes, correlation and (now) 
a new tap-enabled attribute, and to send this email about possible directions.

Here are my thoughts, let me know yours:


1.   My expectation for future PP and PPG if TAP+insertion modes go ahead 
and nothing else changes (only relevant details outlined):

port-pair (service-function-params):
    correlation:
        - MPLS
        - None (default)

port-pair-group (port-pair-group-params):
    insertion-mode:
        - L2
        - L3 (default)
    tap-enabled:
        - False (default)
        - True


2.   What I propose for future PP and PPG (only relevant details outlined):

port-pair (service-function-params):

port-pair-group (port-pair-group-params):
    mode:
        - L2
        - L3 (default)
        - MPLS
        - NSH
    tap-enabled:
        - False (default)
        - True

With what's proposed in 2.:
- every combination will be possible with no clashes and no validation required.
- port-pair-groups will always group "homogeneous" sets of port-pairs, making 
load-balancing and next-hop processing simpler and consistent.
- the "forwarding" details of a Service Function are no longer dictated both by 
port-pair and port-pair-group, but rather only by port-pair-group.
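
To make that concrete, creating a group under proposal 2 could look something 
like this (hypothetical CLI flags; none of these attributes exist today):

    neutron port-pair-group-create --port-pair PP1 --port-pair PP2 \
        --port-pair-group-parameters mode=NSH,tap-enabled=false PPG1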

Are there any use cases for having next-hop SF candidates (individual 
port-pairs) supporting different SFC Encapsulation protocols?
I understand, however, that removing correlation from port-pairs might not be 
ideal given that it's a subtractive API change.

[1] 
http://eavesdrop.openstack.org/meetings/service_chaining/2017/service_chaining.2017-03-16-17.02.html
[2] https://review.openstack.org/#/c/442195/
[3] 
https://github.com/openstack/networking-sfc/blob/17c537b35d41a3e1fd80da790ae668e52cea6b88/doc/source/system_design%20and_workflow.rst#usage

Best regards,
Igor.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] [keystone] [federated auth] [ocata] federated users with "admin" role not authorized for nova, cinder, neutron admin panels

2017-03-20 Thread Evan Bollig PhD
Hey Boris,

Any updates on this?

Cheers,
-E
--
Evan F. Bollig, PhD
Scientific Computing Consultant, Application Developer | Scientific
Computing Solutions (SCS)
Minnesota Supercomputing Institute | msi.umn.edu
University of Minnesota | umn.edu
boll0...@umn.edu | 612-624-1447 | Walter Lib Rm 556


On Thu, Mar 9, 2017 at 4:08 PM, Evan Bollig PhD  wrote:
> Hey Boris,
>
> Which mapping? Hope you were looking for the shibboleth user
> mapping. Also, hope this is the right way to share the paste (first
> time using this):
> http://paste.openstack.org/show/3snCb31GRZfAuQxdRouy/
>
> Cheers,
> -E
> --
> Evan F. Bollig, PhD
> Scientific Computing Consultant, Application Developer | Scientific
> Computing Solutions (SCS)
> Minnesota Supercomputing Institute | msi.umn.edu
> University of Minnesota | umn.edu
> boll0...@umn.edu | 612-624-1447 | Walter Lib Rm 556
>
>
> On Thu, Mar 9, 2017 at 7:50 AM, Boris Bobrov  wrote:
>> Hi,
>>
>> Please paste your mapping to paste.openstack.org
>>
>> On 03/09/2017 02:07 AM, Evan Bollig PhD wrote:
>>> I am on Ocata with Shibboleth auth enabled. I noticed that Federated
>>> users with the admin role no longer have authorization to use the
>>> Admin** panels in Horizon related to Nova, Cinder and Neutron. All
>>> regular Identity and Project tabs function, and there are no problems
>>> with authorization for local admin users.
>>>
>>> -
>>> These Admin tabs work: Hypervisors, Host Aggregates, Flavors, Images,
>>> Defaults, Metadata, System Information
>>>
>>> These result in logout: Instances, Volumes, Networks, Routers, Floating IPs
>>>
>>> This is not present: Overview
>>> -
>>>
>>> The policies are vanilla from the CentOS/RDO openstack-dashboard RPMs:
>>> openstack-dashboard-11.0.0-1.el7.noarch
>>> python-django-horizon-11.0.0-1.el7.noarch
>>> python2-keystonemiddleware-4.14.0-1.el7.noarch
>>> python2-keystoneclient-3.10.0-1.el7.noarch
>>> openstack-keystone-11.0.0-1.el7.noarch
>>> python2-keystoneauth1-2.18.0-1.el7.noarch
>>> python-keystone-11.0.0-1.el7.noarch
>>>
>>> The errors I see in logs are similar to:
>>>
>>> ==> /var/log/horizon/horizon.log <==
>>> 2017-03-07 18:24:54,961 13745 ERROR horizon.exceptions Unauthorized:
>>> Traceback (most recent call last):
>>>   File 
>>> "/usr/share/openstack-dashboard/openstack_dashboard/dashboards/admin/floating_ips/views.py",
>>> line 53, in get_tenant_list
>>> tenants, has_more = api.keystone.tenant_list(request)
>>>   File "/usr/share/openstack-dashboard/openstack_dashboard/api/keystone.py",
>>> line 351, in tenant_list
>>> manager = VERSIONS.get_project_manager(request, admin=admin)
>>>   File "/usr/share/openstack-dashboard/openstack_dashboard/api/keystone.py",
>>> line 61, in get_project_manager
>>> manager = keystoneclient(*args, **kwargs).projects
>>>   File "/usr/share/openstack-dashboard/openstack_dashboard/api/keystone.py",
>>> line 170, in keystoneclient
>>> raise exceptions.NotAuthorized
>>> NotAuthorized
>>>
>>> Cheers,
>>> -E
>>> --
>>> Evan F. Bollig, PhD
>>> Scientific Computing Consultant, Application Developer | Scientific
>>> Computing Solutions (SCS)
>>> Minnesota Supercomputing Institute | msi.umn.edu
>>> University of Minnesota | umn.edu
>>> boll0...@umn.edu | 612-624-1447 | Walter Lib Rm 556
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [picasso] Bi-weekly meeting (March 21, 1700 UTC) canceled

2017-03-20 Thread Derek Schultz
Hello everyone,

Due to team members being on vacation, the Picasso IRC meeting has been
canceled for this week. We will resume on April 4th.

Feel free to reach out in #openstack-functions in the meantime.

Regards,
Derek Schultz
Picasso PTL
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] Moving Gnocchi out

2017-03-20 Thread Ian Cordasco
On Mon, Mar 20, 2017 at 12:10 PM, gordon chung  wrote:
>
>
> On 20/03/17 11:37 AM, Thomas Goirand wrote:
>
>>
>> I really don't understand why the Telemetry team insists in being
>> release-independent, out of big tent and such, when the reality is that
>> all of released Telemetry components are *very tightly* bound to a
>> specific versions of OpenStack. IMO, it doesn't make sense upstream, or
>> downstream of Telemetry.
>
> i believe the tightly coupled perception between gnocchi+ceilometer is a
> misconception. ceilometer can be configured to output to various targets
> that are not gnocchi. based on dev questions in irc, this is a common
> workflow that people are actively leveraging. aodh and panko are
> definitely more bound to ceilometer as they don't have any other sources
> (currently).
>
>>
>> Now, having Gnocchi out of the OpenStack infra is to me a step in the
>> wrong direction. We should aim at full integration with the rest of
>> OpenStack, not getting out.
>>
>
> i should re-iterate, this won't change our testing or integration.
> ceilometer has a gate that ensures compatibility with gnocchi as a
> target. this will remain and the auto-scaling
> aodh+ceilometer+gnocchi+heat use case will continue to be validated. not
> sure how we can quantify/qualify 'full integration' but we remain
> committed to ensuring gnocchi+ceilometer works.
>
> the use case for gnocchi is generic. if you have to store a bunch of
> timestamp+value data, use gnocchi. the use case definitely fits
> openstack's requirement, but i believe you can see it isn't just limited
> to that.
>
> i'm glad we have your opinion here, i had previously asked jd about
> effects on packaging and while i think Red Hat has a plan already, it'd
> be interesting to get your feedback on how this will affect other distros.

Keep in mind, that OpenStack inside of Debian is just Thomas for a
variety of reasons. Others have tried to help and are trying to help
and aren't really able to stick around.

The effects on downstreams shouldn't be significant. People packaging
Ceilometer likely already package Gnocchi. How those packagers choose
to consume deliverables is what will change. In a similar vein,
OpenStack Infra has a signing key for each release cycle that Gnocchi
is likely currently signed with when tarballs are released. You may
receive complaints that new releases aren't verifiable in the same
way, but that and needing a decent MANIFEST.in (assuming you're also
dropping your usage of PBR) will probably be the largest problems
(beyond where downstreams get the actual deliverable).

In the end, I think this should inform your decision but not make it.
-- 
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] Moving Gnocchi out

2017-03-20 Thread gordon chung


On 20/03/17 11:37 AM, Thomas Goirand wrote:

>
> I really don't understand why the Telemetry team insists in being
> release-independent, out of big tent and such, when the reality is that
> all of released Telemetry components are *very tightly* bound to a
> specific versions of OpenStack. IMO, it doesn't make sense upstream, or
> downstream of Telemetry.

i believe the tightly coupled perception between gnocchi+ceilometer is a 
misconception. ceilometer can be configured to output to various targets 
that are not gnocchi. based on dev questions in irc, this is a common 
workflow that people are actively leveraging. aodh and panko are 
definitely more bound to ceilometer as they don't have any other sources 
(currently).

>
> Now, having Gnocchi out of the OpenStack infra is to me a step in the
> wrong direction. We should aim at full integration with the rest of
> OpenStack, not getting out.
>

i should re-iterate, this won't change our testing or integration. 
ceilometer has a gate that ensures compatibility with gnocchi as a 
target. this will remain and the auto-scaling 
aodh+ceilometer+gnocchi+heat use case will continue to be validated. not 
sure how we can quantify/qualify 'full integration' but we remain 
committed to ensuring gnocchi+ceilometer works.

the use case for gnocchi is generic. if you have to store a bunch of 
timestamp+value data, use gnocchi. the use case definitely fits 
openstack's requirement, but i believe you can see it isn't just limited 
to that.

i'm glad we have your opinion here, i had previously asked jd about 
effects on packaging and while i think Red Hat has a plan already, it'd 
be interesting to get your feedback on how this will affect other distros.

cheers,

-- 
gord

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] Moving Gnocchi out

2017-03-20 Thread Jeremy Stanley
On 2017-03-20 12:57:17 -0400 (-0400), Ian Cordasco wrote:
[...]
> Contributing to OpenStack is intimidating, if not utterly
> discouraging, to people unfamiliar with CLAs and Gerrit. There's a lot
> of process that goes into contributing. Moving this to a friendlier
> (if not inferior) developer platform makes sense if there is interest
> from companies not interested in participating in the OpenStack
> community.

Agreed. Granted these are all things I think we can fix in time (and
we do have ideas or plans to address them), but it's taking a while
to turn the boat around and I can't blame projects for not wanting
to continue waiting it out.

Another point JD brought up in his review response to similar
questions I posed is that a lot of people see OpenStack projects
(rightly or wrongly) as tightly intertwined and assume that to use
any one service you need (at least some of) the others too. This was
a major takeaway from the joint BoD/TC/UC meeting earlier this
month, and the hope is that in the future we'll work collectively
toward making it easier for services to stand on their own and serve
independent use cases (as well as the use cases they serve in
concert today). Some OpenStack services are already there and others
are getting there, so it's a trend we're seeing start to solve
itself anyway.
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] Moving Gnocchi out

2017-03-20 Thread Ian Cordasco
-Original Message-
From: Chris Friesen 
Reply: OpenStack Development Mailing List (not for usage questions)

Date: March 20, 2017 at 11:39:38
To: openstack-dev@lists.openstack.org 
Subject:  Re: [openstack-dev] [telemetry] Moving Gnocchi out

> On 03/20/2017 10:10 AM, Chris Dent wrote:
> > On Mon, 20 Mar 2017, Thomas Goirand wrote:
> >
> >> I really don't understand why the Telemetry team insists in being
> >> release-independent, out of big tent and such, when the reality is that
> >> all of released Telemetry components are *very tightly* bound to a
> >> specific versions of OpenStack. IMO, it doesn't make sense upstream, or
> >> downstream of Telemetry.
> >
> > This simply isn't the case with gnocchi. Gnocchi is an independent
> > timeseries, metrics and resources data service that _happens_ to
> > work with OpenStack.
> >
> > By making it independent of OpenStack, its ability to draw
> > contribution and engagement from people outside the OpenStack
> > community increases. As a result it can become a better tool for
> > more people, including OpenStack people. Not all, or even many, of
> > the OpenStack projects are like that, but gnocchi is. More eyes,
> > less bugs, right?
>
> I'm curious why being independent of OpenStack would make it more attractive.
>
> Is the perception that requiring people to sign the Contributor Agreement is
> holding back external contribution? Or is it just that the mere idea of it
> being an OpenStack project is discouraging people from getting involved?
>
> Just as an example, if I want to get involved with libvirt because I have an
> itch to scratch, the fact that it's basically a Red Hat project isn't going to
> turn me off...

Contributing to OpenStack is intimidating, if not utterly
discouraging, to people unfamiliar with CLAs and Gerrit. There's a lot
of process that goes into contributing. Moving this to a friendlier
(if not inferior) developer platform makes sense if there is interest
from companies not interested in participating in the OpenStack
community.

--
Ian Cordasco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] volunteers for cross project liaisons

2017-03-20 Thread Loo, Ruby
Hi Solio,

Thanks for volunteering to be the liaison for the logging working group. The 
projects listed there are the ones that have liaisons with the working group. 
It gives everyone an idea of which projects have liaisons. So your name will be 
added there shortly, as the ironic liaison :)

There is a section in our etherpad [1] (look for 'Cross-project'), where you 
could add anything of interest (i.e. anything from the logging working group 
that impacts ironic). We look at (and discuss) these during our weekly ironic 
meetings [2].

--ruby

[1] https://etherpad.openstack.org/p/IronicWhiteBoard
[2] https://wiki.openstack.org/wiki/Meetings/Ironic

From: "Sarabia, Solio" 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, March 20, 2017 at 12:22 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [ironic] volunteers for cross project liaisons

Hi Ruby.
I’d like to volunteer for logging, in case ironic still needs a liaison with 
this group. In the CrossProjectLiaisons wiki [1], for the logging group, only a 
few projects are listed. Does this mean that the projects listed there are the 
only ones needing help? Or, as people volunteer, are their names added to the list?

Btw, I’m relatively new to ironic as well, and would like to contribute in 
areas where ironic needs help.

-Solio

From: Rushil Chugh 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Saturday, March 18, 2017 at 9:17 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [ironic] volunteers for cross project liaisons

Hi Ruby,

I am still new to the Ironic project and want to get more involved. I would 
like to be the oslo project liaison as this would give me an opportunity to 
become more active within the Ironic community. I have worked with multiple 
OpenStack projects in prior jobs and have a few connections within the Oslo 
community. This would give me valuable experience learning more about the Oslo 
project and I believe I have the time commitment to take this on.

Thanks,
Rushil

On Wed, Mar 15, 2017 at 11:11 AM, Loo, Ruby 
> wrote:
Hi,

The ironic community is looking for volunteers to be cross-project liaisons [1] 
for these projects:
- oslo
- logging working group
- i18n

The expectations are documented in [1] on a per-project basis. The amount of 
commitment varies depending on the project (and I don't know what that might 
be).

[insert here why it would be an awesome experience for you, fame, fortune, ... 
:D]

--ruby

[1] https://wiki.openstack.org/wiki/CrossProjectLiaisons

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Rushil Chugh
MS - Computer Networks
North Carolina State University
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] volunteers for cross project liaisons

2017-03-20 Thread Loo, Ruby
Hi Rushil,

We (or I am) are happy that you are volunteering to be the oslo project 
liaison. Thanks!

There is a section in our etherpad [1] (look for 'Cross-project'), where you 
could add anything of interest (i.e. anything from oslo that impacts ironic). 
We look at (and discuss) these during our weekly ironic meetings [2].

--ruby

[1] https://etherpad.openstack.org/p/IronicWhiteBoard
[2] https://wiki.openstack.org/wiki/Meetings/Ironic

From: Rushil Chugh 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Saturday, March 18, 2017 at 10:17 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [ironic] volunteers for cross project liaisons

Hi Ruby,

I am still new to the Ironic project and want to get more involved. I would 
like to be the oslo project liaison as this would give me an opportunity to 
become more active within the Ironic community. I have worked with multiple 
OpenStack projects in prior jobs and have a few connections within the Oslo 
community. This would give me valuable experience learning more about the Oslo 
project and I believe I have the time commitment to take this on.

Thanks,
Rushil

On Wed, Mar 15, 2017 at 11:11 AM, Loo, Ruby 
> wrote:
Hi,

The ironic community is looking for volunteers to be cross-project liaisons [1] 
for these projects:
- oslo
- logging working group
- i18n

The expectations are documented in [1] on a per-project basis. The amount of 
commitment varies depending on the project (and I don't know what that might 
be).

[insert here why it would be an awesome experience for you, fame, fortune, ... 
:D]

--ruby

[1] https://wiki.openstack.org/wiki/CrossProjectLiaisons

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Rushil Chugh
MS - Computer Networks
North Carolina State University
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] volunteers for cross project liaisons

2017-03-20 Thread Loo, Ruby
Hi Jay,

That makes a lot of sense and thank YOU for being the i18n liaison! :D

--ruby

On 2017-03-15, 11:18 AM, "Jay Faulkner"  wrote:


> On Mar 15, 2017, at 8:11 AM, Loo, Ruby  wrote:
> 
> Hi,
> 
> The ironic community is looking for volunteers to be cross-project 
liaisons [1] for these projects:
> - oslo
> - logging working group
> - i18n

The i18n and docs projects are closely related. I also don’t think they do 
a lot of translating for ironic. Unless we have a contributor who utilizes i18n 
and is more familiar, I can take this on.

-Jay
> 
> The expectations are documented in [1] on a per-project basis. The amount 
of commitment varies depending on the project (and I don't know what that might 
be).
> 
> [insert here why it would be an awesome experience for you, fame, 
fortune, ... :D]
> 
> --ruby
> 
> [1] https://wiki.openstack.org/wiki/CrossProjectLiaisons
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [vitrage] vitrage Resource API

2017-03-20 Thread Trinath Somanchi
In tests?
Get Outlook for iOS


From: dong.wenj...@zte.com.cn 
Sent: Monday, March 20, 2017 2:49:57 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [vitrage] vitrage Resource API


Hi All,



I noticed that the APIs of `resource list` and `resource show` were mocked.

Is there any background for the mock, or is the API not necessary?



BR,

dwj




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] Moving Gnocchi out

2017-03-20 Thread Chris Friesen

On 03/20/2017 10:10 AM, Chris Dent wrote:

On Mon, 20 Mar 2017, Thomas Goirand wrote:


I really don't understand why the Telemetry team insists in being
release-independent, out of big tent and such, when the reality is that
all of released Telemetry components are *very tightly* bound to a
specific versions of OpenStack. IMO, it doesn't make sense upstream, or
downstream of Telemetry.


This simply isn't the case with gnocchi. Gnocchi is an independent
timeseries, metrics and resources data service that _happens_ to
work with OpenStack.

By making it independent of OpenStack, its ability to draw
contribution and engagement from people outside the OpenStack
community increases. As a result it can become a better tool for
more people, including OpenStack people. Not all, or even many, of
the OpenStack projects are like that, but gnocchi is. More eyes,
less bugs, right?


I'm curious why being independent of OpenStack would make it more attractive.

Is the perception that requiring people to sign the Contributor Agreement is 
holding back external contribution?  Or is it just that the mere idea of it 
being an OpenStack project is discouraging people from getting involved?


Just as an example, if I want to get involved with libvirt because I have an 
itch to scratch, the fact that it's basically a Red Hat project isn't going to 
turn me off...


Thanks,
Chris

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] volunteers for cross project liaisons

2017-03-20 Thread Sarabia, Solio
Hi Ruby.
I’d like to volunteer for logging, in case ironic still needs a liaison with 
this group. In the CrossProjectLiaisons wiki [1], for the logging group, only a 
few projects are listed. Does this mean that the projects listed there are the 
only ones needing help? Or, as people volunteer, are their names added to the list?

Btw, I’m relatively new to ironic as well, and would like to contribute in 
areas where ironic needs help.

-Solio

From: Rushil Chugh 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Saturday, March 18, 2017 at 9:17 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [ironic] volunteers for cross project liaisons

Hi Ruby,

I am still new to the Ironic project and want to get more involved. I would 
like to be the oslo project liaison as this would give me an opportunity to 
become more active within the Ironic community. I have worked with multiple 
OpenStack projects in prior jobs and have a few connections within the Oslo 
community. This would give me valuable experience learning more about the Oslo 
project and I believe I have the time commitment to take this on.

Thanks,
Rushil

On Wed, Mar 15, 2017 at 11:11 AM, Loo, Ruby 
> wrote:
Hi,

The ironic community is looking for volunteers to be cross-project liaisons [1] 
for these projects:
- oslo
- logging working group
- i18n

The expectations are documented in [1] on a per-project basis. The amount of 
commitment varies depending on the project (and I don't know what that might 
be).

[insert here why it would be an awesome experience for you, fame, fortune, ... 
:D]

--ruby

[1] https://wiki.openstack.org/wiki/CrossProjectLiaisons

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Rushil Chugh
MS - Computer Networks
North Carolina State University
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [dib] diskimage-builder v2 RC1 release; request for test

2017-03-20 Thread Mikhail Medvedev
On Fri, Mar 17, 2017 at 3:23 PM, Andre Florath  wrote:
> Hello!
>
> Thanks for the bug report. Can you please file this as a bug?

Hi Andre,
Submitted the bug https://bugs.launchpad.net/diskimage-builder/+bug/1674402

>
> There is a very high probability that I introduced a change that
> leads to the failure [1] - even if this is fixed now there is a
> high probability that it will fail again when [2] is merged.
>
> The reason is that there are no test cases, because there is no
> nodepool CI job running on PPC. (Or am I missing something here?)

Correct, there isn't a ppc CI running on diskimage-builder patches.

>
> We are only a very few people at diskimage-builder with limited
> resources and had to concentrate on the 'main-line' (i.e.: that
> what can be tested by us). A discussion about what to support
> or test was already started some time ago [3].
>
> It looks like you are from IBM: would it be possible to provide
> PPC hardware for testing and the manpower to integrate
> this into the CI?
> This would really help finding such problems during development
> phase and would put me into the situation to be able to fix your
> problem.
>

Agreed, there is little that can be done without being able to test the failure case.

Would adding a third-party CI job help? I can put together a
functional job on ppc64. I assume we want a job based on
gate-dib-dsvm-functests-*?

> Kind regards
>
> Andre
>
> [1] https://review.openstack.org/#/c/375261/
> [2] https://review.openstack.org/#/c/444586/
> [3] https://review.openstack.org/#/c/418204/
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] Moving Gnocchi out

2017-03-20 Thread Chris Dent

On Mon, 20 Mar 2017, Thomas Goirand wrote:


I really don't understand why the Telemetry team insists in being
release-independent, out of big tent and such, when the reality is that
all of released Telemetry components are *very tightly* bound to a
specific versions of OpenStack. IMO, it doesn't make sense upstream, or
downstream of Telemetry.


This simply isn't the case with gnocchi. Gnocchi is an independent
timeseries, metrics and resources data service that _happens_ to
work with OpenStack.

By making it independent of OpenStack, its ability to draw
contribution and engagement from people outside the OpenStack
community increases. As a result it can become a better tool for
more people, including OpenStack people. Not all, or even many, of
the OpenStack projects are like that, but gnocchi is. More eyes,
less bugs, right?

--
Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
freenode: cdent tw: @anticdent
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry] Moving Gnocchi out

2017-03-20 Thread Thomas Goirand
On 03/20/2017 11:24 AM, Julien Danjou wrote:
> Hi,
> 
> After a lot of talk within the Gnocchi team, it appeared to us that
> Gnocchi, which has been wisely tagged as 'independent' since its
> inception

I'm not sure I find this decision as fine as you do.

> has a lot of potential usage outside of OpenStack directly.

Any example?

> Being part of the big tent helped the project to be built, but it now
> appears that it restrains its adoption and contribution from users
> outside of the OpenStack realm.

Why?

> Therefore, the Gnocchi team has decided to move the project outside of
> the OpenStack Big Tent. As a first step, a patch has been submitted to
> the governance to delist the project from Telemetry:
> 
>   https://review.openstack.org/447438
> 
> As a second step, the project will likely move out of the OpenStack
> infrastructure in the future.

Of course, this is only the voice of an outsider, "only" doing the
packaging of the finalized code, so excuse me if it sounds like I don't
understand anything (probably it's the case). But...

I really don't understand why the Telemetry team insists in being
release-independent, out of big tent and such, when the reality is that
all of released Telemetry components are *very tightly* bound to a
specific versions of OpenStack. IMO, it doesn't make sense upstream, or
downstream of Telemetry.

Now, having Gnocchi out of the OpenStack infra is to me a step in the
wrong direction. We should aim at full integration with the rest of
OpenStack, not getting out.

Cheers,

Thomas Goirand (zigo)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-docs] [tripleo] Creating official Deployment guide for TripleO

2017-03-20 Thread Carlos Camacho Gonzalez
Hey,

I'd like to collaborate; please just let me know what I can do to help
with this task.

Might it be a good idea to have a list of tasks in the blueprint?

Also, I think this could be called the Deployment/Upgrade guide for TripleO.

Cheers,
Carlos.



On Mon, Mar 20, 2017 at 3:26 PM, Sanjay Upadhyay 
wrote:

>
>
> On Mon, Mar 20, 2017 at 5:31 PM, Emilien Macchi 
> wrote:
>
>> I proposed a blueprint to track the work done:
>>
>> https://blueprints.launchpad.net/tripleo/+spec/tripleo-deploy-guide
>> Target: pike-3
>>
>> Volunteers to work on it with me, please let me know.
>>
>
> Please add me (irc handle - saneax), I am interested in this.
>
> regards
> /sanjay
>
>
>> Thanks,
>>
>> On Tue, Mar 14, 2017 at 7:00 AM, Alexandra Settle 
>> wrote:
>> > Hey Emilien,
>> >
>> > You pretty much covered it all! Docs team is happy to provide guidance,
>> but in reality, it should be a fairly straight forward process.
>> >
>> > The Kolla team just completed their deploy-guide patches and were able
>> to help refine the process a bit further. Hopefully this should help the
>> TripleO team :)
>> >
>> > Reach out if you have any questions at all :)
>> >
>> > Thanks,
>> >
>> > Alex
>> >
>> > On 3/13/17, 10:32 PM, "Emilien Macchi"  wrote:
>> >
>> > Team,
>> >
>> > [adding Alexandra, OpenStack Docs PTL]
>> >
>> > It seems like there is a common interest in pushing deployment
>> guides
>> > for different OpenStack Deployment projects: OSA, Kolla.
>> > The landing page is here:
>> > https://docs.openstack.org/project-deploy-guide/newton/
>> >
>> > And one example:
>> > https://docs.openstack.org/project-deploy-guide/openstack-
>> ansible/newton/
>> >
>> > I think this is pretty awesome and it would bring more visibility
>> for
>> > TripleO project, and help our community to find TripleO
>> documentation
>> > from a consistent place.
>> >
>> > The good news, is that openstack-docs team built a pretty solid
>> > workflow to make that happen:
>> > https://docs.openstack.org/contributor-guide/project-deploy
>> -guide.html
>> > And we don't need to create new repos or do any crazy changes. It
>> > would probably be some refactoring and sphinx things.
>> >
>> > Alexandra, please add any words if I missed something obvious.
>> >
>> > Feedback from the team would be welcome here before we engage any
>> work,
>> >
>> > Thanks!
>> > --
>> > Emilien Macchi
>> >
>> >
>>
>>
>>
>> --
>> Emilien Macchi
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Ironic review party invitation

2017-03-20 Thread Mario Villaplana
Hi ironic team,

I'm proposing that we move this to 20:00 UTC Thursdays (8pm UTC, 4pm
in my timezone which is US Eastern). I think this will better fit with
the daylight saving time adjustment that most attendees had last week.
Please message me here or in IRC if there are objections. I've updated
the etherpad at https://etherpad.openstack.org/p/ironic-review-party
to reflect this new time.

I am now back from vacation and will be around for this week's review
party. Thanks to whoever led last week's review party.

Mario

On Tue, Feb 28, 2017 at 2:14 PM, Mario Villaplana
 wrote:
> Hi ironic team,
>
> Last cycle, I started holding some informal "review parties" for new
> members of OSIC [0] to come together with other OSIC people who had
> been working on OpenStack for a while and have group discussions about
> technical topics related to ironic and generally how to be more
> effective upstream community members.
>
> People found this decently helpful, with a weighted average of 3.6
> when polled for feedback on the usefulness of the event on a 1-5
> scale.
>
> As a result of some of this feedback, I thought it'd be good to invite
> the whole ironic community to participate in this weekly event.
>
> It'll now be happening Thursdays at 2100 UTC in a Google Hangouts room
> so that more community members can join. Here's an etherpad with the
> Hangouts link and more information about the event:
> https://etherpad.openstack.org/p/ironic-review-party
>
> This is a totally informal, unofficial event. Feel free to put any
> topic you'd like to discuss with a larger audience on that etherpad in
> the section for the current week. I'll be looking over whatever's on
> that list 24 hours in advance of the party to get adequately prepared,
> but feel free to show up with other topics, too.
>
> Thanks!
>
> Mario
>
> [0] https://osic.org/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [networking-sfc] About insertion modes and SFC Encapsulation

2017-03-20 Thread Duarte Cardoso, Igor
Hi networking-sfc,

At the latest IRC meeting [1] it was agreed to split TAP from the possible 
insertion modes (initial spec version [2]).

I took the ARs to propose coexistence of insertion modes, correlation and (now) 
a new tap-enabled attribute, and to send this email about possible directions.

Here are my thoughts, let me know yours:


1.   My expectation for future PP and PPG if TAP+insertion modes go ahead 
and nothing else changes (only relevant details outlined):

    port-pair (service-function-params):
        correlation:
            - MPLS
            - None (default)
    port-pair-group (port-pair-group-params):
        insertion-mode:
            - L2
            - L3 (default)
        tap-enabled:
            - False (default)
            - True


2.   What I propose for future PP and PPG (only relevant details outlined):

    port-pair (service-function-params):
        (empty; correlation removed)
    port-pair-group (port-pair-group-params):
        mode:
            - L2
            - L3 (default)
            - MPLS
            - NSH
        tap-enabled:
            - False (default)
            - True

With what's proposed in 2.:
- every combination will be possible with no clashes and no validation required.
- port-pair-groups will always group "homogeneous" sets of port-pairs, making 
load-balancing and next-hop processing simpler and more consistent.
- the "forwarding" details of a Service Function are no longer dictated both by 
port-pair and port-pair-group, but rather only by port-pair-group.
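
To make proposal 2 concrete, here is a minimal sketch of a port-pair-group 
create request body under that scheme. The "mode" and "tap-enabled" keys are 
the proposed attributes from above; the surrounding JSON layout mirrors 
today's networking-sfc API, but the exact shape is my assumption, not a 
settled design:

    # Hypothetical PPG create body under proposal 2 (illustration only).
    ppg_request = {
        "port_pair_group": {
            "name": "ppg-nsh-tap",
            "port_pairs": ["<port-pair-uuid>"],
            "port_pair_group_parameters": {
                "mode": "NSH",        # replaces insertion-mode + per-PP correlation
                "tap-enabled": True,  # now a separate attribute
            },
        },
    }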

Are there any use cases for having next-hop SF candidates (individual 
port-pairs) supporting different SFC Encapsulation protocols?
I understand, however, that removing correlation from port-pairs might not be 
ideal given that it's a subtractive API change.

[1] 
http://eavesdrop.openstack.org/meetings/service_chaining/2017/service_chaining.2017-03-16-17.02.html
[2] https://review.openstack.org/#/c/442195/
[3] 
https://github.com/openstack/networking-sfc/blob/17c537b35d41a3e1fd80da790ae668e52cea6b88/doc/source/system_design%20and_workflow.rst#usage

Best regards,
Igor.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] New mascot design

2017-03-20 Thread Mario Villaplana
+1, this looks like a nicely stylized version of PXE Boots. Thanks!

Mario

On Mon, Mar 13, 2017 at 3:25 PM, Jim Rollenhagen  
wrote:
>
> On Fri, Mar 10, 2017 at 11:28 AM, Heidi Joy Tretheway
>  wrote:
>>
>> Hi Ironic team,
>> Here’s an update on your project logo. Our illustrator tried to be as true
>> as possible to your original, while ensuring it matched the line weight,
>> color palette and style of the rest. Thanks for your patience as we worked
>> on this! Feel free to direct feedback to me; we really want to get this
>> right for you.
>
>
> This is fantastic! Thank you for putting up with us, I think it turned out
> well in the end.
>
> // jim
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic] Hardware console processes in multi-conductor environment

2017-03-20 Thread Mario Villaplana
Hi Yuriy,

I like idea #4 (building task management functionality into a separate
console driver). I think this was suggested at the PTG, and it's good
because it fits into the existing model ironic has for handling
console.
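
To illustrate, a rough sketch of what such a driver could look like, assuming 
ironic's ConsoleInterface contract (start_console/stop_console/get_console); 
the k8s_client object is a purely hypothetical external task-manager client, 
not an existing library:

    # Sketch of idea #4: delegate console process lifecycle to an external
    # manager so no conductor-local daemons are orphaned on failover.
    from ironic.drivers import base

    class K8sShellinaboxConsole(base.ConsoleInterface):

        def get_properties(self):
            return {}

        def validate(self, task):
            pass

        def start_console(self, task):
            # The external manager, not the conductor host, owns the process.
            k8s_client.ensure_console_pod(task.node.uuid)

        def stop_console(self, task):
            k8s_client.delete_console_pod(task.node.uuid)

        def get_console(self, task):
            return {'type': 'shellinabox',
                    'url': k8s_client.console_url(task.node.uuid)}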

Thanks,
Mario

On Fri, Mar 10, 2017 at 10:42 AM, Yuriy Zveryanskyy
 wrote:
> Hi all.
>
> Hardware node consoles have a specific limitation: the number of
> concurrent console sessions that can be established is limited (often
> to 1). There are some issues (described below) due to a conflict
> between the distributed ironic conductor services and local console
> processes. This affects only the case with local console processes,
> currently shellinabox and socat for example.
>
> There are some possible "global" solutions:
>
> 1) A pluggable internal task API [1], currently rejected by the community;
> 2) A non-pluggable internal task API that uses an external service
> (there is no suitable service currently in OpenStack);
> 3) Custom distributed process management based on ssh access between
> ironic conductor hosts (looks like a hack);
> 4) New console interface drivers which implement task management
> internally (like "k8s_shellinabox", "k8s_socat").
>
> And partial solutions (some of them proposed below) are possible.
>
> In a multi-conductor environment an ironic conductor process can die
> or be stopped/blocked (removed), or be started/restarted (added).
> Possible cases:
>
> 1) Conductor removed
> a) gracefully stopped. Some daemon processes like shellinabox for
> consoles can continue to run. This issue can currently be fixed as a
> separate bug.
> b) died/killed. Daemon processes can continue to run. This issue can
> be fixed only by distributed task management (the "global" solutions
> above).
> c) the whole host with the conductor died. No fix needed.
>
> 2) Conductor added/restarted
> A new conductor tries to start processes for enabled consoles, but
> currently the processes on the conductor hosts that worked with these
> nodes before are not stopped [2]. I see two possible solutions for
> this issue:
>
> 1) An "untakeover" periodic task for stopping console processes.
> For this solution we should not stop non-local consoles.
> 2) Do not stop the process on the old conductor. Use redefined RPC
> routing (based on the conductor that started the console, saved into
> the DB) on the API side for setting the console, and wait for it to
> stop via the API. This routing should also ignore dead conductors.
>
> If you have some ideas please leave comments.
>
> [1] https://review.openstack.org/#/c/431605/
>
> [2] https://bugs.launchpad.net/ironic/+bug/1632192
>
> Yuriy Zveryanskyy
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Propose Attila Darazs and Gabriele Cerami for tripleo-ci core

2017-03-20 Thread Emilien Macchi
On Wed, Mar 15, 2017 at 11:44 AM, John Trowbridge  wrote:
> Both Attila and Gabriele have been rockstars with the work to transition
> tripleo-ci to run via quickstart, and both have become extremely
> knowledgeable about how tripleo-ci works during that process. They are
> both very capable of providing thorough and thoughtful reviews of
> tripleo-ci patches.
>
> On top of this Attila has greatly increased the communication from the
> tripleo-ci squad as the liason, with weekly summary emails of our
> meetings to this list.

Indeed, both Attila and Gabriele were extremely involved in TripleO CI
/ quickstart transition, and very responsive when things went wrong.
Their number of reviews in the tripleo-ci repo is not that high, but I
suspect that is related to the recent move to tripleo-quickstart-extras,
where some CI bits now live.
That said, I would expect them to keep increasing their number of reviews
on this repository and to keep getting more and more involved in TripleO CI.

So +1 to both and thanks again for your hard work!
I'm very happy to see the CI team growing, it's really awesome.

> - trown
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [kolla] rabbitmq cluster_partition_handling config in kolla-ansible

2017-03-20 Thread Nikita Gerasimov

Hi,

Since [1], kolla-ansible has had the rabbitmq cluster_partition_handling 
option hard-coded to 'autoheal'. According to [2] this is not the best mode 
for 3+ node clusters with a reliable network.
Is it reasonable to make this option configurable by the user, or even to 
add some logic that picks the mode based on the cluster structure?

Or do we have a reason to keep it hard-coded?


[1] 
https://github.com/openstack/kolla-ansible/commit/0c6594c25864d0c90cd0009726cee84967fe65dc

[2] https://www.rabbitmq.com/partitions.html
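
As a sketch of the "logic based on cluster structure" idea, something like 
the following could compute the default, with deployers still able to 
override it; the function and its inputs are hypothetical, not existing 
kolla-ansible code:

    # Pick a partition-handling mode from cluster shape, following the
    # RabbitMQ guidance in [2]: pause_minority suits 3+ nodes on a
    # reliable network, while autoheal favors availability.
    def pick_partition_handling(node_count, network_is_reliable=True):
        if node_count >= 3 and network_is_reliable:
            return "pause_minority"
        return "autoheal"

    print(pick_partition_handling(3))  # -> pause_minority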


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-docs] [tripleo] Creating official Deployment guide for TripleO

2017-03-20 Thread Sanjay Upadhyay
On Mon, Mar 20, 2017 at 5:31 PM, Emilien Macchi  wrote:

> I proposed a blueprint to track the work done:
>
> https://blueprints.launchpad.net/tripleo/+spec/tripleo-deploy-guide
> Target: pike-3
>
> Volunteers to work on it with me, please let me know.
>

Please add me (irc handle - saneax), I am interested in this.

regards
/sanjay


> Thanks,
>
> On Tue, Mar 14, 2017 at 7:00 AM, Alexandra Settle 
> wrote:
> > Hey Emilien,
> >
> > You pretty much covered it all! Docs team is happy to provide guidance,
> but in reality, it should be a fairly straight forward process.
> >
> > The Kolla team just completed their deploy-guide patches and were able
> to help refine the process a bit further. Hopefully this should help the
> TripleO team :)
> >
> > Reach out if you have any questions at all :)
> >
> > Thanks,
> >
> > Alex
> >
> > On 3/13/17, 10:32 PM, "Emilien Macchi"  wrote:
> >
> > Team,
> >
> > [adding Alexandra, OpenStack Docs PTL]
> >
> > It seems like there is a common interest in pushing deployment guides
> > for different OpenStack Deployment projects: OSA, Kolla.
> > The landing page is here:
> > https://docs.openstack.org/project-deploy-guide/newton/
> >
> > And one example:
> > https://docs.openstack.org/project-deploy-guide/
> openstack-ansible/newton/
> >
> > I think this is pretty awesome and it would bring more visibility for
> > TripleO project, and help our community to find TripleO documentation
> > from a consistent place.
> >
> > The good news, is that openstack-docs team built a pretty solid
> > workflow to make that happen:
> > https://docs.openstack.org/contributor-guide/project-
> deploy-guide.html
> > And we don't need to create new repos or do any crazy changes. It
> > would probably be some refactoring and sphinx things.
> >
> > Alexandra, please add any words if I missed something obvious.
> >
> > Feedback from the team would be welcome here before we engage any
> work,
> >
> > Thanks!
> > --
> > Emilien Macchi
> >
> >
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] [classifier] New CCF meeting time decided (next tomorrow)

2017-03-20 Thread Duarte Cardoso, Igor
Hi all,

After [1], a new meeting time/day has been decided.

With 6 votes, Tuesdays at 1400 UTC was the most wanted time slot [2].

Eavesdrop has been updated to reflect this [3] and all up-to-date information 
can be found at [4].

Due to conflicting times in the IRC channel, the meeting also switched from odd 
to even weeks, meaning that the next meeting will be tomorrow (March 21st) at 
1400 UTC.

Let me also take this opportunity to invite you to review the Common 
Classification Framework spec [5] and the early PoC code based on 
neutron/classifier [6].

[1] http://lists.openstack.org/pipermail/openstack-dev/2017-March/113370.html
[2] http://doodle.com/poll/9s2meppbkdg7ya7e (as of today)
[3] https://review.openstack.org/#/c/441068/
[4] http://eavesdrop.openstack.org/#Neutron_Common_Classification_Framework
[5] https://review.openstack.org/#/c/333993/
[6] https://review.openstack.org/#/c/445577/

Best regards,
Igor.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][barbican][castellan] Proposal to rename Castellan to oslo.keymanager

2017-03-20 Thread Dave McCowan (dmccowan)
This sounds good to me.  I see it as a "promotion" for Castellan into the
core of OpenStack.  I think a good first step in this direction is to
create a castellan-drivers team in Launchpad and a castellan-core team in
Gerrit.  We can seed the list with Barbican core reviewers and any Oslo
volunteers.
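
For anyone new to the thread, here is a minimal sketch of the interface being 
discussed, assuming Castellan's documented key-manager API; the context 
object would come from the calling service, and the backend (Barbican today) 
is resolved from oslo.config behind key_manager.API():

    # Store, fetch and delete a secret through the Castellan interface.
    from castellan import key_manager
    from castellan.common.objects import passphrase

    km = key_manager.API()  # backend chosen via oslo.config
    secret = passphrase.Passphrase(b'super-secret')
    secret_id = km.store(context, secret)   # 'context' supplied by the caller
    retrieved = km.get(context, secret_id)
    km.delete(context, secret_id)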

The Barbican/Castellan weekly IRC meeting is today at 2000 UTC in
#openstack-meeting-alt, if anyone wants to join the discussion.

Thanks!
dave-mccowan

On 3/16/17, 12:43 PM, "Davanum Srinivas"  wrote:

>+1 from me to bring castellan under Oslo governance with folks from
>both oslo and Barbican as reviewers without a project rename. Let's
>see if that helps get more adoption of castellan
>
>Thanks,
>Dims
>
>On Thu, Mar 16, 2017 at 12:25 PM, Farr, Kaitlin M.
> wrote:
>> This thread has generated quite the discussion, so I will try to
>> address a few points in this email, echoing a lot of what Dave said.
>>
>> Clint originally explained what we are trying to solve very well. The
>>hope was
>> that the rename would emphasize that Castellan is just a basic
>> interface that supports operations common between key managers
>> (the existing Barbican back end and other back ends that may exist
>> in the future), much like oslo.db supports the common operations
>> between PostgreSQL and MySQL. The thought was that renaming to have
>> oslo part of the name would help reinforce that it's just an interface,
>> rather than a standalone key manager. Right now, the only Castellan
>> back end that would work in DevStack is Barbican. There has been talk
>> in the past for creating other Castellan back ends (Vault or Tang), but
>> no one has committed to writing the code for those yet.
>>
>> The intended proposal was to rename the project, maintain the current
>> review team (which is only a handful of Barbican people), and bring on
>> a few Oslo folks, if any were available and interested, to give advice
>> about (and +2s for) OpenStack library best practices. However, perhaps
>> pulling it under oslo's umbrella without a rename is blessing it enough.
>>
>> In response to Julien's proposal to make Castellan "the way you can do
>> key management in Python" -- it would be great if Castellan were that
>> abstract, but in practice it is pretty OpenStack-specific. Currently,
>> the Barbican team is great at working on key management projects
>> (including both Barbican and Castellan), but a lot of our focus now is
>> how we can maintain and grow integration with the rest of the OpenStack
>> projects, for which having the name and expertise of oslo would be a
>> great help.
>>
>> Thanks,
>>
>> Kaitlin
>> 
>>_
>>_
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
>>openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
>-- 
>Davanum Srinivas :: https://twitter.com/dims
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage] vitrage Resource API

2017-03-20 Thread dong.wenjuan
Hi All,




I noticed that the APIs of `resource list` and `resource show` were mocked.

Is there any background for the mock, or is the API not necessary?




BR,

dwj
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-docs] [tripleo] Creating official Deployment guide for TripleO

2017-03-20 Thread Emilien Macchi
I proposed a blueprint to track the work done:

https://blueprints.launchpad.net/tripleo/+spec/tripleo-deploy-guide
Target: pike-3

Volunteers to work on it with me, please let me know.

Thanks,

On Tue, Mar 14, 2017 at 7:00 AM, Alexandra Settle  wrote:
> Hey Emilien,
>
> You pretty much covered it all! Docs team is happy to provide guidance, but 
> in reality, it should be a fairly straight forward process.
>
> The Kolla team just completed their deploy-guide patches and were able to 
> help refine the process a bit further. Hopefully this should help the TripleO 
> team :)
>
> Reach out if you have any questions at all :)
>
> Thanks,
>
> Alex
>
> On 3/13/17, 10:32 PM, "Emilien Macchi"  wrote:
>
> Team,
>
> [adding Alexandra, OpenStack Docs PTL]
>
> It seems like there is a common interest in pushing deployment guides
> for different OpenStack Deployment projects: OSA, Kolla.
> The landing page is here:
> https://docs.openstack.org/project-deploy-guide/newton/
>
> And one example:
> https://docs.openstack.org/project-deploy-guide/openstack-ansible/newton/
>
> I think this is pretty awesome and it would bring more visibility for
> TripleO project, and help our community to find TripleO documentation
> from a consistent place.
>
> The good news, is that openstack-docs team built a pretty solid
> workflow to make that happen:
> https://docs.openstack.org/contributor-guide/project-deploy-guide.html
> And we don't need to create new repos or do any crazy changes. It
> would probably be some refactoring and sphinx things.
>
> Alexandra, please add any words if I missed something obvious.
>
> Feedback from the team would be welcome here before we engage any work,
>
> Thanks!
> --
> Emilien Macchi
>
>



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [requirements][keystone][glance] WebOb

2017-03-20 Thread Davanum Srinivas
Dear Keystone and Glance teams,

The WebOb update of u-c to 1.7.1 has been stuck for a while [3]. Can you
please prioritize the keystone review [1] and the glance review [2]
this week?

Thanks,
Dims

[1] https://review.openstack.org/#/c/417591/
[2] https://review.openstack.org/#/c/422234/
[3] https://review.openstack.org/#/c/423366/

-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [telemetry] Moving Gnocchi out

2017-03-20 Thread Julien Danjou
Hi,

After a lot of talk within the Gnocchi team, it appeared to us that
Gnocchi, which has been wisely tagged as 'independent' since its
inception, has a lot of potential usage outside of OpenStack directly.

Being part of the big tent helped the project to be built, but it now
appears that it restrains its adoption and contribution from users
outside of the OpenStack realm.

Therefore, the Gnocchi team has decided to move the project outside of
the OpenStack Big Tent. As a first step, a patch has been submitted to
the governance to delist the project from Telemetry:

  https://review.openstack.org/447438

As a second step, the project will likely move out of the OpenStack
infrastructure in the future.

We expect Gnocchi to continue to thrive and be used by OpenStack, such
as Ceilometer, which Gnocchi is now its primary storage backend. Gnocchi
will also continue to be developed with support for OpenStack base
services.

And if you have any other question, feel free to ask us!

Cheers,
-- 
Julien Danjou
;; Free Software hacker
;; https://julien.danjou.info


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [mogan] Nominating liusheng for Mogan core

2017-03-20 Thread Rui Chen
+1

Liusheng is a responsible reviewer and keeps up good review quality in Mogan.

Thank you for working hard on Mogan, Liusheng.

2017-03-20 16:19 GMT+08:00 Zhenguo Niu :

> Hi team,
>
> I would like to nominate liusheng to Mogan core. Liusheng has been a
> significant code contributor since the project's creation, providing
> high-quality reviews.
>
> Please feel free to respond in public or private your support or any
> concerns.
>
>
> Thanks,
> Zhenguo
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mogan] Nominating liusheng for Mogan core

2017-03-20 Thread Zhenguo Niu
Hi team,

I would like to nominate liusheng to Mogan core. Liusheng has been a
significant code contributor since the project's creation, providing
high-quality reviews.

Please feel free to respond in public or private your support or any
concerns.


Thanks,
Zhenguo
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][tap-as-a-service] neutron-lib breakage

2017-03-20 Thread Gary Kotton
Hi,
Can the tap guys please look at https://review.openstack.org/447277? All 
projects using the tap project are currently stuck.
Thanks
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [stackalytics] user correction seems not effective

2017-03-20 Thread Trinath Somanchi
I’m referring to, https://review.openstack.org/#/c/446942/1


/ Trinath Somanchi.


From: Yujun Zhang (ZTE) [mailto:zhangyujun+...@gmail.com]
Sent: Monday, March 20, 2017 11:00 AM
To: OpenStack Development Mailing List (not for usage questions) 
; shak...@gmail.com
Cc: 张玉军 
Subject: Re: [openstack-dev] [stackalytics] user correction seems not effective

Hi, Trinath

What failure are you referring to exactly? It seems all jobs passed normally.

I know there are some error message in the console logs, but that seems to be 
expected failures for negative test cases.

On Mon, Mar 20, 2017 at 12:16 PM Trinath Somanchi 
> wrote:
Jenkins is throwing some failures in new submissions. That might be an issue.
Get Outlook for iOS


From: Yujun Zhang (ZTE) 
>
Sent: Monday, March 20, 2017 8:47:30 AM
To: shak...@gmail.com
Cc: OpenStack Development Mailing List (not for usage questions); 张玉军
Subject: [openstack-dev] [stackalytics] user correction seems not effective

Hi, Ilya

I submitted a patch for a user correction [1] several months ago. It is supposed 
to reset the email list of user `zhangyujun`, but judging from the response of 
the stackalytics API it does not seem to have taken effect.

curl http://stackalytics.com/api/1.0/users/zhangyujun

{"user": {
    "launchpad_id": "zhangyujun",
    "user_id": "zhangyujun",
    "seq": 65434,
    "company_link": "EasyStack",
    "text": "Zhang Yujun",
    "companies": [
        {"company_name": "ZTE Corporation", "end_date": 1474588800},
        {"company_name": "EasyStack", "end_date": 0}],
    "id": "zhangyujun",
    "static": true,
    "gerrit_id": "zhangyujun",
    "user_name": "Zhang Yujun",
    "emails": [
        "zhangyujun.d...@gmail.com",
        "zhangyu...@gmail.com",
        "284517...@qq.com",
        "zhangyujun+...@gmail.com",
        "yujun.zh...@easystack.cn",
        "zhang.yuj...@zte.com.cn"]}}

The email address in red should be removed if the patch works as expected.

Is there any way I can investigate this issue further? The log messages 
from the stackalytics server might be helpful.
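
While waiting, the API check above is easy to script; a minimal sketch that 
just re-queries the public endpoint and prints the email list the server 
currently holds:

    # Re-query the stackalytics API for the user's recorded email list.
    import json
    import urllib.request

    url = "http://stackalytics.com/api/1.0/users/zhangyujun"
    with urllib.request.urlopen(url) as resp:
        user = json.load(resp)["user"]
    print("\n".join(user["emails"]))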


[1]: https://review.openstack.org/#/c/426502/

--
Yujun Zhang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
--
Yujun Zhang
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev