[openstack-dev] [requirements] Cancelling today's (2017-03-22) meeting

2017-03-21 Thread Tony Breeds
Hi All,
Sorry for the short notice, but as several cores are unavailable for the
meeting today, I'm going to cancel it.  We'll try again next week.

Yours Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tricircle]weekly meeting of Mar. 22

2017-03-21 Thread joehuang
Hello, team,

Agenda of Mar.22 weekly meeting:

  1.  Pike-1 patches review
  2.  Pike-1 release
  3.  Demo and talk on VNF high availability across OpenStack with Tricircle at the
OPNFV Beijing summit
  4.  Open Discussion

How to join:
#  IRC meeting: https://webchat.freenode.net/?channels=openstack-meeting,
every Wednesday starting at 14:00 UTC.



Best Regards
Chaoyi Huang (joehuang)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ptl] Action required ! - Please submit Boston Forum sessions before April 2nd

2017-03-21 Thread joehuang
Hello,

Should we submit a session for the on-boarding slot that Kendall is arranging
via the "first come, first served" process? Or does the on-boarding slot
allocation require another round of selection rather than being "first come,
first served"?

Best Regards
Chaoyi Huang (joehuang)


From: Emilien Macchi [emil...@redhat.com]
Sent: 22 March 2017 0:40
To: OpenStack Development Mailing List
Subject: [openstack-dev] [all][ptl] Action required ! - Please submit Boston 
Forum sessions before April 2nd

Sorry for duplicating the original e-mail from User Committee, but we
want to make sure all projects are aware about the deadline.

http://lists.openstack.org/pipermail/user-committee/2017-March/001856.html

PTLs (and everyone), please make sure topics are submitted before April 2nd.
Please let us know any question,

Thanks!
--
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment][forum] proposing a session about future of configuration management - ops + devs wanted!

2017-03-21 Thread Stephen Hindle
Unfortunately, I won't be in Boston, but I'm very interested in the topic,
as I have to design 'brownfield' OpenStack deployments and operations
runbooks.

On Tue, Mar 21, 2017 at 3:23 PM, Emilien Macchi  wrote:
> OpenStack developers and operators who work on deployments: we need you.
>
> http://forumtopics.openstack.org/cfp/details/15
>
> Abstract: I would like to bring developers and operators into a room to
> discuss the future of Configuration Management in OpenStack.
>
> Until now, we haven't done a good job of collaborating on how we do
> configuration management in a consistent way across OpenStack
> Deployment Tools.
> Some efforts started to emerge in Pike:
> https://etherpad.openstack.org/p/deployment-pike
> And some projects like TripleO started some discussion on future of
> configuration management:
> https://etherpad.openstack.org/p/tripleo-etcd-transition
>
> In this session, we will discuss our common challenges and identify
> actions where projects could collaborate.
>
> Desired people:
> - Folks from Deployment Tools (TripleO, Kolla, OSA, Kubernetes, etc)
> - Operators who deploy OpenStack
>
> Moderator: me + any volunteer.
>
> Any question on this proposal is very welcome by using this thread.
>
> Thanks for reading so far and I'm looking forward to making progress
> on this topic in Boston.
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Stephen Hindle - Senior Systems Engineer
480.807.8189
www.limelight.com



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] container jobs are unstable

2017-03-21 Thread Emilien Macchi
Hey,

I've noticed that the container jobs have looked pretty unstable lately; to me,
it looks like a timeout:
http://logs.openstack.org/19/447319/2/check-tripleo/gate-tripleo-ci-centos-7-ovb-containers-oooq-nv/bca496a/console.html#_2017-03-22_00_08_55_358973

If anyone could file a bug and look into how we can bring the job back to
stability as soon as possible, that would help; I think we want to keep this
job in stable shape. I remember the Container squad wanted it voting because
it was supposed to be stable, but I'm not sure that's the case today.

Also, it would be great to have the container jobs in
http://tripleo.org/cistatus.html - what do you think?

Thanks for your help,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla] Proposing duonghq for core

2017-03-21 Thread duon...@vn.fujitsu.com
Dear Kollaish, 

Thank you for giving me the opportunity to be part of the core-reviewer team.
I will do my best.


Regards,

duonghq


> -Original Message-
> From: Michał Jastrzębski [mailto:inc...@gmail.com]
> Sent: Tuesday, March 21, 2017 10:23 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [kolla] Proposing duonghq for core
> 
> And time is up:) Welcome Duong to core team!
> 
> On 16 March 2017 at 10:32, Dave Walker  wrote:
> > +1, some great contributions.  Looking forward to having Duong on the
> team.
> >
> > --
> > Kind Regards,
> > Dave Walker
> >
> > On 15 March 2017 at 19:52, Vikram Hosakote (vhosakot)
> > 
> > wrote:
> >>
> >> +1  Great job Duong!
> >>
> >>
> >>
> >> Regards,
> >>
> >> Vikram Hosakote
> >>
> >> IRC:  vhosakot
> >>
> >>
> >>
> >> From: Michał Jastrzębski 
> >> Reply-To: "OpenStack Development Mailing List (not for usage
> questions)"
> >> 
> >> Date: Wednesday, March 08, 2017 at 11:21 PM
> >> To: "OpenStack Development Mailing List (not for usage questions)"
> >> 
> >> Subject: [openstack-dev] [kolla] Proposing duonghq for core
> >>
> >>
> >>
> >> Hello,
> >>
> >>
> >>
> >> I'd like to start voting to include Duong (duonghq) in Kolla and
> >>
> >> Kolla-ansible core teams. Voting will be open for 2 weeks (ends at
> >>
> >> 21st of March).
> >>
> >>
> >>
> >> Consider this my +1 vote.
> >>
> >>
> >>
> >> Cheers,
> >>
> >> Michal
> >>
> >>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer]Can't find meter anywhere with ceilometer post REST API

2017-03-21 Thread Hui Xiang
Yurii,

   Thanks, with the config listed above, it works now.

Hui.

On Tue, Mar 21, 2017 at 2:50 PM, Yurii Prokulevych 
wrote:

> Pipeline's config looks good. Could you please enable debug/verbose in
> ceilometer.conf and check ceilometer/collector.log?
>
> ---
> Yurii
>
> On Tue, 2017-03-21 at 11:40 +0800, Hui Xiang wrote:
> > Thanks gordon for your info.
> >
> > The reason we are not using gnocchi in Mitaka is that we are using
> > collectd-ceilometer-plugin [1] to post samples to ceilometer
> > through ceilometer-api; after Mitaka, yes, we will all move to gnocchi.
> >
> >
> > """
> > when posting samples to ceilometer-api, the data goes through
> > pipeline before being stored. therefore, you need notification-agent
> > enabled AND you need to make sure the pipeline.yaml accepts the
> > meter.
> > """
> > Since the posted samples don't have an event_type, I guess you mean I
> > don't need to edit the event_pipeline.yaml, but do need to edit the
> > pipeline.yaml to accept the meter. Could you kindly check whether the
> > simple example below makes sense for accepting the meter?  Does the
> > source name need to match the source field in the sample, or can it be
> > defined as anything?
> >
> > > [{"counter_name": "interface.if_errors",
> > >   "user_id": "5457b977c25e4498a31a3c1c78829631",
> > >   "resource_id": "localhost-ovs-system",
> > >   "timestamp": "2017-03-17T02:26:46",
> > >   "resource_metadata": {},
> > >   "source": "5b1525a8eb2d4739a83b296682aed023:collectd",
> > >   "counter_unit": "Errors/s",
> > >   "counter_volume": 0.0,
> > >   "project_id": "5b1525a8eb2d4739a83b296682aed023",
> > >   "message_id": "2b4ce294-0ab9-11e7-8058-026ea687824d",
> > >   "counter_type": "delta"},
> > >
> >
> >
> > sources:
> > - name: meter_source
> >   interval: 60
> >   meters:
> >   - "interface.if_errors"
> >   sinks:
> >   - meter_sink
> >
> > sinks:
> > - name: meter_sink
> >   transformers:
> >   publishers:
> >   - notifier://
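
For reference, a minimal sketch (not from the thread; the endpoint, port and
token are assumptions, and the v2 API is deprecated) of posting such a sample
with python-requests:

    # Hedged sketch: POST a sample to the deprecated ceilometer v2 API.
    import requests

    CEILOMETER = "http://192.168.0.3:8777"   # assumed ceilometer-api endpoint
    TOKEN = "<keystone-token>"                # assumed valid keystone token

    samples = [{
        "counter_name": "interface.if_errors",
        "counter_type": "delta",
        "counter_unit": "Errors/s",
        "counter_volume": 0.0,
        "resource_id": "localhost-ovs-system",
        "resource_metadata": {},
    }]

    resp = requests.post(
        "%s/v2/meters/interface.if_errors" % CEILOMETER,
        json=samples,
        headers={"X-Auth-Token": TOKEN},
    )
    # A 201 means ceilometer-api accepted the samples and handed them to the
    # notification agent, which then runs them through pipeline.yaml.
    print(resp.status_code, resp.text)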
> >
> >
> >
> > [1]. https://github.com/openstack/collectd-ceilometer-plugin
> >
> >
> > Thanks.
> > Hui.
> >
> >
> > On Tue, Mar 21, 2017 at 4:21 AM, gordon chung  wrote:
> > >
> > >
> > > On 18/03/17 04:54 AM, Hui Xiang wrote:
> > > > Hi folks,
> > > >
> > > >   I am trying to post samples from third-party software to ceilometer
> > > > via the REST API as below, with the Mitaka version. I can see that
> > > > ceilometer-api has received this post, and it seems to have been
> > > > forwarded to the ceilometer notification agent through RMQ.
> > > >
> > >
> > > first and most importantly, the ceilometer-api is deprecated and not
> > > supported upstream anymore. please use gnocchi for proper time series
> > > storage (or whatever storage solution you feel comfortable with)
> > >
> > > >
> > > > 2. LOG
> > > > 56:17] "*POST /v2/meters/interface.if_packets HTTP/1.1*" 201 -
> > > > 2017-03-17 16:56:17.378 52955 DEBUG
> > > oslo_messaging._drivers.amqpdriver
> > > > [req-1c4ea84d-ea53-4518-81ea-6c0bffa9745d
> > > > 5457b977c25e4498a31a3c1c78829631 5b1525a8eb2d4739a83b296682aed023
> > > - - -]
> > > > CAST unique_id: 64a6bae3bbcc4b7dab4dceb13cf7f81b NOTIFY exchange
> > > > 'ceilometer' topic 'notifications.sample' _send
> > > > /usr/lib/python2.7/site-
> > > packages/oslo_messaging/_drivers/amqpdriver.py:438
> > > > 2017-03-17 16:56:17.382 52955 INFO werkzeug
> > > > [req-1c4ea84d-ea53-4518-81ea-6c0bffa9745d
> > > > 5457b977c25e4498a31a3c1c78829631 5b1525a8eb2d4739a83b296682aed023
> > > - - -]
> > > > 192.168.0.3 - - [17/Mar/2017
> > > >
> > > >
> > > > 3. REST API return result
> > > > [{"counter_name": "interface.if_errors",
> > > >   "user_id": "5457b977c25e4498a31a3c1c78829631",
> > > >   "resource_id": "localhost-ovs-system",
> > > >   "timestamp": "2017-03-17T02:26:46",
> > > >   "resource_metadata": {},
> > > >   "source": "5b1525a8eb2d4739a83b296682aed023:collectd",
> > > >   "counter_unit": "Errors/s",
> > > >   "counter_volume": 0.0,
> > > >   "project_id": "5b1525a8eb2d4739a83b296682aed023",
> > > >   "message_id": "2b4ce294-0ab9-11e7-8058-026ea687824d",
> > > >   "counter_type": "delta"},
> > > >
> > >
> > > when posting samples to ceilometer-api, the data goes through pipeline
> > > before being stored. therefore, you need notification-agent enabled AND
> > > you need to make sure the pipeline.yaml accepts the meter.
> > >
> > > --
> > > gord
> > >
> > > ___
> > > ___
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsu
> > > bscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > >
> >

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-21 Thread Qiming Teng
On Tue, Mar 21, 2017 at 10:50:13AM -0400, Jay Pipes wrote:
> On 03/20/2017 09:24 PM, Qiming Teng wrote:
> >On Mon, Mar 20, 2017 at 03:35:18PM -0400, Jay Pipes wrote:
> >>On 03/20/2017 03:08 PM, Adrian Otto wrote:
> >>>Team,
> >>>
> >>>Stephen Watson has been working on an magnum feature to add magnum 
> >>>commands to the openstack client by implementing a plugin:
> >>>
> >>>https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc
> >>>
> >>>In review of this work, a question has resurfaced, as to what the client 
> >>>command name should be for magnum related commands. Naturally, we’d like 
> >>>to have the name “cluster” but that word is already in use by Senlin.
> >>
> >>Unfortunately, the Senlin API uses a whole bunch of generic terms as
> >>top-level REST resources, including "cluster", "event", "action",
> >>"profile", "policy", and "node". :( I've warned before that use of
> >>these generic terms in OpenStack APIs without a central group
> >>responsible for curating the API would lead to problems like this.
> >>This is why, IMHO, we need the API working group to be ultimately
> >>responsible for preventing this type of thing from happening.
> >>Otherwise, there ends up being a whole bunch of duplication and same
> >>terms being used for entirely different things.
> >>
> >
> >Well, I believe the names and namespaces used by Senlin are very clean.
> 
> Note that above I referred to the Senlin *API*:
> 
> https://developer.openstack.org/api-ref/clustering/
> 
> The use of generic terms like "cluster", "node", "policy",
> "profile", "action", and "event" as *top-level resources in the REST
> API* are what I was warning about.
> 
> >Please see the following outputs. All commands are contained in the
> >cluster namespace to avoid any conflicts with any other projects.
> 
> Right, but I was talking about the REST API.
> 
> >On the other hand, is there any document stating that Magnum is about
> >providing clustering service?
> 
> What exactly is a clustering service?
> 
> I mean, Galera has a clustering service. Pacemaker has a clustering
> service. k8s has a clustering service. etcd has a clustering
> service. Zookeeper has a clustering service.
> 
> Senlin is an API that allows a user to group *virtual machines*
> together and expand or shrink that group of VMs. It's basically the
> old Heat autoscaling API done properly. There's a *lot* to like
> about Senlin's API and implementation.

Okay, I see where the confusion comes from. Senlin is designed to be a
*generic clustering service* that can create and manage arbitrary
resource types. It can create VM groups and manage the scaling of such
groups properly. It can provide VM HA based on resource redundancy.
It models load-balancing support as a policy that can be attached to
and detached from a VM cluster.

Senlin manages "nodes" created from a "profile". A VM instance is only
one of the supported profile types. Today Senlin also supports clusters
of Heat stacks and clusters of Docker containers, and there are efforts
underway to manage bare-metal servers as well.

The team also uses "resource pools" and "clusters" interchangeably,
because that IS what the service is about. Calling Senlin a resource
pool service might be even more confusing, right?

- Qiming

> However, it would have been more appropriate (and forward-looking)
> to call Senlin's namespace "instance group" or "server group" than
> the generic term "cluster".
> 
> >  Why does Magnum care so much about the top
> >level noun if it is not its business?
> 
> Because Magnum uses the term "cluster" as a top-level resource in
> its own REST API:
> 
> http://git.openstack.org/cgit/openstack/magnum/tree/magnum/api/controllers/v1/cluster.py
> 
> The generic term "cluster" that Magnum uses should really be called
> "coe group" or "container engine group" or "container service group"
> or something like that, to better indicate what exactly is being
> operated on.
> 
> Best,
> -jay
> 
> >$ openstack --help | grep cluster
> >
> >  --os-clustering-api-version 
> >
> >  cluster action list  List actions.
> >  cluster action show  Show detailed info about the specified action.
> >  cluster build info  Retrieve build information.
> >  cluster check  Check the cluster(s).
> >  cluster collect  Collect attributes across a cluster.
> >  cluster create  Create the cluster.
> >  cluster delete  Delete the cluster(s).
> >  cluster event list  List events.
> >  cluster event show  Describe the event.
> >  cluster expand  Scale out a cluster by the specified number of nodes.
> >  cluster list   List the user's clusters.
> >  cluster members add  Add specified nodes to cluster.
> >  cluster members del  Delete specified nodes from cluster.
> >  cluster members list  List nodes from cluster.
> >  cluster members replace  Replace the nodes in a cluster with
> >  specified nodes.
> >  cluster node check  Check the node(s).
> >  cluster node create  Create the node.
> >  cluster node delete  Delete the 

Re: [openstack-dev] [tripleo] propose Alex Schultz core on tripleo-heat-templates

2017-03-21 Thread Emilien Macchi
On Mon, Mar 13, 2017 at 2:26 PM, John Trowbridge  wrote:
>
>
> On 03/13/2017 10:30 AM, Emilien Macchi wrote:
>> Hi,
>>
>> Alex is already core on instack-undercloud and puppet-tripleo.
>
> +1 it is actually a bit odd to be +2 on puppet-tripleo without being +2
> on THT, since so many changes span the two repos.
>
>

Positive votes and no negative feedback, welcome to THT core!

Thanks Alex for your hard work,
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [craton] Nomination of Thomas Maddox as Craton core

2017-03-21 Thread Ian Cordasco
+1. Welcome to the team, Thomas

On Mar 21, 2017 3:43 PM, "Jim Baker"  wrote:

> *I nominate Thomas Maddox as a core reviewer for the Craton project.*
>
> Thomas has shown extensive knowledge of Craton, working across a range of
> issues in the core service, including down to the database modeling; the
> client; and corresponding bugs, blueprints, and specs. Perhaps most notably
> he has contributed a number of end-to-end patches, such as his work with
> project support.
> https://review.openstack.org/#/q/owner:thomas.maddox
>
> He has also expertly helped across a range of reviews, while always being
> amazingly positive with other team members and potential contributors:
> https://review.openstack.org/#/q/reviewer:thomas.maddox
>
> Other details can be found here on his contributions:
> http://stackalytics.com/report/users/thomas-maddox
>
> In my opinion, Thomas has proven that he will make a fantastic addition to
> the core review team. In particular, I'm confident Thomas will help further
> improve the velocity for our project as a whole as a core reviewer. I hope
> others concur with me in this assessment!
>
> - Jim
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-21 Thread Alex Schultz
On Tue, Mar 21, 2017 at 5:35 PM, John Dickinson  wrote:
>
>
> On 21 Mar 2017, at 15:34, Alex Schultz wrote:
>
>> On Tue, Mar 21, 2017 at 3:45 PM, John Dickinson  wrote:
>>> I've been following this thread, but I must admit I seem to have missed 
>>> something.
>>>
>>> What problem is being solved by storing per-server service configuration 
>>> options in an external distributed CP system that is currently not possible 
>>> with the existing pattern of using local text files?
>>>
>>
>> This effort is partially to help the path to containerization where we
>> are delivering the service code via container but don't want to
>> necessarily deliver the configuration in the same fashion.  It's about
>> ease of configuration where moving service -> config files (on many
>> hosts/containers) to service -> config via etcd (single source
>> cluster).  It's also about an alternative to configuration management
>> where today we have many tools handling the files in various ways
>> (templates, from repo, via code providers) and trying to come to a
>> more unified way of representing the configuration such that the end
>> result is the same for every deployment tool.  All tools load configs
>> into $place and services can be configured to talk to $place.  It
>> should be noted that configuration files won't go away because many of
>> the companion services still rely on them (rabbit/mysql/apache/etc) so
>> we're really talking about services that currently use oslo.
>
> Thanks for the explanation!
>
> So in the future, you expect a node in a clustered OpenStack service to be 
> deployed and run as a container, and then that node queries a centralized 
> etcd (or other) k/v store to load config options. And other services running 
> in the (container? cluster?) will load config from local text files managed 
> in some other way.

No, the goal is that in the etcd mode it may not be necessary to load
config files locally at all.  That being said, there would still be
support for loading some configuration from a file while optionally
providing a kv store as another config source, e.g. 'service --config-file
/etc/service/service.conf --config-etcd proto://ip:port/slug'.

>
> No wait. It's not the *services* that will load the config from a kv 
> store--it's the config management system? So in the process of deploying a 
> new container instance of a particular service, the deployment tool will pull 
> the right values out of the kv system and inject those into the container, 
> I'm guessing as a local text file that the service loads as normal?
>

No, the thought is to have the services pull their configs from the kv
store via oslo.config.  The point is hopefully not to require
configuration files at all for containers.  The container would be told
where to pull its configs from (i.e. http://11.1.1.1:2730/magic/ or
/etc/myconfigs/).  At that point it just becomes another place for
oslo.config to load configuration from.  Configuration management comes
in as a way to load the configs either into a file or into etcd.  Many
operators (and deployment tools) are already using some form of
configuration management, so if we can integrate a kv store output
option, adoption becomes much easier than making everyone start from
scratch.
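
As a purely illustrative sketch (the key layout and endpoint below are my own
assumptions, not an agreed convention), a deployment tool seeding oslo-style
options into etcd with the python-etcd3 client might look like:

    # Hedged sketch: seed oslo-style config options into etcd.
    import etcd3

    client = etcd3.client(host='11.1.1.1', port=2379)

    options = {
        '/nova/DEFAULT/debug': 'true',
        '/nova/database/connection': 'mysql+pymysql://nova:secret@db/nova',
    }

    for key, value in options.items():
        client.put(key, value)

    # A service (or a future oslo.config etcd backend, if one is written)
    # could then read the same keys back instead of parsing a local nova.conf.
    value, _meta = client.get('/nova/DEFAULT/debug')
    print(value)  # b'true'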

> This means you could have some (OpenStack?) service for inventory management 
> (like Karbor) that is seeding the kv store, the cloud infrastructure software 
> itself is "cloud aware" and queries the central distributed kv system for the 
> correct-right-now config options, and the cloud service itself gets all the 
> benefits of dynamic scaling of available hardware resources. That's pretty 
> cool. Add hardware to the inventory, the cloud infra itself expands to make 
> it available. Hardware fails, and the cloud infra resizes to adjust. Apps 
> running on the infra keep doing their thing consuming the resources. It's 
> clouds all the way down :-)
>
> Despite sounding pretty interesting, it also sounds like a lot of extra 
> complexity. Maybe it's worth it. I don't know.
>

Yeah, there's extra complexity, at least in the deployment, management,
and monitoring of the new service (or maybe not).  Keeping configuration
files in sync across thousands of nodes (or containers) can be just as
hard, however.

> Thanks again for the explanation.
>
>
> --John
>
>
>
>
>>
>> Thanks,
>> -Alex
>>
>>>
>>> --John
>>>
>>>
>>>
>>>
>>> On 21 Mar 2017, at 14:26, Davanum Srinivas wrote:
>>>
 Jay,

 the /v3alpha HTTP API  (grpc-gateway) supports watch
 https://coreos.com/etcd/docs/latest/dev-guide/apispec/swagger/rpc.swagger.json

 -- Dims

 On Tue, Mar 21, 2017 at 5:22 PM, Jay Pipes  wrote:
> On 03/21/2017 04:29 PM, Clint Byrum wrote:
>>
>> Excerpts from Doug Hellmann's message of 2017-03-15 15:35:13 -0400:
>>>
>>> Excerpts from Thomas Herve's message of 2017-03-15 09:41:16 +0100:

 On Wed, Mar 15, 2017 at 12:05 AM, Joshua Harlow 

Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-21 Thread John Dickinson


On 21 Mar 2017, at 15:34, Alex Schultz wrote:

> On Tue, Mar 21, 2017 at 3:45 PM, John Dickinson  wrote:
>> I've been following this thread, but I must admit I seem to have missed 
>> something.
>>
>> What problem is being solved by storing per-server service configuration 
>> options in an external distributed CP system that is currently not possible 
>> with the existing pattern of using local text files?
>>
>
> This effort is partially to help the path to containerization where we
> are delivering the service code via container but don't want to
> necessarily deliver the configuration in the same fashion.  It's about
> ease of configuration where moving service -> config files (on many
> hosts/containers) to service -> config via etcd (single source
> cluster).  It's also about an alternative to configuration management
> where today we have many tools handling the files in various ways
> (templates, from repo, via code providers) and trying to come to a
> more unified way of representing the configuration such that the end
> result is the same for every deployment tool.  All tools load configs
> into $place and services can be configured to talk to $place.  It
> should be noted that configuration files won't go away because many of
> the companion services still rely on them (rabbit/mysql/apache/etc) so
> we're really talking about services that currently use oslo.

Thanks for the explanation!

So in the future, you expect a node in a clustered OpenStack service to be 
deployed and run as a container, and then that node queries a centralized etcd 
(or other) k/v store to load config options. And other services running in the 
(container? cluster?) will load config from local text files managed in some 
other way.

No wait. It's not the *services* that will load the config from a kv 
store--it's the config management system? So in the process of deploying a new 
container instance of a particular service, the deployment tool will pull the 
right values out of the kv system and inject those into the container, I'm 
guessing as a local text file that the service loads as normal?

This means you could have some (OpenStack?) service for inventory management 
(like Karbor) that is seeding the kv store, the cloud infrastructure software 
itself is "cloud aware" and queries the central distributed kv system for the 
correct-right-now config options, and the cloud service itself gets all the 
benefits of dynamic scaling of available hardware resources. That's pretty 
cool. Add hardware to the inventory, the cloud infra itself expands to make it 
available. Hardware fails, and the cloud infra resizes to adjust. Apps running 
on the infra keep doing their thing consuming the resources. It's clouds all 
the way down :-)

Despite sounding pretty interesting, it also sounds like a lot of extra 
complexity. Maybe it's worth it. I don't know.

Thanks again for the explanation.


--John




>
> Thanks,
> -Alex
>
>>
>> --John
>>
>>
>>
>>
>> On 21 Mar 2017, at 14:26, Davanum Srinivas wrote:
>>
>>> Jay,
>>>
>>> the /v3alpha HTTP API  (grpc-gateway) supports watch
>>> https://coreos.com/etcd/docs/latest/dev-guide/apispec/swagger/rpc.swagger.json
>>>
>>> -- Dims
>>>
>>> On Tue, Mar 21, 2017 at 5:22 PM, Jay Pipes  wrote:
 On 03/21/2017 04:29 PM, Clint Byrum wrote:
>
> Excerpts from Doug Hellmann's message of 2017-03-15 15:35:13 -0400:
>>
>> Excerpts from Thomas Herve's message of 2017-03-15 09:41:16 +0100:
>>>
>>> On Wed, Mar 15, 2017 at 12:05 AM, Joshua Harlow 
>>> wrote:
>>>
 * How does reloading work (does it)?
>>>
>>>
>>> No. There is nothing that we can do in oslo that will make services
>>> magically reload configuration. It's also unclear to me if that's
>>> something to do. In a containerized environment, wouldn't it be
>>> simpler to deploy new services? Otherwise, supporting signal based
>>> reload as we do today should be trivial.
>>
>>
>> Reloading works today with files, that's why the question is important
>> to think through. There is a special flag to set on options that are
>> "mutable" and then there are functions within oslo.config to reload.
>> Those are usually triggered when a service gets a SIGHUP or something
>> similar.
>>
>> We need to decide what happens to a service's config when that API
>> is used and the backend is etcd. Maybe nothing, because every time
>> any config option is accessed the read goes all the way through to
>> etcd? Maybe a warning is logged because we don't support reloads?
>> Maybe an error is logged? Or maybe we flush the local cache and start
>> reading from etcd on future accesses?
>>
>
> etcd provides the ability to "watch" keys. So one would start a thread
> that just watches the keys you want to reload on, and when they change
> that thread will see a response and can 

Re: [openstack-dev] [OpenStack-Infra] [infra][security] Encryption in Zuul v3

2017-03-21 Thread James E. Blair
David Moreau Simard  writes:

> I don't have a horse in this race or a strong opinion on the topic, in
> fact I'm admittedly not very knowledgeable when it comes to low-level
> encryption things.
>
> However, I did have a question, even if just to generate discussion.
> Did we ever consider simply leaving secrets out of Zuul and offloading
> that "burden" to something else ?
>
> For example, end-users could use something like git-crypt [1] to crypt
> files in their git repos and Zuul could have a mean to decrypt them at
> runtime.
> There is also ansible-vault [2] that could perhaps be leveraged.
>
> Just trying to make sure we're not re-inventing any wheels; implementing
> crypto is usually not straightforward.

We did talk about some other options, though unfortunately it doesn't
look like a lot of that made it into the spec reviews.  Among them, it's
probably worth noting that there's nothing preventing a Zuul deployment
from relying on some third-party secret system -- if you can use it with
Ansible, you should be able to use it with Zuul.  But we also want Zuul
to have these features out of the box, and, wearing our sysadmin hits,
we're really keen on having source control and code review for the
system secrets for the OpenStack project.

Vault alone doesn't meet our requirements here because it relies on
symmetric encryption, which means we need users to share a key with
Zuul, implying an extra service with out-of-band authn/authz.  However,
we *could* use our PKCS#1 style system to share a vault key with Zuul.
I don't think that has come up as a suggestion yet, but seems like it
would work.
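
As a rough sketch of that general idea (this is not Zuul's actual
implementation; the key size and library choice are just assumptions), an RSA
public key can encrypt a small secret blob so that only the ciphertext lives
in a reviewed plaintext file:

    # Hedged sketch using the 'cryptography' library: encrypt a secret blob
    # with a public key so it can be committed inside a plaintext config file.
    import base64
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # In practice the key pair would belong to the CI system; generating one
    # here only keeps the example self-contained.
    private_key = rsa.generate_private_key(
        public_exponent=65537, key_size=4096, backend=default_backend())
    public_key = private_key.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    ciphertext = public_key.encrypt(b"super-secret-credential", oaep)

    # This base64 blob is what would be stored in the otherwise plaintext file.
    print(base64.b64encode(ciphertext).decode())

    # Only the holder of the private key can recover the secret.
    assert private_key.decrypt(ciphertext, oaep) == b"super-secret-credential"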

Git-crypt in GPG mode, at first glance, looks like it could work fairly
well for this.  It encrypts entire files, so we would have to rework how
secrets are stored (we encrypt blobs within plaintext files) and add
another file to the list of zuul config files (e.g., .zuul.yaml.gpg).
But aside from that, I think it could work and may be worth further
exploration.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Release Naming for R - it's that time again!

2017-03-21 Thread Monty Taylor
Hey everybody,

It's your favorite time of the year - it's time for us to pick a name
for our "R" release.

Since the associated Summit will be in Vancouver, the Geographic
Location has been chosen as "British Columbia".

Nominations are now open. Please add suitable names to
https://wiki.openstack.org/wiki/Release_Naming/R_Proposals between now
and 2017-03-29 23:59:59 UTC.

In case you don't remember the rules:

* Each release name must start with the letter of the ISO basic Latin
alphabet following the initial letter of the previous release, starting
with the initial release of "Austin". After "Z", the next name should
start with "A" again.

* The name must be composed only of the 26 characters of the ISO basic
Latin alphabet. Names which can be transliterated into this character
set are also acceptable.

* The name must refer to the physical or human geography of the region
encompassing the location of the OpenStack design summit for the
corresponding release. The exact boundaries of the geographic region
under consideration must be declared before the opening of nominations,
as part of the initiation of the selection process.

* The name must be a single word with a maximum of 10 characters. Words
that describe the feature should not be included, so "Foo City" or "Foo
Peak" would both be eligible as "Foo".

Names which do not meet these criteria but otherwise sound really cool
should be added to a separate section of the wiki page and the TC may
make an exception for one or more of them to be considered in the
Condorcet poll. The naming official is responsible for presenting the
list of exceptional names for consideration to the TC before the poll opens.

Let the naming begin.

Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-21 Thread Alex Schultz
On Tue, Mar 21, 2017 at 3:45 PM, John Dickinson  wrote:
> I've been following this thread, but I must admit I seem to have missed 
> something.
>
> What problem is being solved by storing per-server service configuration 
> options in an external distributed CP system that is currently not possible 
> with the existing pattern of using local text files?
>

This effort is partially to help the path to containerization, where we
are delivering the service code via a container but don't want to
necessarily deliver the configuration in the same fashion.  It's about
ease of configuration: moving from service -> config files (on many
hosts/containers) to service -> config via etcd (a single source
cluster).  It's also about an alternative to configuration management,
where today we have many tools handling the files in various ways
(templates, from a repo, via code providers), and trying to come to a
more unified way of representing the configuration such that the end
result is the same for every deployment tool.  All tools load configs
into $place and services can be configured to talk to $place.  It
should be noted that configuration files won't go away, because many of
the companion services still rely on them (rabbit/mysql/apache/etc), so
we're really talking about services that currently use oslo.

Thanks,
-Alex

>
> --John
>
>
>
>
> On 21 Mar 2017, at 14:26, Davanum Srinivas wrote:
>
>> Jay,
>>
>> the /v3alpha HTTP API  (grpc-gateway) supports watch
>> https://coreos.com/etcd/docs/latest/dev-guide/apispec/swagger/rpc.swagger.json
>>
>> -- Dims
>>
>> On Tue, Mar 21, 2017 at 5:22 PM, Jay Pipes  wrote:
>>> On 03/21/2017 04:29 PM, Clint Byrum wrote:

 Excerpts from Doug Hellmann's message of 2017-03-15 15:35:13 -0400:
>
> Excerpts from Thomas Herve's message of 2017-03-15 09:41:16 +0100:
>>
>> On Wed, Mar 15, 2017 at 12:05 AM, Joshua Harlow 
>> wrote:
>>
>>> * How does reloading work (does it)?
>>
>>
>> No. There is nothing that we can do in oslo that will make services
>> magically reload configuration. It's also unclear to me if that's
>> something to do. In a containerized environment, wouldn't it be
>> simpler to deploy new services? Otherwise, supporting signal based
>> reload as we do today should be trivial.
>
>
> Reloading works today with files, that's why the question is important
> to think through. There is a special flag to set on options that are
> "mutable" and then there are functions within oslo.config to reload.
> Those are usually triggered when a service gets a SIGHUP or something
> similar.
>
> We need to decide what happens to a service's config when that API
> is used and the backend is etcd. Maybe nothing, because every time
> any config option is accessed the read goes all the way through to
> etcd? Maybe a warning is logged because we don't support reloads?
> Maybe an error is logged? Or maybe we flush the local cache and start
> reading from etcd on future accesses?
>

 etcd provides the ability to "watch" keys. So one would start a thread
 that just watches the keys you want to reload on, and when they change
 that thread will see a response and can reload appropriately.

 https://coreos.com/etcd/docs/latest/dev-guide/api_reference_v3.html
>>>
>>>
>>> Yep. Unfortunately, you won't be able to start an eventlet greenthread to
>>> watch an etcd3/gRPC key. The python grpc library is incompatible with
>>> eventlet/gevent's monkeypatching technique and causes a complete program
>>> hang if you try to communicate with the etcd3 server from a greenlet. Fun!
>>>
>>> So, either use etcd2 (the no-longer-being-worked-on HTTP API) or don't use
>>> eventlet in your client service.
>>>
>>> Best,
>>> -jay
>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>>
>> --
>> Davanum Srinivas :: https://twitter.com/dims
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

[openstack-dev] [deployment][forum] proposing a session about future of configuration management - ops + devs wanted!

2017-03-21 Thread Emilien Macchi
OpenStack developers and operators who work on deployments: we need you.

http://forumtopics.openstack.org/cfp/details/15

Abstract: I would like to bring developers and operators into a room to
discuss the future of Configuration Management in OpenStack.

Until now, we haven't done a good job of collaborating on how we do
configuration management in a consistent way across OpenStack
Deployment Tools.
Some efforts started to emerge in Pike:
https://etherpad.openstack.org/p/deployment-pike
And some projects like TripleO started some discussion on future of
configuration management:
https://etherpad.openstack.org/p/tripleo-etcd-transition

In this session, we will discuss our common challenges and identify
actions where projects could collaborate.

Desired people:
- Folks from Deployment Tools (TripleO, Kolla, OSA, Kubernetes, etc)
- Operators who deploy OpenStack

Moderator: me + any volunteer.

Any question on this proposal is very welcome by using this thread.

Thanks for reading so far and I'm looking forward to making progress
on this topic in Boston.
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic] Translations removal

2017-03-21 Thread Taryma, Joanna
Hi team,

As discussed on Monday, logged messages shouldn't be translated anymore.
Exception messages should still be translated.
While removing usages of _LE, _LW, and _LI should be fairly easy, some usages
of _ may cause issues.

Some messages in the code are declared with the '_' function and used for both
the logger and the exception. This has to be changed, so that we don't end up
with some log entries translated because of that.
The best option in terms of code redundancy would be something like:

    msg = "<message with %(key)s substitution>"
    LOG.error(msg, {<key>: <value>})
    raise Exception(_(msg) % {<key>: <value>})

However, pep8 does not accept passing a variable to the translation functions,
so this results in an 'H701 Empty localization string' error.
Possible options to handle that:

1)  Duplicate the messages (a minimal sketch follows this list):

        LOG.error("<message>", {<key>: <value>})
        raise Exception(_("<message>") % {<key>: <value>})

2)  Ignore this error.

3)  Talk to the hacking folks about a possible upgrade of this check.

4)  Pass translated text to LOG in such cases.
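
A minimal, hedged sketch of option 1 (the oslo.i18n/oslo.log usage and the
'ironic' translation domain below are illustrative, not the final patch):

    # Log the raw (untranslated) message, raise with the translated one.
    import oslo_i18n
    from oslo_log import log as logging

    _ = oslo_i18n.TranslatorFactory(domain='ironic').primary
    LOG = logging.getLogger(__name__)


    def power_on(node_id):
        # ... imagine the power action just failed ...
        LOG.error("Failed to power on node %(node)s", {'node': node_id})
        raise RuntimeError(
            _("Failed to power on node %(node)s") % {'node': node_id})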

I’d personally vote for 2. What are your thoughts?

Kind regards,
Joanna

[0] 
http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2017-03-21.log.html#t2017-03-21T14:00:49
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-21 Thread Fei Long Wang
As far as I know, most of the Zaqar team members won't be in Boston. But I
will be there, so please help put Zaqar on the list if there is a slot
available. Thanks.


On 16/03/17 07:20, Kendall Nelson wrote:
> Hello All!
>
> As you may have seen in a previous thread [1] the Forum will offer
> project on-boarding rooms! The idea is that these rooms will provide
> a place for new contributors to a given project to find out more about
> the project, people, and code base. The slots will be spread out
> throughout the whole Summit and will be 90 min long.
>
> We have very limited slots available for interested projects, so it
> will be a first come first served process. Let me know if you are
> interested and I will reserve a slot for you if there are spots left.
>
> - Kendall Nelson (diablo_rojo)
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2017-March/113459.html
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Cheers & Best regards,
Feilong Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-- 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-21 Thread John Dickinson
I've been following this thread, but I must admit I seem to have missed 
something.

What problem is being solved by storing per-server service configuration 
options in an external distributed CP system that is currently not possible 
with the existing pattern of using local text files?


--John




On 21 Mar 2017, at 14:26, Davanum Srinivas wrote:

> Jay,
>
> the /v3alpha HTTP API  (grpc-gateway) supports watch
> https://coreos.com/etcd/docs/latest/dev-guide/apispec/swagger/rpc.swagger.json
>
> -- Dims
>
> On Tue, Mar 21, 2017 at 5:22 PM, Jay Pipes  wrote:
>> On 03/21/2017 04:29 PM, Clint Byrum wrote:
>>>
>>> Excerpts from Doug Hellmann's message of 2017-03-15 15:35:13 -0400:

 Excerpts from Thomas Herve's message of 2017-03-15 09:41:16 +0100:
>
> On Wed, Mar 15, 2017 at 12:05 AM, Joshua Harlow 
> wrote:
>
>> * How does reloading work (does it)?
>
>
> No. There is nothing that we can do in oslo that will make services
> magically reload configuration. It's also unclear to me if that's
> something to do. In a containerized environment, wouldn't it be
> simpler to deploy new services? Otherwise, supporting signal based
> reload as we do today should be trivial.


 Reloading works today with files, that's why the question is important
 to think through. There is a special flag to set on options that are
 "mutable" and then there are functions within oslo.config to reload.
 Those are usually triggered when a service gets a SIGHUP or something
 similar.

 We need to decide what happens to a service's config when that API
 is used and the backend is etcd. Maybe nothing, because every time
 any config option is accessed the read goes all the way through to
 etcd? Maybe a warning is logged because we don't support reloads?
 Maybe an error is logged? Or maybe we flush the local cache and start
 reading from etcd on future accesses?

>>>
>>> etcd provides the ability to "watch" keys. So one would start a thread
>>> that just watches the keys you want to reload on, and when they change
>>> that thread will see a response and can reload appropriately.
>>>
>>> https://coreos.com/etcd/docs/latest/dev-guide/api_reference_v3.html
>>
>>
>> Yep. Unfortunately, you won't be able to start an eventlet greenthread to
>> watch an etcd3/gRPC key. The python grpc library is incompatible with
>> eventlet/gevent's monkeypatching technique and causes a complete program
>> hang if you try to communicate with the etcd3 server from a greenlet. Fun!
>>
>> So, either use etcd2 (the no-longer-being-worked-on HTTP API) or don't use
>> eventlet in your client service.
>>
>> Best,
>> -jay
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> -- 
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][tripleo] initial discussion for a new periodic pipeline

2017-03-21 Thread Jeremy Stanley
On 2017-03-21 13:34:50 -0400 (-0400), Paul Belanger wrote:
[...]
> Today RDO does snapshot images.
[...]

Worth pointing out, if it's using Nodepool to do that, support for
snapshot images has been deprecated for a while and was dropped
completely in the latest release(s?).
http://lists.openstack.org/pipermail/openstack-infra/2016-December/004974.html
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ptl] Action required ! - Please submit Boston Forum sessions before April 2nd

2017-03-21 Thread Matt Riedemann

On 3/21/2017 4:09 PM, Lance Bragstad wrote:

I have a couple questions in addition to Matt's.

The keystone group is still trying to figure out what this means for us
and we discussed it in today's meeting [0]. Based on early feedback,
we're going to have less developer presence at the Forum than we did at
the PTG. Are these formal sessions intended to be the same format as
design sessions at previous summits?

In the past, when we've organized ourselves for summit design sessions,
we typically got an email saying "you have these rooms at these times".
From there we filter our topics into like categories and shuffle them
around until the schedule looks right.

With the direction of the PTG, I'm not sure many developers were
expecting to have those types of technical discussions at the forum
(which could be why early developer attendance confirmation is lower).

Am I misunderstanding something?

[0] 
http://eavesdrop.openstack.org/meetings/keystone/2017/keystone.2017-03-21-18.00.log.html#l-150



We talked about this a bit in the nova channel today too.

I'm approaching this like the cross-project days at past design summits,
where we'd propose topics, the TC would vote on them, and they'd get
scheduled; the rest of the design summit was for vertical-team discussions,
where we said how many rooms we wanted and then scheduled our sessions
ourselves in cheddar.


For the Forum, I think we'll be submitting maybe three topics that I 
know of right now:


* cells (mostly project-specific but involves operators)
* placement (cross-project and operator involvement)
* limits (whole-of-openstack and operators/users)

Beyond those three, I don't plan on requesting any other nova-specific 
sessions (those aren't all nova-specific anyway).


For the nova developers who will be there (a much smaller number than at
previous summits, because of the format and because we already did the PTG),
I assume we'll talk about Pike development status and priorities during the
downtime between the actual scheduled sessions that involve more than just
nova; in other words, we'll cover the nova-specific stuff in the free room
area.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-21 Thread Davanum Srinivas
Jay,

the /v3alpha HTTP API  (grpc-gateway) supports watch
https://coreos.com/etcd/docs/latest/dev-guide/apispec/swagger/rpc.swagger.json

-- Dims

On Tue, Mar 21, 2017 at 5:22 PM, Jay Pipes  wrote:
> On 03/21/2017 04:29 PM, Clint Byrum wrote:
>>
>> Excerpts from Doug Hellmann's message of 2017-03-15 15:35:13 -0400:
>>>
>>> Excerpts from Thomas Herve's message of 2017-03-15 09:41:16 +0100:

 On Wed, Mar 15, 2017 at 12:05 AM, Joshua Harlow 
 wrote:

> * How does reloading work (does it)?


 No. There is nothing that we can do in oslo that will make services
 magically reload configuration. It's also unclear to me if that's
 something to do. In a containerized environment, wouldn't it be
 simpler to deploy new services? Otherwise, supporting signal based
 reload as we do today should be trivial.
>>>
>>>
>>> Reloading works today with files, that's why the question is important
>>> to think through. There is a special flag to set on options that are
>>> "mutable" and then there are functions within oslo.config to reload.
>>> Those are usually triggered when a service gets a SIGHUP or something
>>> similar.
>>>
>>> We need to decide what happens to a service's config when that API
>>> is used and the backend is etcd. Maybe nothing, because every time
>>> any config option is accessed the read goes all the way through to
>>> etcd? Maybe a warning is logged because we don't support reloads?
>>> Maybe an error is logged? Or maybe we flush the local cache and start
>>> reading from etcd on future accesses?
>>>
>>
>> etcd provides the ability to "watch" keys. So one would start a thread
>> that just watches the keys you want to reload on, and when they change
>> that thread will see a response and can reload appropriately.
>>
>> https://coreos.com/etcd/docs/latest/dev-guide/api_reference_v3.html
>
>
> Yep. Unfortunately, you won't be able to start an eventlet greenthread to
> watch an etcd3/gRPC key. The python grpc library is incompatible with
> eventlet/gevent's monkeypatching technique and causes a complete program
> hang if you try to communicate with the etcd3 server from a greenlet. Fun!
>
> So, either use etcd2 (the no-longer-being-worked-on HTTP API) or don't use
> eventlet in your client service.
>
> Best,
> -jay
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-21 Thread Jay Pipes

On 03/21/2017 04:29 PM, Clint Byrum wrote:

Excerpts from Doug Hellmann's message of 2017-03-15 15:35:13 -0400:

Excerpts from Thomas Herve's message of 2017-03-15 09:41:16 +0100:

On Wed, Mar 15, 2017 at 12:05 AM, Joshua Harlow  wrote:


* How does reloading work (does it)?


No. There is nothing that we can do in oslo that will make services
magically reload configuration. It's also unclear to me if that's
something to do. In a containerized environment, wouldn't it be
simpler to deploy new services? Otherwise, supporting signal based
reload as we do today should be trivial.


Reloading works today with files, that's why the question is important
to think through. There is a special flag to set on options that are
"mutable" and then there are functions within oslo.config to reload.
Those are usually triggered when a service gets a SIGHUP or something similar.
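
For reference, a minimal sketch of that file-based flow as it exists today
(the option name and config file path are assumptions):

    # Hedged sketch: a mutable option reloaded on SIGHUP via oslo.config.
    import signal

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        # Only options marked mutable=True are re-read by mutate_config_files().
        cfg.IntOpt('workers', default=4, mutable=True),
    ])


    def _reload(signum, frame):
        # Re-reads the config files and applies changes to mutable options.
        CONF.mutate_config_files()


    signal.signal(signal.SIGHUP, _reload)

    if __name__ == '__main__':
        # Assumes service.conf exists next to the script.
        CONF(['--config-file', 'service.conf'])
        print(CONF.workers)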

We need to decide what happens to a service's config when that API
is used and the backend is etcd. Maybe nothing, because every time
any config option is accessed the read goes all the way through to
etcd? Maybe a warning is logged because we don't support reloads?
Maybe an error is logged? Or maybe we flush the local cache and start
reading from etcd on future accesses?



etcd provides the ability to "watch" keys. So one would start a thread
that just watches the keys you want to reload on, and when they change
that thread will see a response and can reload appropriately.

https://coreos.com/etcd/docs/latest/dev-guide/api_reference_v3.html
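
A hedged sketch of what that might look like with the python-etcd3 client,
using a plain OS thread rather than a greenthread (see the caveat below), and
with the key name and endpoint as assumptions:

    # Hedged sketch: watch an etcd key and trigger a reload when it changes.
    import threading

    import etcd3


    def watch_config(client, key, on_change):
        # client.watch() blocks on the watch stream; run it in its own thread.
        events, cancel = client.watch(key)
        for event in events:
            # Each event carries the new value of the watched key.
            on_change(event.key, event.value)


    def reload_option(key, value):
        print('config changed: %s = %s' % (key, value))
        # ... trigger the service's reload path here ...


    client = etcd3.client(host='127.0.0.1', port=2379)
    watcher = threading.Thread(
        target=watch_config,
        args=(client, '/nova/DEFAULT/debug', reload_option))
    watcher.daemon = True
    watcher.start()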


Yep. Unfortunately, you won't be able to start an eventlet greenthread 
to watch an etcd3/gRPC key. The python grpc library is incompatible with 
eventlet/gevent's monkeypatching technique and causes a complete program 
hang if you try to communicate with the etcd3 server from a greenlet. Fun!


So, either use etcd2 (the no-longer-being-worked-on HTTP API) or don't 
use eventlet in your client service.


Best,
-jay

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ptl] Action required ! - Please submit Boston Forum sessions before April 2nd

2017-03-21 Thread Lance Bragstad
I have a couple questions in addition to Matt's.

The keystone group is still trying to figure out what this means for us and
we discussed it in today's meeting [0]. Based on early feedback, we're
going to have less developer presence at the Forum than we did at the PTG.
Are these formal sessions intended to have the same format as the design
sessions at previous summits?

In the past, when we organized ourselves for summit design sessions, we
typically got an email saying "you have these rooms at these times". From
there we filtered our topics into like categories and shuffled them around
until the schedule looked right.

With the direction of the PTG, I'm not sure many developers were expecting
to have those types of technical discussions at the forum (which could be
why early developer attendance confirmation is lower).

Am I misunderstanding something?

[0]
http://eavesdrop.openstack.org/meetings/keystone/2017/keystone.2017-03-21-18.00.log.html#l-150

On Tue, Mar 21, 2017 at 3:40 PM, Matt Riedemann  wrote:

> On 3/21/2017 11:40 AM, Emilien Macchi wrote:
>
>> Sorry for duplicating the original e-mail from User Committee, but we
>> want to make sure all projects are aware about the deadline.
>>
>> http://lists.openstack.org/pipermail/user-committee/2017-Mar
>> ch/001856.html
>>
>> PTLs (and everyone), please make sure topics are submitted before April
>> 2nd.
>> Please let us know any question,
>>
>> Thanks!
>>
>>
> Do we need to submit formal sessions to forumtopics.o.o for the upstream
> contributor / new-comer session blocks laid out in Kendall's email? I had
> assumed we already said 'yes we want a slot' and then Kendall is going to
> sort that all out.
>
> --
>
> Thanks,
>
> Matt
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ptl] Action required ! - Please submit Boston Forum sessions before April 2nd

2017-03-21 Thread Matt Riedemann

On 3/21/2017 11:40 AM, Emilien Macchi wrote:

Sorry for duplicating the original e-mail from User Committee, but we
want to make sure all projects are aware about the deadline.

http://lists.openstack.org/pipermail/user-committee/2017-March/001856.html

PTLs (and everyone), please make sure topics are submitted before April 2nd.
Please let us know any question,

Thanks!



Do we need to submit formal sessions to forumtopics.o.o for the upstream 
contributor / new-comer session blocks laid out in Kendall's email? I 
had assumed we already said 'yes we want a slot' and then Kendall is 
going to sort that all out.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [craton] Nomination of Thomas Maddox as Craton core

2017-03-21 Thread Jim Baker
*I nominate Thomas Maddox as a core reviewer for the Craton project.*

Thomas has shown extensive knowledge of Craton, working across a range of
issues in the core service, including down to the database modeling; the
client; and corresponding bugs, blueprints, and specs. Perhaps most notably
he has contributed a number of end-to-end patches, such as his work with
project support.
https://review.openstack.org/#/q/owner:thomas.maddox

He has also expertly helped across a range of reviews, while always being
amazingly positive with other team members and potential contributors:
https://review.openstack.org/#/q/reviewer:thomas.maddox

Other details can be found here on his contributions:
http://stackalytics.com/report/users/thomas-maddox

In my opinion, Thomas has proven that he will make a fantastic addition to
the core review team. In particular, I'm confident Thomas will help further
improve the velocity for our project as a whole as a core reviewer. I hope
others concur with me in this assessment!

- Jim
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-21 Thread Clint Byrum
Excerpts from Sean Dague's message of 2017-03-15 08:54:55 -0400:
> On 03/15/2017 02:16 AM, Clint Byrum wrote:
> > Excerpts from Monty Taylor's message of 2017-03-15 04:36:24 +0100:
> >> On 03/14/2017 06:04 PM, Davanum Srinivas wrote:
> >>> Team,
> >>>
> >>> So one more thing popped up again on IRC:
> >>> https://etherpad.openstack.org/p/oslo.config_etcd_backend
> >>>
> >>> What do you think? interested in this work?
> >>>
> >>> Thanks,
> >>> Dims
> >>>
> >>> PS: Between this thread and the other one about Tooz/DLM and
> >>> os-lively, we can probably make a good case to add etcd as a base
> >>> always-on service.
> >>
> >> As I mentioned in the other thread, there was specific and strong
> >> anti-etcd sentiment in Tokyo which is why we decided to use an
> >> abstraction. I continue to be in favor of us having one known service in
> >> this space, but I do think that it's important to revisit that decision
> >> fully and in context of the concerns that were raised when we tried to
> >> pick one last time.
> >>
> >> It's worth noting that there is nothing particularly etcd-ish about
> >> storing config that couldn't also be done with zk and thus just be an
> >> additional api call or two added to Tooz with etcd and zk drivers for it.
> >>
> > 
> > Combine that thought with the "please have an ingest/export" thought,
> > and I think you have a pretty operator-friendly transition path. Would
> > be pretty great to have a release of OpenStack that just lets you add
> > an '[etcd]', or '[config-service]' section maybe, to your config files,
> > and then once you've fully migrated everything, lets you delete all the
> > other sections. Then the admin nodes still have the full configs and
> > one can just edit configs in git and roll them out by ingesting.
> > 
> > (Then the magical rainbow fairy ponies teach our services to watch their
> > config service for changes and restart themselves).
> 
> Make sure to add:
> 
> ... (after fully quiescing, when they are not processing any inflight
> work, when they are part of a pool so that they can be rolling restarted
> without impacting other services trying to connect to them, with a
> rollback to past config should the new config cause a crash).
> 
> There are a ton of really interesting things about a network registry,
> that makes many things easier. However, from an operational point of
> view I would be concerned about the idea of services restarting
> themselves in a non orchestrated manner. Or that a single key set in the
> registry triggers a complete reboot of the cluster. It's definitely less
> clear to understand the linkage of the action that took down your cloud
> and why when the operator isn't explicit about "and restart this service
> now".
> 

It's big and powerful and scary. That's for sure. But it's not _that_
different than an Ansible playbook in its ability to massively complicate
or uncomplicate your life with a tiny amount of code. :)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][security] Encryption in Zuul v3

2017-03-21 Thread David Moreau Simard
I don't have a horse in this race or a strong opinion on the topic, in
fact I'm admittedly not very knowledgeable when it comes to low-level
encryption things.

However, I did have a question, even if just to generate discussion.
Did we ever consider simply leaving secrets out of Zuul and offloading
that "burden" to something else ?

For example, end-users could use something like git-crypt [1] to encrypt
files in their git repos, and Zuul could have a means to decrypt them at
runtime.
There is also ansible-vault [2] that could perhaps be leveraged.

Just trying to make sure we're not re-inventing any wheels;
implementing crypto is usually not straightforward.

[1]: https://www.agwa.name/projects/git-crypt/
[2]: http://docs.ansible.com/ansible/playbooks_vault.html

David Moreau Simard
Senior Software Engineer | Openstack RDO

dmsimard = [irc, github, twitter]


On Tue, Mar 21, 2017 at 12:36 PM, James E. Blair  wrote:
> Hi,
>
> In working on the implementation of the encrypted secrets feature of
> Zuul v3, I have found some things that warrant further discussion.  It's
> important to be deliberate about this and I welcome any feedback.
>
> For reference, here is the relevant portion of the Zuul v3 spec:
>
> http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html#secrets
>
> And here is an implementation of that:
>
> https://review.openstack.org/#/q/status:open+topic:secrets+project:openstack-infra/zuul
>
> The short version is that we want to allow users to store private keys
> in the public git repos which Zuul uses to run jobs.  To do this, we
> propose to use asymmetric cryptography (RSA) to encrypt the data.  The
> specification suggests implementing PKCS#1-OAEP, a standard for
> implementing RSA encryption.
>
> Note that RSA is not able to encrypt a message longer than the key, and
> PKCS#1 includes some overhead which eats into that.  If we use 4096 bit
> RSA keys in Zuul, we will be able to encrypt 3760 bits (or 470 bytes) of
> information.
>
> Further, note that value only holds if we use SHA-1.  It has been
> suggested that we may want to consider using SHA-256 with PKCS#1.  If we
> do, we will be able to encrypt slightly less data.  However, I'm not
> sure that the Python cryptography library allows this (yet?).  Also, see
> this answer for why it may not be necessary to use SHA-256 (and also,
> why we may want to anyway):
>
> https://security.stackexchange.com/questions/112029/should-sha-1-be-used-with-rsa-oaep
>
> One thing to note is that the OpenSSL CLI utility uses SHA-1.  Right
> now, I have a utility script which uses that to encrypt secrets so that
> it's easy for anyone to encrypt a secret without installing many
> dependencies.  Switching to another hash function would probably mean we
> wouldn't be able to use that anymore.  But that's also true for other
> systems (see below).
>
> In short, PKCS#1 pros: Simple, nicely packaged asymmetric encryption,
> hides plaintext message length (up to its limit).  Cons: limited to 470
> bytes (or less).
>
> Generally, when faced with the prospect of encrypting longer messages,
> the advice is to adopt a hybrid encryption scheme (as opposed to, say,
> chaining RSA messages together, or increasing the RSA key size) which
> uses symmetric encryption with a single-use key for the message and
> asymmetric encryption to hide the key.  If we want Zuul to support the
> encryption of longer secrets, we may want to adopt the hybrid approach.
> A frequent hybrid approach is to encrypt the message with AES, and then
> encrypt the AES key with RSA.
>
> The hiera-eyaml work which originally inspired some of this is based on
> PKCS#7 with AES as the cipher -- ultimately a hybrid approach.  An
> interesting aspect of that implementation is that the use of PKCS#7 as a
> message passing format allows for multiple possible underlying ciphers
> since the message is wrapped in ASN.1 and is self-descriptive.  We might
> have simply chosen to go with that except that there don't seem to be
> many good options for implementing this in Python, largely because of
> the nightmare that is ASN.1 parsing.
>
> The system we have devised for including encrypted content in our YAML
> files involves a YAML tag which specifies the encryption scheme.  So we
> can evolve our use to add or remove systems as needed in the future.
>
> So to break this down into a series of actionable questions:
>
> 1) Do we want a system to support encrypting longer secrets?  Our PKCS#1
> system supports up to 470 bytes.  That should be sufficient for most
> passwords and API keys, but unlikely to be sufficient for some
> certificate related systems, etc.
>
> 2) If so, what system should we use?
>
>2.1a) GPG?  This has hybrid encryption and transport combined.
>Implementation is likely to be a bit awkward, probably involving
>popen to external processes.
>
>2.1b) RSA+AES?  This recommendation from the pycryptodome
>documentation illustrates a typical 

Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-21 Thread Clint Byrum
Excerpts from Doug Hellmann's message of 2017-03-15 15:35:13 -0400:
> Excerpts from Thomas Herve's message of 2017-03-15 09:41:16 +0100:
> > On Wed, Mar 15, 2017 at 12:05 AM, Joshua Harlow  
> > wrote:
> > 
> > > * How does reloading work (does it)?
> > 
> > No. There is nothing that we can do in oslo that will make services
> > magically reload configuration. It's also unclear to me if that's
> > something to do. In a containerized environment, wouldn't it be
> > simpler to deploy new services? Otherwise, supporting signal based
> > reload as we do today should be trivial.
> 
> Reloading works today with files, that's why the question is important
> to think through. There is a special flag to set on options that are
> "mutable" and then there are functions within oslo.config to reload.
> Those are usually triggered when a service gets a SIGHUP or something similar.
> 
> We need to decide what happens to a service's config when that API
> is used and the backend is etcd. Maybe nothing, because every time
> any config option is accessed the read goes all the way through to
> etcd? Maybe a warning is logged because we don't support reloads?
> Maybe an error is logged? Or maybe we flush the local cache and start
> reading from etcd on future accesses?
> 

etcd provides the ability to "watch" keys. So one would start a thread
that just watches the keys you want to reload on, and when they change
that thread will see a response and can reload appropriately.

https://coreos.com/etcd/docs/latest/dev-guide/api_reference_v3.html

see "service Watch"

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][security] Encryption in Zuul v3

2017-03-21 Thread Clint Byrum
Excerpts from corvus's message of 2017-03-21 09:36:41 -0700:
> Hi,
> 
> In working on the implementation of the encrypted secrets feature of
> Zuul v3, I have found some things that warrant further discussion.  It's
> important to be deliberate about this and I welcome any feedback.
> 

Thanks for looking into this deeply.

> For reference, here is the relevant portion of the Zuul v3 spec:
> 
> http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html#secrets
> 
> And here is an implementation of that:
> 
> https://review.openstack.org/#/q/status:open+topic:secrets+project:openstack-infra/zuul
> 
> The short version is that we want to allow users to store private keys
> in the public git repos which Zuul uses to run jobs.  To do this, we
> propose to use asymmetric cryptography (RSA) to encrypt the data.  The
> specification suggests implementing PKCS#1-OAEP, a standard for
> implementing RSA encryption.
> 
> Note that RSA is not able to encrypt a message longer than the key, and
> PKCS#1 includes some overhead which eats into that.  If we use 4096 bit
> RSA keys in Zuul, we will be able to encrypt 3760 bits (or 470 bytes) of
> information.
> 

Hm, I must have read the standard wrong, I thought it was 480 bytes. I
very much trust your reading of it and experimentation with it above my
skimming.

> Further, note that value only holds if we use SHA-1.  It has been
> suggested that we may want to consider using SHA-256 with PKCS#1.  If we
> do, we will be able to encrypt slightly less data.  However, I'm not
> sure that the Python cryptography library allows this (yet?).  Also, see
> this answer for why it may not be necessary to use SHA-256 (and also,
> why we may want to anyway):
> 
> https://security.stackexchange.com/questions/112029/should-sha-1-be-used-with-rsa-oaep
> 

I think our hand will get forced into SHA256, and if we can land SHA256
support in cryptography's PKCS#1, we should. But until that's done, SHA1
is cryptographically sound, and we don't have time to get things into
perfect shape; as long as we can accommodate SHA256 when it's ready, the
mathematicians seem to think SHA1 is fine in PKCS#1.

> One thing to note is that the OpenSSL CLI utility uses SHA-1.  Right
> now, I have a utility script which uses that to encrypt secrets so that
> it's easy for anyone to encrypt a secret without installing many
> dependencies.  Switching to another hash function would probably mean we
> wouldn't be able to use that anymore.  But that's also true for other
> systems (see below).
> 

We may just have to require python and a recent cryptography if people
are unable to use a SHA1 PKCS#1.
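
As a point of reference, the encrypt side with the cryptography library is
small either way; a minimal sketch (the key file name is made up, and swapping
hashes.SHA1() for hashes.SHA256() below is exactly the change being debated,
assuming the library accepts it for OAEP):

    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    with open('zuul-project-public.pem', 'rb') as f:
        pub = serialization.load_pem_public_key(f.read(), backend=default_backend())

    ciphertext = pub.encrypt(
        b'super-secret-api-key',
        padding.OAEP(
            mgf=padding.MGF1(algorithm=hashes.SHA1()),
            algorithm=hashes.SHA1(),  # the SHA-1 vs SHA-256 question
            label=None,
        ),
    )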

> In short, PKCS#1 pros: Simple, nicely packaged asymmetric encryption,
> hides plaintext message length (up to its limit).  Cons: limited to 470
> bytes (or less).
> 

I'd list the confusion around SHA1 as a con as well.

> Generally, when faced with the prospect of encrypting longer messages,
> the advice is to adopt a hybrid encryption scheme (as opposed to, say,
> chaining RSA messages together, or increasing the RSA key size) which
> uses symmetric encryption with a single-use key for the message and
> asymmetric encryption to hide the key.  If we want Zuul to support the
> encryption of longer secrets, we may want to adopt the hybrid approach.
> A frequent hybrid approach is to encrypt the message with AES, and then
> encrypt the AES key with RSA.
> 
> The hiera-eyaml work which originally inspired some of this is based on
> PKCS#7 with AES as the cipher -- ultimately a hybrid approach.  An
> interesting aspect of that implementation is that the use of PKCS#7 as a
> message passing format allows for multiple possible underlying ciphers
> since the message is wrapped in ASN.1 and is self-descriptive.  We might
> have simply chosen to go with that except that there don't seem to be
> many good options for implementing this in Python, largely because of
> the nightmare that is ASN.1 parsing.
> 
> The system we have devised for including encrypted content in our YAML
> files involves a YAML tag which specifies the encryption scheme.  So we
> can evolve our use to add or remove systems as needed in the future.
> 
> So to break this down into a series of actionable questions:
> 
> 1) Do we want a system to support encrypting longer secrets?  Our PKCS#1
> system supports up to 470 bytes.  That should be sufficient for most
> passwords and API keys, but unlikely to be sufficient for some
> certificate related systems, etc.
> 

Ultimately yes. The issue brought up earlier about SSH keys suggests
we'll likely need to be able to handle private keys that are 4096 bits
long.
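
If Zuul does grow a hybrid scheme, the usual construction is only a few lines;
a sketch (Fernet is used for the symmetric layer purely as an illustration, it
is not what the spec or hiera-eyaml use):

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding

    def hybrid_encrypt(rsa_public_key, plaintext):
        # Encrypt the payload with a one-time symmetric key...
        sym_key = Fernet.generate_key()
        payload = Fernet(sym_key).encrypt(plaintext)
        # ...then hide the small symmetric key with RSA-OAEP, which it easily fits.
        wrapped_key = rsa_public_key.encrypt(
            sym_key,
            padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA1()),
                         algorithm=hashes.SHA1(), label=None),
        )
        return wrapped_key, payload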

> 2) If so, what system should we use?
> 
>2.1a) GPG?  This has hybrid encryption and transport combined.
>Implementation is likely to be a bit awkward, probably involving
>popen to external processes.
> 

As awkward as popening the cli's can be, there are at least libraries
for it:

http://pythonhosted.org/gnupg/

Re: [openstack-dev] [infra][security] Encryption in Zuul v3

2017-03-21 Thread James E. Blair
Clint Byrum  writes:

> Excerpts from Matthieu Huin's message of 2017-03-21 18:43:49 +0100:
>> Hello James,
>> 
>> Thanks for opening the discussion on this topic. I'd like to mention that a
>> very common type of secrets that are used in Continuous Deployments
>> scenarios are SSH keys. Correct me if I am wrong, but PKCS#1 wouldn't
>> qualify if standard keys were to be stored.
>
> You could store a key, just not a 4096 bit key.
>
> PKCS#1 has a header/padding of something like 12 bytes, and then you
> need a hash in there, so for SHA1 that's 160 bits or 20 bytes, SHA256
> is 256 bits so 32 bytes. So with a 4096 bit (512 bytes) Zuul key, you
> can encrypt 480 bytes of plaintext, or 468 with sha256. That's enough
> for a 3072 bit (384 bytes) SSH key. An uncommon size, but RSA says
> they're good past 2030:
>
> https://www.emc.com/emc-plus/rsa-labs/historical/twirl-and-rsa-key-size.htm
>
> It's a little cramped, but hey, this is the age of tiny houses, maybe we
> should make do with what we have.

There is that option, the option of adding another encryption system
capable of storing larger keys, or this third option:

Because we wanted continuous deployment to be a first-class feature in
Zuul v3, we added this section of the spec which specifies that Zuul
should have a number of keys automatically available for use in a CD
system:

  
http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html#continuous-deployment

We haven't started implementing that yet, and it probably needs a little
bit of updating before we do, but I think the fundamental idea is still
sound and could be accomplished.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][security] Encryption in Zuul v3

2017-03-21 Thread Clint Byrum
Excerpts from Matthieu Huin's message of 2017-03-21 18:43:49 +0100:
> Hello James,
> 
> Thanks for opening the discussion on this topic. I'd like to mention that a
> very common type of secrets that are used in Continuous Deployments
> scenarios are SSH keys. Correct me if I am wrong, but PKCS#1 wouldn't
> qualify if standard keys were to be stored.

You could store a key, just not a 4096 bit key.

PKCS#1 has a header/padding of something like 12 bytes, and then you
need a hash in there, so for SHA1 that's 160 bits or 20 bytes, SHA256
is 256 bits so 32 bytes. So with a 4096 bit (512 bytes) Zuul key, you
can encrypt 480 bytes of plaintext, or 468 with sha256. That's enough
for a 3072 bit (384 bytes) SSH key. An uncommon size, but RSA says
they're good past 2030:

https://www.emc.com/emc-plus/rsa-labs/historical/twirl-and-rsa-key-size.htm

It's a little cramped, but hey, this is the age of tiny houses, maybe we
should make do with what we have.
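
For what it's worth, the exact OAEP numbers are easy to sanity-check; RFC 8017
puts the limit at k - 2*hLen - 2 bytes, which comes out a little lower than the
back-of-envelope above but still fits a 3072 bit key (a throwaway helper):

    def max_oaep_plaintext_bytes(rsa_bits, hash_bytes):
        # RFC 8017 (PKCS#1 v2.2): mLen <= k - 2*hLen - 2
        return rsa_bits // 8 - 2 * hash_bytes - 2

    print(max_oaep_plaintext_bytes(4096, 20))  # SHA-1   -> 470
    print(max_oaep_plaintext_bytes(4096, 32))  # SHA-256 -> 446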

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-sfc] About insertion modes and SFC Encapsulation

2017-03-21 Thread Duarte Cardoso, Igor
Below,

Best regards,
Igor.

From: Vikash Kumar [mailto:vikash.ku...@oneconvergence.com]
Sent: Tuesday, March 21, 2017 6:29 PM
To: OpenStack Development Mailing List (not for usage questions); Duarte Cardoso, Igor
Subject: Re: [openstack-dev] [networking-sfc] About insertion modes and SFC 
Encapsulation

Also, TAP devices can be deployed in both active (forwarding traffic back to 
networking devices) and passive mode. Our *current BP* scope is only for 
passive TAP. Apart from these two, there are other modes of deployment as well.

Others reading can add.

On Tue, Mar 21, 2017, 11:16 PM Vikash Kumar 
> wrote:
Hi Igor,


On Tue, Mar 21, 2017 at 10:02 PM, Duarte Cardoso, Igor 
> wrote:
Hi Vikash,

It’s best to start with RFC 7665.

NSH decouples traffic forwarding from both the internals of packets and service 
functions. A special entity called SFF will take on that job. L2/L3 then become 
something that the SFF might have to deal with it.

​which means it can co-exist with (L2/L3 insertion mode) and not necessarily 
mutually exclusive.
[IDC] It can’t because it shouldn’t be captured in the API. When you create a 
port-chain you have to specify the encap protocol that will render it as a 
whole, let’s say NSH. Then you go to the port-pairs and specify whether they 
support that protocol or whether they have to be proxied in an L2 or an L3 way 
(and these three possibilities are mutually exclusive).
If the port-pair supports NSH, you don’t specify anything about L2 or L3 
insertion modes. The logical SFFs (physically OVS e.g.) will be configured with 
the flows that will be able to forward traffic to the right service function – 
those flows can look like an L2 if the port-pair is on the current node and we 
just need to push NSH on Ethernet, or maybe they will look like an L4 insertion 
mode, if we have to cross nodes using VXLAN – but this will be 
backend/deployment/environment specific, and that’s what I mean by “L2/L3 then 
become something that the SFF might have to deal with it”. It’s not something 
to capture in the API.

​

However, networking-sfc API doesn’t expose or require details about individual 
SFC dataplane elements such as the SFF… it is up to the backend/driver to know 
those low-level details.

​Agree.
​

NSH doesn’t classify and forward traffic itself. It’s only a header that 
identifies what and where in the chain the packet belongs to/is (plus other 
goodies such as metadata). Classifier will classify, SFF will forward.

​   I was referring to NSH in totality and not excluding SFF 
(https://tools.ietf.org/html/draft-ietf-sfc-nsh-12). Look like I extended the 
scope of NSH in term of  SFC. ​



By the way, I left a question on the tap blueprint whiteboard, I’ll copy it 
here too:
“Is there a use case for "tap chains"? I.e. not only you send traffic to your 
tap function, but then your tap function also sends traffic to a next hop too, 
so a full chain starts after traffic gets tapped at the first chain (the first 
chain also continues).”
I suppose the answer is no since you mentioned “Note - TAP SFs do not forward 
packet”, but I’m happy to hear extended info about this – from anyone reading.

Best regards,
Igor.

From: Vikash Kumar [mailto:vikash.ku...@oneconvergence.com]
Sent: Tuesday, March 21, 2017 3:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [networking-sfc] About insertion modes and SFC 
Encapsulation

Hi,
   Moving definition of SF from port-pair to port-pair-group looks good.
   TAP is also an insertion mode like L2/L3 but since it simplifies to keep 
'tap-enabled' field also in port-pair-group, so it should be fine from 
implementation point of view (Note - TAP SFs do not forward packet). TAP 
enabled and L2/L3 insertion mode should be mutually exclusive.
   According to IETF draft NSH can classify & forward traffic (correct ?) but 
then the draft assumes uniformity of working of devices (which IMHO refers L3) 
which doesn't cover the entire use case. Can insertion mode (L2/L3) & traffic 
encapsulation(NSH) co-exist also ?



On Mon, Mar 20, 2017 at 11:35 PM, Cathy Zhang 
> wrote:
Hi Igor,

Moving the correlation from port-pair to port-pair-group makes sense. In the 
future I think we should add all new attributes for a SF to 
port-pair-group-param.

But I think L2/L3 is different from encap type NSH or MPLS. An L3 type SF can 
support either NSH or MPLS. I would suggest the following:

port-pair-group (port-pair-group-params):
insertion-mode:
- L2
- 

Re: [openstack-dev] [networking-sfc] About insertion modes and SFC Encapsulation

2017-03-21 Thread Vikash Kumar
Also, TAP devices can be deployed in both active (forwarding traffic back
to networking devices) and passive mode. Our *current BP* scope is only
for passive TAP. Apart from these two, there are other modes of deployment
as well.

Others reading can add.

On Tue, Mar 21, 2017, 11:16 PM Vikash Kumar 
wrote:

Hi Igor,



On Tue, Mar 21, 2017 at 10:02 PM, Duarte Cardoso, Igor <
igor.duarte.card...@intel.com> wrote:

Hi Vikash,



It’s best to start with RFC 7665.



NSH decouples traffic forwarding from both the internals of packets and
service functions. A special entity called SFF will take on that job. L2/L3
then become something that the SFF might have to deal with it.


​which means it can co-exist with (L2/L3 insertion mode) and not
necessarily mutually exclusive.
​


However, networking-sfc API doesn’t expose or require details about
individual SFC dataplane elements such as the SFF… it is up to the
backend/driver to know those low-level details.


​Agree.

​



NSH doesn’t classify and forward traffic itself. It’s only a header that
identifies what and where in the chain the packet belongs to/is (plus other
goodies such as metadata). Classifier will classify, SFF will forward.


​   I was referring to NSH in totality and not excluding SFF (
https://tools.ietf.org/html/draft-ietf-sfc-nsh-12). Look like I extended
the scope of NSH in term of  SFC. ​






By the way, I left a question on the tap blueprint whiteboard, I’ll copy it
here too:

“Is there a use case for "tap chains"? I.e. not only you send traffic to
your tap function, but then your tap function also sends traffic to a next
hop too, so a full chain starts after traffic gets tapped at the first
chain (the first chain also continues).”

I suppose the answer is no since you mentioned “Note - TAP SFs do not
forward packet”, but I’m happy to hear extended info about this – from
anyone reading.



Best regards,

Igor.



*From:* Vikash Kumar [mailto:vikash.ku...@oneconvergence.com]
*Sent:* Tuesday, March 21, 2017 3:32 PM
*To:* OpenStack Development Mailing List (not for usage questions) <
openstack-dev@lists.openstack.org>
*Subject:* Re: [openstack-dev] [networking-sfc] About insertion modes and
SFC Encapsulation



Hi,

   Moving definition of SF from port-pair to port-pair-group looks good.

   TAP is also an insertion mode like L2/L3 but since it simplifies to keep
'tap-enabled' field also in port-pair-group, so it should be fine from
implementation point of view (Note - TAP SFs do not forward packet). TAP
enabled and L2/L3 insertion mode should be mutually exclusive.

   According to IETF draft NSH can classify & forward traffic (correct ?)
but then the draft assumes uniformity of working of devices (which IMHO
refers L3) which doesn't cover the entire use case. Can insertion mode
(L2/L3) & traffic encapsulation(NSH) co-exist also ?






On Mon, Mar 20, 2017 at 11:35 PM, Cathy Zhang 
wrote:

Hi Igor,



Moving the correlation from port-pair to port-pair-group makes sense. In
the future I think we should add all new attributes for a SF to
port-pair-group-param.



But I think L2/L3 is different from encap type NSH or MPLS. An L3 type SF
can support either NSH or MPLS. I would suggest the following:



port-pair-group (port-pair-group-params):

insertion-mode:

- L2

- L3 (default)

   Correlation:

- MPLS

- NSH

tap-enabled:

- False (default)

- True



Thanks,

Cathy



*From:* Duarte Cardoso, Igor [mailto:igor.duarte.card...@intel.com]
*Sent:* Monday, March 20, 2017 8:02 AM
*To:* OpenStack Development Mailing List (not for usage questions)
*Subject:* [openstack-dev] [networking-sfc] About insertion modes and SFC
Encapsulation



Hi networking-sfc,



At the latest IRC meeting [1] it was agreed to split TAP from the possible
insertion modes (initial spec version [2]).



I took the ARs to propose coexistence of insertion modes, correlation and
(now) a new tap-enabled attribute, and send this email about possible
directions.



Here are my thoughts, let me know yours:



1.   My expectation for future PP and PPG if TAP+insertion modes go
ahead and nothing else changes (only relevant details outlined):



port-pair (service-function-params):

correlation:

- MPLS

- None (default)

port-pair-group (port-pair-group-params):

insertion-mode:

- L2

- L3 (default)

tap-enabled:

- False (default)

- True



2.   What I propose for future PP and PPG (only relevant details
outlined):



port-pair 

Re: [openstack-dev] [TripleO][release][deployment] Packaging problems due to branch/release ordering

2017-03-21 Thread Emilien Macchi
On Mon, Mar 13, 2017 at 12:29 PM, Alan Pevec  wrote:
> 2017-03-09 14:58 GMT+01:00 Jeremy Stanley :
>> In the past we addressed this by automatically merging the release
>> tag back into master, but we stopped doing that a cycle ago because
>> it complicated release note generation.
>
> Also this was including RC >= 2 and final tags so as soon as the first
> stable maintenance version was released, master was again lower
> version.

topic sounds stale.
Alan,  do we have an ETA on the RDO workaround?

Thanks,

> Cheers,
> Alan
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][tripleo] initial discussion for a new periodic pipeline

2017-03-21 Thread Sagi Shnaidman
Paul,
if we run 750 ovb jobs per day, then adding 12 more will be less than a 2%
increase. I don't believe it will be a serious issue.

Thanks

On Tue, Mar 21, 2017 at 7:34 PM, Paul Belanger 
wrote:

> On Tue, Mar 21, 2017 at 12:40:39PM -0400, Wesley Hayutin wrote:
> > On Tue, Mar 21, 2017 at 12:03 PM, Emilien Macchi 
> wrote:
> >
> > > On Mon, Mar 20, 2017 at 3:29 PM, Paul Belanger 
> > > wrote:
> > > > On Sun, Mar 19, 2017 at 06:54:27PM +0200, Sagi Shnaidman wrote:
> > > >> Hi, Paul
> > > >> I would say that real worthwhile try starts from "normal" priority,
> > > because
> > > >> we want to run promotion jobs more *often*, not more *rarely* which
> > > happens
> > > >> with low priority.
> > > >> In addition the initial idea in the first mail was running them each
> > > after
> > > >> other almost, not once a day like it happens now or with "low"
> priority.
> > > >>
> > > > As I've said, my main reluctance is is how the gate will react if we
> > > create a
> > > > new pipeline with the same priority as our check pipeline.  I would
> much
> > > rather
> > > > since on caution, default to 'low', see how things react for a day /
> > > week /
> > > > month, then see what it would like like a normal.  I want us to be
> > > caution about
> > > > adding a new pipeline, as it dynamically changes how our existing
> > > pipelines
> > > > function.
> > > >
> > > > Further more, this is actually a capacity issue for
> > > tripleo-test-cloud-rh1,
> > > > there currently too many jobs running for the amount of hardware. If
> > > these jobs
> > > > were running on our donated clouds, we could get away with a low
> priority
> > > > periodic pipeline.
> > >
> > > multinode jobs are running under donated clouds but as you know ovb
> not.
> > > We want to keep ovb jobs in our promotion pipeline because they bring
> > > high value to the tests (ironic, ipv6, ssl, probably more).
> > >
> > > Another alternative would be to reduce it to one ovb job (ironic with
> > > introspection + ipv6 + ssl at minimum) and use the 4 multinode jobs
> > > into the promotion pipeline -instead of the 3 ovb.
> > >
> >
> > I'm +1 on using one ovb jobs + 4 multinode jobs.
> >
> >
> > >
> > > current: 3 ovb jobs running every night
> > > proposal: 18 ovb jobs per day
> > >
> > > The addition will cost us 15 jobs into rh1 load. Would it be
> acceptable?
> > >
> > > > Now, allow me to propose another solution.
> > > >
> > > > RDO project has their own version of zuul, which has the ability to
> do
> > > periodic
> > > > pipelines.  Since tripleo-test-cloud-rh2 is still around, and has OVB
> > > ability, I
> > > > would suggest configuring this promoting pipeline within RDO, as to
> not
> > > affect
> > > > the capacity of tripleo-test-cloud-rh1.  This now means, you can
> > > continuously
> > > > enqueue jobs at a rate of 4 hours, priority shouldn't matter as you
> are
> > > the only
> > > > jobs running on tripleo-test-cloud-rh2, resulting in faster
> promotions.
> > >
> > > Using RDO would also be an option. I'm just not sure about our
> > > available resources, maybe other can reply on this one.
> > >
> >
> > The purpose of the periodic jobs are two fold.
> > 1. ensure the latest built packages work
> > 2. ensure the tripleo check gates continue to work with out error
> >
> > Running the promotion in review.rdoproject would not cover #2.  The
> > rdoproject jobs
> > would be configured in slightly different ways from upstream tripleo.
> > Running the promotion
> > in ci.centos has the same issue.
> >
> Right, there is some leg work to use the images produced by opentack-infra
> in
> RDO, but that is straightforward. It would be the same build process that
> a 3rd
> party CI system does.  It would be a matter of copying nodepool.yaml from
> openstack-infra/project-config, and (this is harder) using
> nodepool-builder to
> build the images.  Today RDO does snapshot images.
>
> > Using tripleo-testcloud-rh2 I think is fine.
> >
> >
> > >
> > > > This also make sense, as packaging is done in RDO, and you are
> > > triggering Centos
> > > > CI things as a result.
> > >
> > > Yes, it would make sense. Right now we have zero TripleO testing when
> > > doing changes in RDO packages (we only run packstack and puppet jobs
> > > which is not enough). Again, I think it's a problem of capacity here.
> > >
> >
> > We made a pass at getting multinode jobs running in RDO with tripleo.
> That
> > was
> > initially not very successful and we chose to instead focus on upstream.
> > We *do*
> > have it on our list to gate packages from RDO builds with tripleo.  In
> the
> > short term
> > that gate will use rdocloud, in the long term we'd also like to gate w/
> > multinode nodepool jobs in RDO.
> >
> >
> >
> > >
> > > Thoughts?
> > >
> > > >> Thanks
> > > >>
> > > >> On Wed, Mar 15, 2017 at 11:16 PM, Paul Belanger <
> pabelan...@redhat.com>
> > > >> wrote:
> > > >>
> > > >> > On Wed, Mar 15, 2017 at 

Re: [openstack-dev] [infra][tripleo] initial discussion for a new periodic pipeline

2017-03-21 Thread Paul Belanger
On Tue, Mar 21, 2017 at 12:40:39PM -0400, Wesley Hayutin wrote:
> On Tue, Mar 21, 2017 at 12:03 PM, Emilien Macchi  wrote:
> 
> > On Mon, Mar 20, 2017 at 3:29 PM, Paul Belanger 
> > wrote:
> > > On Sun, Mar 19, 2017 at 06:54:27PM +0200, Sagi Shnaidman wrote:
> > >> Hi, Paul
> > >> I would say that real worthwhile try starts from "normal" priority,
> > because
> > >> we want to run promotion jobs more *often*, not more *rarely* which
> > happens
> > >> with low priority.
> > >> In addition the initial idea in the first mail was running them each
> > after
> > >> other almost, not once a day like it happens now or with "low" priority.
> > >>
> > > As I've said, my main reluctance is is how the gate will react if we
> > create a
> > > new pipeline with the same priority as our check pipeline.  I would much
> > rather
> > > since on caution, default to 'low', see how things react for a day /
> > week /
> > > month, then see what it would like like a normal.  I want us to be
> > caution about
> > > adding a new pipeline, as it dynamically changes how our existing
> > pipelines
> > > function.
> > >
> > > Further more, this is actually a capacity issue for
> > tripleo-test-cloud-rh1,
> > > there currently too many jobs running for the amount of hardware. If
> > these jobs
> > > were running on our donated clouds, we could get away with a low priority
> > > periodic pipeline.
> >
> > multinode jobs are running under donated clouds but as you know ovb not.
> > We want to keep ovb jobs in our promotion pipeline because they bring
> > high value to the tests (ironic, ipv6, ssl, probably more).
> >
> > Another alternative would be to reduce it to one ovb job (ironic with
> > introspection + ipv6 + ssl at minimum) and use the 4 multinode jobs
> > into the promotion pipeline -instead of the 3 ovb.
> >
> 
> I'm +1 on using one ovb jobs + 4 multinode jobs.
> 
> 
> >
> > current: 3 ovb jobs running every night
> > proposal: 18 ovb jobs per day
> >
> > The addition will cost us 15 jobs into rh1 load. Would it be acceptable?
> >
> > > Now, allow me to propose another solution.
> > >
> > > RDO project has their own version of zuul, which has the ability to do
> > periodic
> > > pipelines.  Since tripleo-test-cloud-rh2 is still around, and has OVB
> > ability, I
> > > would suggest configuring this promoting pipeline within RDO, as to not
> > affect
> > > the capacity of tripleo-test-cloud-rh1.  This now means, you can
> > continuously
> > > enqueue jobs at a rate of 4 hours, priority shouldn't matter as you are
> > the only
> > > jobs running on tripleo-test-cloud-rh2, resulting in faster promotions.
> >
> > Using RDO would also be an option. I'm just not sure about our
> > available resources, maybe other can reply on this one.
> >
> 
> The purpose of the periodic jobs are two fold.
> 1. ensure the latest built packages work
> 2. ensure the tripleo check gates continue to work with out error
> 
> Running the promotion in review.rdoproject would not cover #2.  The
> rdoproject jobs
> would be configured in slightly different ways from upstream tripleo.
> Running the promotion
> in ci.centos has the same issue.
> 
Right, there is some leg work to use the images produced by opentack-infra in
RDO, but that is straightforward. It would be the same build process that a 3rd
party CI system does.  It would be a matter of copying nodepool.yaml from
openstack-infra/project-config, and (this is harder) using nodepool-builder to
build the images.  Today RDO does snapshot images.

> Using tripleo-testcloud-rh2 I think is fine.
> 
> 
> >
> > > This also make sense, as packaging is done in RDO, and you are
> > triggering Centos
> > > CI things as a result.
> >
> > Yes, it would make sense. Right now we have zero TripleO testing when
> > doing changes in RDO packages (we only run packstack and puppet jobs
> > which is not enough). Again, I think it's a problem of capacity here.
> >
> 
> We made a pass at getting multinode jobs running in RDO with tripleo.  That
> was
> initially not very successful and we chose to instead focus on upstream.
> We *do*
> have it on our list to gate packages from RDO builds with tripleo.  In the
> short term
> that gate will use rdocloud, in the long term we'd also like to gate w/
> multinode nodepool jobs in RDO.
> 
> 
> 
> >
> > Thoughts?
> >
> > >> Thanks
> > >>
> > >> On Wed, Mar 15, 2017 at 11:16 PM, Paul Belanger 
> > >> wrote:
> > >>
> > >> > On Wed, Mar 15, 2017 at 03:42:32PM -0500, Ben Nemec wrote:
> > >> > >
> > >> > >
> > >> > > On 03/13/2017 02:29 PM, Sagi Shnaidman wrote:
> > >> > > > Hi, all
> > >> > > >
> > >> > > > I submitted a change: https://review.openstack.org/#/c/443964/
> > >> > > > but seems like it reached a point which requires an additional
> > >> > discussion.
> > >> > > >
> > >> > > > I had a few proposals, it's increasing period to 12 hours instead
> > of 4
> > >> > > > for start, and to leave it in regular 

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-21 Thread Kumari, Madhuri
It seems COE is an accepted term now. I am in favor of having “openstack 
coe cluster” or “openstack container cluster”.
The command “infra” is too generic and doesn’t relate to what Magnum 
actually does.

Regards,
Madhuri

From: Spyros Trigazis [mailto:strig...@gmail.com]
Sent: Tuesday, March 21, 2017 7:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum][osc] What name to use for magnum commands 
in osc?

IMO, coe is a little confusing. It is a term used mainly by people somehow
related to the magnum community. When I describe to users how to use magnum,
I spend a few moments explaining what we call a coe.

I prefer one of the following:
* openstack magnum cluster create|delete|...
* openstack mcluster create|delete|...
* both the above

It is very intuitive for users because they will be using an openstack cloud
and will want to use the magnum service. So it only makes sense to type
openstack magnum cluster, or mcluster, which is shorter.


On 21 March 2017 at 02:24, Qiming Teng 
> wrote:
On Mon, Mar 20, 2017 at 03:35:18PM -0400, Jay Pipes wrote:
> On 03/20/2017 03:08 PM, Adrian Otto wrote:
> >Team,
> >
> >Stephen Watson has been working on an magnum feature to add magnum commands 
> >to the openstack client by implementing a plugin:
> >
> >https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc
> >
> >In review of this work, a question has resurfaced, as to what the client 
> >command name should be for magnum related commands. Naturally, we’d like to 
> >have the name “cluster” but that word is already in use by Senlin.
>
> Unfortunately, the Senlin API uses a whole bunch of generic terms as
> top-level REST resources, including "cluster", "event", "action",
> "profile", "policy", and "node". :( I've warned before that use of
> these generic terms in OpenStack APIs without a central group
> responsible for curating the API would lead to problems like this.
> This is why, IMHO, we need the API working group to be ultimately
> responsible for preventing this type of thing from happening.
> Otherwise, there ends up being a whole bunch of duplication and same
> terms being used for entirely different things.
>

Well, I believe the names and namespaces used by Senlin are very clean.
Please see the following outputs. All commands are contained in the
cluster namespace to avoid any conflicts with any other projects.

On the other hand, is there any document stating that Magnum is about
providing a clustering service? Why does Magnum care so much about the top
level noun if that is not its business?

From magnum's wiki page [1]:
"Magnum uses Heat to orchestrate an OS image which contains Docker
and Kubernetes and runs that image in either virtual machines or bare
metal in a cluster configuration."

Many services may offer clusters indirectly. Clusters are NOT magnum's focus,
but we can't refer to a collection of virtual machines or physical servers by
another name. "Bay" proved to be confusing to users. I don't think that magnum
should reserve the cluster noun, even if it were available.

[1] https://wiki.openstack.org/wiki/Magnum



$ openstack --help | grep cluster

  --os-clustering-api-version 

  cluster action list  List actions.
  cluster action show  Show detailed info about the specified action.
  cluster build info  Retrieve build information.
  cluster check  Check the cluster(s).
  cluster collect  Collect attributes across a cluster.
  cluster create  Create the cluster.
  cluster delete  Delete the cluster(s).
  cluster event list  List events.
  cluster event show  Describe the event.
  cluster expand  Scale out a cluster by the specified number of nodes.
  cluster list   List the user's clusters.
  cluster members add  Add specified nodes to cluster.
  cluster members del  Delete specified nodes from cluster.
  cluster members list  List nodes from cluster.
  cluster members replace  Replace the nodes in a cluster with
  specified nodes.
  cluster node check  Check the node(s).
  cluster node create  Create the node.
  cluster node delete  Delete the node(s).
  cluster node list  Show list of nodes.
  cluster node recover  Recover the node(s).
  cluster node show  Show detailed info about the specified node.
  cluster node update  Update the node.
  cluster policy attach  Attach policy to cluster.
  cluster policy binding list  List policies from cluster.
  cluster policy binding show  Show a specific policy that is bound to
  the specified cluster.
  cluster policy binding update  Update a policy's properties on a
  cluster.
  cluster policy create  Create a policy.
  cluster policy delete  Delete policy(s).
  cluster policy detach  Detach policy from cluster.
  cluster policy list  List policies that meet the criteria.
  cluster policy show  Show the policy details.
  cluster 

Re: [openstack-dev] [neutron][networking-l2gw] Unable to create release tag

2017-03-21 Thread Gary Kotton
Hi,
I am still unable to do this – this is after 
https://review.openstack.org/#/c/447279/ landed.
Any ideas?
Thanks
Gary

On 3/14/17, 3:04 PM, "Jeremy Stanley"  wrote:

On 2017-03-14 05:39:35 + (+), Gary Kotton wrote:
> I was asked to create a release tag for stable/ocata. This fails with:
[...]
>  ! [remote rejected] 10.0.0 -> 10.0.0 (prohibited by Gerrit)
[...]

The ACL for that repo doesn't seem to be configured to allow it
(yet):


http://git.openstack.org/cgit/openstack-infra/project-config/tree/gerrit/acls/openstack/networking-l2gw.config

The Infra Manual section documenting that permission is:

https://docs.openstack.org/infra/manual/creators.html#creation-of-tags

It also may be helpful to review the section on manually tagging
releases:

https://docs.openstack.org/infra/manual/drivers.html#tagging-a-release

Hope that helps!
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-sfc] About insertion modes and SFC Encapsulation

2017-03-21 Thread Henry Fourie
Igor,
  Inline.

-Louis

From: Duarte Cardoso, Igor [mailto:igor.duarte.card...@intel.com]
Sent: Monday, March 20, 2017 8:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [networking-sfc] About insertion modes and SFC 
Encapsulation

Hi networking-sfc,

At the latest IRC meeting [1] it was agreed to split TAP from the possible 
insertion modes (initial spec version [2]).

I took the ARs to propose coexistence of insertion modes, correlation and (now) 
a new tap-enabled attribute, and send this email about possible directions.

Here are my thoughts, let me know yours:


1.   My expectation for future PP and PPG if TAP+insertion modes go ahead 
and nothing else changes (only relevant details outlined):

port-pair (service-function-params):
correlation:
- MPLS
- None (default)
port-pair-group (port-pair-group-params):
insertion-mode:
- L2
- L3 (default)
tap-enabled:
- False (default)
- True


2.   What I propose for future PP and PPG (only relevant details outlined):

port-pair (service-function-params):

port-pair-group (port-pair-group-params):
mode:
- L2
- L3 (default)
- MPLS
- NSH
tap-enabled:
- False (default)
- True

With what's proposed in 2.:
- every combination will be possible with no clashes and no validation required.
- port-pair-groups will always group "homogeneous" sets of port-pairs, making 
load-balancing and next-hop processing simpler and consistent.
- the "forwarding" details of a Service Function are no longer dictated both by 
port-pair and port-pair-group, but rather only by port-pair-group.
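
To make proposal 2 a bit more concrete, a port-pair-group request body could
end up looking something like the following (purely illustrative; the parameter
names mirror the proposal above, not the current networking-sfc API):

    # Hypothetical request body under proposal 2
    port_pair_group = {
        "port_pair_group": {
            "name": "ppg-vfw",
            "port_pairs": ["<port-pair-uuid>"],
            "port_pair_group_parameters": {
                "mode": "NSH",         # one of: L2 | L3 (default) | MPLS | NSH
                "tap-enabled": False,  # False (default) | True
            },
        },
    }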

LF: agree, it appears that L2, L3, MPLS, NSH are mutually exclusive.
Agree on tap-enabled.

Are there any use cases for having next-hop SF candidates (individual 
port-pairs) supporting different SFC Encapsulation protocols?
I understand, however, that removing correlation from port-pairs might not be 
ideal given that it's a subtractive API change.

[1] 
http://eavesdrop.openstack.org/meetings/service_chaining/2017/service_chaining.2017-03-16-17.02.html
[2] https://review.openstack.org/#/c/442195/
[3] 
https://github.com/openstack/networking-sfc/blob/17c537b35d41a3e1fd80da790ae668e52cea6b88/doc/source/system_design%20and_workflow.rst#usage

Best regards,
Igor.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [i18n] [nova] understanding log domain change - https://review.openstack.org/#/c/439500

2017-03-21 Thread Sean Dague
On 03/21/2017 01:06 PM, Matt Riedemann wrote:
> On 3/21/2017 2:25 AM, Akihiro Motoki wrote:
>>
>> Yes, all logging markers including LOG.exception(_LE(...)) will be
>> clean up.
>> Only user visible messages through API messages should be marked as
>> translatable.
>>
> 
> Do we still use _LE() or just _() for marking API error messages for
> translation?
> 

All of _L* no longer have any message backings, so using them just means
it's not translated at all. _() is the thing you have to use.
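
A quick sketch of what that means in practice (nova-flavoured example; the
function and messages are made up):

    from nova import exception
    from nova.i18n import _
    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)

    def resize_volume(volume_id, new_size):
        if new_size <= 0:
            # User-facing API error message: still marked for translation with _()
            raise exception.InvalidInput(
                reason=_("Volume size must be a positive integer."))
        # Log messages: no _LI/_LW/_LE markers any more, just plain strings
        LOG.info("Resizing volume %s to %d GB", volume_id, new_size)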

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [i18n] [nova] understanding log domain change - https://review.openstack.org/#/c/439500

2017-03-21 Thread Matt Riedemann

On 3/21/2017 2:25 AM, Akihiro Motoki wrote:


Yes, all logging markers including LOG.exception(_LE(...)) will be clean up.
Only user visible messages through API messages should be marked as
translatable.



Do we still use _LE() or just _() for marking API error messages for 
translation?


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][tripleo] initial discussion for a new periodic pipeline

2017-03-21 Thread Ben Nemec



On 03/21/2017 11:40 AM, Wesley Hayutin wrote:



On Tue, Mar 21, 2017 at 12:03 PM, Emilien Macchi > wrote:

On Mon, Mar 20, 2017 at 3:29 PM, Paul Belanger
> wrote:
> On Sun, Mar 19, 2017 at 06:54:27PM +0200, Sagi Shnaidman wrote:
>> Hi, Paul
>> I would say that real worthwhile try starts from "normal" priority, 
because
>> we want to run promotion jobs more *often*, not more *rarely* which 
happens
>> with low priority.
>> In addition the initial idea in the first mail was running them each 
after
>> other almost, not once a day like it happens now or with "low" priority.
>>
> As I've said, my main reluctance is is how the gate will react if we 
create a
> new pipeline with the same priority as our check pipeline.  I would much 
rather
> since on caution, default to 'low', see how things react for a day / week 
/
> month, then see what it would like like a normal.  I want us to be 
caution about
> adding a new pipeline, as it dynamically changes how our existing 
pipelines
> function.
>
> Further more, this is actually a capacity issue for 
tripleo-test-cloud-rh1,
> there currently too many jobs running for the amount of hardware. If 
these jobs
> were running on our donated clouds, we could get away with a low priority
> periodic pipeline.

multinode jobs are running under donated clouds but as you know ovb not.
We want to keep ovb jobs in our promotion pipeline because they bring
high value to the tests (ironic, ipv6, ssl, probably more).

Another alternative would be to reduce it to one ovb job (ironic with
introspection + ipv6 + ssl at minimum) and use the 4 multinode jobs
into the promotion pipeline -instead of the 3 ovb.


I'm +1 on using one ovb jobs + 4 multinode jobs.


Then we lose coverage on the ipv4 net-iso case and the no net-iso case, 
both of which are very common, even if only with developers.  There's a 
reason we've always run 3 OVB jobs.


I believe we also had timeout issues in the past when trying to test all 
the things in a single periodic job.  I'm not sure if it's still an 
issue, but that's why logic like 
http://git.openstack.org/cgit/openstack-infra/tripleo-ci/tree/toci_gate_test-orig.sh#n135 
exists.


Ultimately, the problem here is not adding another handful of periodic 
jobs to rh1.  It's already running something like 750 per day, another 
~15 is not that big a deal.  But adding them as low priority jobs isn't 
going to work because of the other 750 jobs being run per day that will 
crowd them out.






current: 3 ovb jobs running every night
proposal: 18 ovb jobs per day

The addition will cost us 15 jobs into rh1 load. Would it be acceptable?

> Now, allow me to propose another solution.
>
> RDO project has their own version of zuul, which has the ability to do 
periodic
> pipelines.  Since tripleo-test-cloud-rh2 is still around, and has OVB 
ability, I
> would suggest configuring this promoting pipeline within RDO, as to not 
affect
> the capacity of tripleo-test-cloud-rh1.  This now means, you can 
continuously
> enqueue jobs at a rate of 4 hours, priority shouldn't matter as you are 
the only
> jobs running on tripleo-test-cloud-rh2, resulting in faster promotions.

Using RDO would also be an option. I'm just not sure about our
available resources, maybe other can reply on this one.


The purpose of the periodic jobs is twofold.
1. ensure the latest built packages work
2. ensure the tripleo check gates continue to work without error

Running the promotion in review.rdoproject would not cover #2.  The
rdoproject jobs would be configured in slightly different ways from upstream
tripleo.  Running the promotion in ci.centos has the same issue.

Using tripleo-testcloud-rh2 I think is fine.


No, it's not.  rh2 has been repurposed as a developer cloud and is 
oversubscribed as it is.  There is no more capacity in either rh1 or rh2 
at this point.


Well, strictly speaking rh1 has more capacity, but I believe we've 
reached the point of diminishing returns where adding more jobs slows 
down all the jobs, and since we keep adding slower and slower jobs that 
isn't going to work.  As it is OVB jobs are starting to timeout again 
(although there may be other factors besides load at work there - things 
are kind of a mess right now).






> This also make sense, as packaging is done in RDO, and you are triggering 
Centos
> CI things as a result.

Yes, it would make sense. Right now we have zero TripleO testing when
doing changes in RDO packages (we only run packstack and puppet jobs
which is not enough). Again, I think it's a problem of capacity here.


We made a pass at getting multinode jobs running in RDO with tripleo.
That was
initially not very successful and we chose to instead focus on

Re: [openstack-dev] [infra][tripleo] initial discussion for a new periodic pipeline

2017-03-21 Thread James Slagle
On Tue, Mar 21, 2017 at 12:40 PM, Wesley Hayutin  wrote:
> Using tripleo-testcloud-rh2 I think is fine.

I see a few folks recommending we use rh2, but AFAICT, it is already
at capacity:

[stack@undercloud ~]$ source overcloudrc
[stack@undercloud ~]$ nova hypervisor-stats
+--+-+
| Property | Value   |
+--+-+
| count| 13  |
| current_workload | 0   |
| disk_available_least | 399 |
| free_disk_gb | 306 |
| free_ram_mb  | 115992  |
| local_gb | 3534|
| local_gb_used| 6104|
| memory_mb| 963378  |
| memory_mb_used   | 1022464 |
| running_vms  | 156 |
| vcpus| 172 |
| vcpus_used   | 420 |
+--+-+

I was unable to boot a single instance a week or so ago. I think it's
all due to development environments.

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-21 Thread Emilien Macchi
On Wed, Mar 15, 2017 at 3:53 PM, Paul Belanger  wrote:
> On Wed, Mar 15, 2017 at 09:41:16AM +0100, Thomas Herve wrote:
>> On Wed, Mar 15, 2017 at 12:05 AM, Joshua Harlow  
>> wrote:
>>
>> > * How does reloading work (does it)?
>>
>> No. There is nothing that we can do in oslo that will make services
>> magically reload configuration. It's also unclear to me if that's
>> something to do. In a containerized environment, wouldn't it be
>> simpler to deploy new services? Otherwise, supporting signal based
>> reload as we do today should be trivial.
>>
>> > * What's the operational experience (editing a ini file is about the lowest
>> > bar we can possible get to, for better and/or worse).
>> >
>> > * Does this need to be a new oslo.config backend or is it better suited by
>> > something like the following (external programs loop)::
>> >
>> >etcd_client = make_etcd_client(args)
>> >while True:
>> >has_changed = etcd_client.get_new_config("/blahblah") # or use a
>> > watch
>> >if has_changed:
>> >   fetch_and_write_ini_file(etcd_client)
>> >   trigger_reload()
>> >time.sleep(args.wait)
>>
>> That's confd: https://github.com/kelseyhightower/confd/ . Bonus
>> points; it supports a ton of other backends. One solution is to
>> provide templates and documentation to use confd with OpenStack.
>>
> ++
>
> Lets not get into the business of writing cfgmgmt tools in openstack, but 
> reuse
> what exists today.
>
> oslo.config should just write to etcd, and other tools would be used, confd, 
> to
> trigger things.

I agree with you and that's what we're investigating in TripleO:
http://lists.openstack.org/pipermail/openstack-dev/2017-March/114246.html

Both threads are related, I would love to hear some feedback on it or
directly in the etherpad:
https://etherpad.openstack.org/p/tripleo-etcd-transition
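
As a purely illustrative sketch of the confd-style loop discussed above
(assuming the python-etcd3 client, a service that reloads its config on
SIGHUP, and made-up key names and file paths), the external glue could be as
small as:

    import configparser
    import os
    import signal
    import time

    import etcd3

    client = etcd3.client(host='127.0.0.1', port=2379)
    last_seen = None

    while True:
        # Read every option stored under an illustrative etcd prefix.
        options = {}
        for value, meta in client.get_prefix('/oslo.config/nova/DEFAULT/'):
            options[meta.key.decode().rsplit('/', 1)[-1]] = value.decode()
        if options != last_seen:
            # Re-render the ini file and ask the service to reload.
            cfg = configparser.ConfigParser()
            cfg['DEFAULT'] = options
            with open('/etc/nova/nova.conf', 'w') as conf_file:
                cfg.write(conf_file)
            os.kill(int(open('/var/run/nova-api.pid').read()), signal.SIGHUP)
            last_seen = options
        time.sleep(30)

confd does essentially the same thing from templates, so this only shows how
little glue is needed outside of oslo.config itself.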

Thanks,

>
>> --
>> Thomas
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][tripleo] initial discussion for a new periodic pipeline

2017-03-21 Thread Wesley Hayutin
On Tue, Mar 21, 2017 at 12:03 PM, Emilien Macchi  wrote:

> On Mon, Mar 20, 2017 at 3:29 PM, Paul Belanger 
> wrote:
> > On Sun, Mar 19, 2017 at 06:54:27PM +0200, Sagi Shnaidman wrote:
> >> Hi, Paul
> >> I would say that real worthwhile try starts from "normal" priority,
> because
> >> we want to run promotion jobs more *often*, not more *rarely* which
> happens
> >> with low priority.
> >> In addition the initial idea in the first mail was running them each
> after
> >> other almost, not once a day like it happens now or with "low" priority.
> >>
> > As I've said, my main reluctance is how the gate will react if we create
> > a new pipeline with the same priority as our check pipeline.  I would much
> > rather err on the side of caution and default to 'low', see how things
> > react for a day / week / month, then see what it would look like as
> > normal.  I want us to be cautious about adding a new pipeline, as it
> > dynamically changes how our existing pipelines function.
> >
> > Furthermore, this is actually a capacity issue for tripleo-test-cloud-rh1;
> > there are currently too many jobs running for the amount of hardware. If
> > these jobs were running on our donated clouds, we could get away with a
> > low priority periodic pipeline.
>
> multinode jobs are running under donated clouds but as you know ovb not.
> We want to keep ovb jobs in our promotion pipeline because they bring
> high value to the tests (ironic, ipv6, ssl, probably more).
>
> Another alternative would be to reduce it to one ovb job (ironic with
> introspection + ipv6 + ssl at minimum) and use the 4 multinode jobs
> into the promotion pipeline -instead of the 3 ovb.
>

I'm +1 on using one ovb jobs + 4 multinode jobs.


>
> current: 3 ovb jobs running every night
> proposal: 18 ovb jobs per day
>
> The addition will cost us 15 jobs into rh1 load. Would it be acceptable?
>
> > Now, allow me to propose another solution.
> >
> > RDO project has their own version of zuul, which has the ability to do
> periodic
> > pipelines.  Since tripleo-test-cloud-rh2 is still around, and has OVB
> ability, I
> > would suggest configuring this promoting pipeline within RDO, as to not
> affect
> > the capacity of tripleo-test-cloud-rh1.  This now means, you can
> continuously
> > enqueue jobs at a rate of 4 hours, priority shouldn't matter as you are
> the only
> > jobs running on tripleo-test-cloud-rh2, resulting in faster promotions.
>
> Using RDO would also be an option. I'm just not sure about our
> available resources, maybe other can reply on this one.
>

The purpose of the periodic jobs is twofold.
1. ensure the latest built packages work
2. ensure the tripleo check gates continue to work without error

Running the promotion in review.rdoproject would not cover #2.  The
rdoproject jobs would be configured in slightly different ways from upstream
tripleo.  Running the promotion in ci.centos has the same issue.

Using tripleo-testcloud-rh2 I think is fine.


>
> > This also make sense, as packaging is done in RDO, and you are
> triggering Centos
> > CI things as a result.
>
> Yes, it would make sense. Right now we have zero TripleO testing when
> doing changes in RDO packages (we only run packstack and puppet jobs
> which is not enough). Again, I think it's a problem of capacity here.
>

We made a pass at getting multinode jobs running in RDO with tripleo.  That
was initially not very successful and we chose to instead focus on upstream.
We *do* have it on our list to gate packages from RDO builds with tripleo.
In the short term that gate will use rdocloud, in the long term we'd also
like to gate w/ multinode nodepool jobs in RDO.



>
> Thoughts?
>
> >> Thanks
> >>
> >> On Wed, Mar 15, 2017 at 11:16 PM, Paul Belanger 
> >> wrote:
> >>
> >> > On Wed, Mar 15, 2017 at 03:42:32PM -0500, Ben Nemec wrote:
> >> > >
> >> > >
> >> > > On 03/13/2017 02:29 PM, Sagi Shnaidman wrote:
> >> > > > Hi, all
> >> > > >
> >> > > > I submitted a change: https://review.openstack.org/#/c/443964/
> >> > > > but seems like it reached a point which requires an additional
> >> > discussion.
> >> > > >
> >> > > > I had a few proposals, it's increasing period to 12 hours instead
> of 4
> >> > > > for start, and to leave it in regular periodic *low* precedence.
> >> > > > I think we can start from 12 hours period to see how it goes,
> although
> >> > I
> >> > > > don't think that 4 only jobs will increase load on OVB cloud, it's
> >> > > > completely negligible comparing to current OVB capacity and load.
> >> > > > But making its precedence as "low" IMHO completely removes any
> sense
> >> > > > from this pipeline to be, because we already run
> experimental-tripleo
> >> > > > pipeline which this priority and it could reach timeouts like 7-14
> >> > > > hours. So let's assume we ran periodic job, it's queued to run
> now 12 +
> >> > > > "low queue length" - about 20 and more hours. It's even worse than
> 

[openstack-dev] [all][ptl] Action required ! - Please submit Boston Forum sessions before April 2nd

2017-03-21 Thread Emilien Macchi
Sorry for duplicating the original e-mail from User Committee, but we
want to make sure all projects are aware about the deadline.

http://lists.openstack.org/pipermail/user-committee/2017-March/001856.html

PTLs (and everyone), please make sure topics are submitted before April 2nd.
Please let us know any question,

Thanks!
-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [infra][security] Encryption in Zuul v3

2017-03-21 Thread James E. Blair
Hi,

In working on the implementation of the encrypted secrets feature of
Zuul v3, I have found some things that warrant further discussion.  It's
important to be deliberate about this and I welcome any feedback.

For reference, here is the relevant portion of the Zuul v3 spec:

http://specs.openstack.org/openstack-infra/infra-specs/specs/zuulv3.html#secrets

And here is an implementation of that:

https://review.openstack.org/#/q/status:open+topic:secrets+project:openstack-infra/zuul

The short version is that we want to allow users to store private keys
in the public git repos which Zuul uses to run jobs.  To do this, we
propose to use asymmetric cryptography (RSA) to encrypt the data.  The
specification suggests implementing PKCS#1-OAEP, a standard for
implementing RSA encryption.

Note that RSA is not able to encrypt a message longer than the key, and
PKCS#1 includes some overhead which eats into that.  If we use 4096 bit
RSA keys in Zuul, we will be able to encrypt 3760 bits (or 470 bytes) of
information (a 4096-bit key is 512 bytes, and OAEP with SHA-1 reserves
2*20+2 = 42 bytes of that, leaving 470).

Further, note that value only holds if we use SHA-1.  It has been
suggested that we may want to consider using SHA-256 with PKCS#1.  If we
do, we will be able to encrypt slightly less data.  However, I'm not
sure that the Python cryptography library allows this (yet?).  Also, see
this answer for why it may not be necessary to use SHA-256 (and also,
why we may want to anyway):

https://security.stackexchange.com/questions/112029/should-sha-1-be-used-with-rsa-oaep

One thing to note is that the OpenSSL CLI utility uses SHA-1.  Right
now, I have a utility script which uses that to encrypt secrets so that
it's easy for anyone to encrypt a secret without installing many
dependencies.  Switching to another hash function would probably mean we
wouldn't be able to use that anymore.  But that's also true for other
systems (see below).

In short, PKCS#1 pros: Simple, nicely packaged asymmetric encryption,
hides plaintext message length (up to its limit).  Cons: limited to 470
bytes (or less).

Generally, when faced with the prospect of encrypting longer messages,
the advice is to adopt a hybrid encryption scheme (as opposed to, say,
chaining RSA messages together, or increasing the RSA key size) which
uses symmetric encryption with a single-use key for the message and
asymmetric encryption to hide the key.  If we want Zuul to support the
encryption of longer secrets, we may want to adopt the hybrid approach.
A frequent hybrid approach is to encrypt the message with AES, and then
encrypt the AES key with RSA.

The hiera-eyaml work which originally inspired some of this is based on
PKCS#7 with AES as the cipher -- ultimately a hybrid approach.  An
interesting aspect of that implementation is that the use of PKCS#7 as a
message passing format allows for multiple possible underlying ciphers
since the message is wrapped in ASN.1 and is self-descriptive.  We might
have simply chosen to go with that except that there don't seem to be
many good options for implementing this in Python, largely because of
the nightmare that is ASN.1 parsing.

The system we have devised for including encrypted content in our YAML
files involves a YAML tag which specifies the encryption scheme.  So we
can evolve our use to add or remove systems as needed in the future.

So to break this down into a series of actionable questions:

1) Do we want a system to support encrypting longer secrets?  Our PKCS#1
system supports up to 470 bytes.  That should be sufficient for most
passwords and API keys, but unlikely to be sufficient for some
certificate related systems, etc.

2) If so, what system should we use?

   2.1a) GPG?  This has hybrid encryption and transport combined.
   Implementation is likely to be a bit awkward, probably involving
   popen to external processes.

   2.1b) RSA+AES?  This recommendation from the pycryptodome
   documentation illustrates a typical hybrid approach:
   
https://pycryptodome.readthedocs.io/en/latest/src/examples.html#encrypt-data-with-rsa
   The transport protocol would likely just be the concatenation of
   the RSA and AES encrypted data, as it is in that example.  We can
   port that example to use the python-cryptography primatives, or we
   can switch to pycryptodome and use it exactly.

   2.1c) RSA+Fernet?  We can stay closer to the friendly recipes in
   python-cryptography.  While there is no complete hybrid recipe,
   there is a symmetric recipe for "Fernet" which is essentially a
   recipe for AES encryption and transport.  We could encode the
   Fernet key with RSA and concatenate the Fernet token.
   https://github.com/fernet/spec/blob/master/Spec.md

   2.1d) NaCL?  A "sealed box" in libsodium (which underlies PyNaCL)
   would do what we want with a completely different set of
   algorithms.
   https://github.com/pyca/pynacl/issues/189
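
   To make option 2.1c concrete, here is a minimal sketch using the
   python-cryptography primitives; the 4096-bit key size and the simple
   concatenated transport format are assumptions for illustration, not a
   settled design:

       from cryptography.fernet import Fernet
       from cryptography.hazmat.backends import default_backend
       from cryptography.hazmat.primitives import hashes, serialization
       from cryptography.hazmat.primitives.asymmetric import padding

       def encrypt_secret(public_key_pem, plaintext):
           # One-time symmetric key; the Fernet token carries the secret.
           key = Fernet.generate_key()
           token = Fernet(key).encrypt(plaintext)
           # Hide the symmetric key with the project's RSA public key (OAEP).
           pub = serialization.load_pem_public_key(public_key_pem,
                                                   backend=default_backend())
           wrapped = pub.encrypt(
               key,
               padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA1()),
                            algorithm=hashes.SHA1(), label=None))
           # Assumed transport format: RSA-wrapped key followed by the token.
           return wrapped + token

   Decryption would split off the first key-size/8 bytes (512 for a 4096 bit
   key), RSA-decrypt them, and hand the remainder to Fernet.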

3) Do we think it is important to hide the length of the secret?  AES
will expose the approximate length of the secret up to the block size
(16 bytes).  This 

Re: [openstack-dev] [networking-sfc] About insertion modes and SFC Encapsulation

2017-03-21 Thread Duarte Cardoso, Igor
Hi Vikash,

It’s best to start with RFC 7665.

NSH decouples traffic forwarding from both the internals of packets and service 
functions. A special entity called the SFF will take on that job. L2/L3 then become 
something that the SFF might have to deal with. However, the networking-sfc API 
doesn’t expose or require details about individual SFC dataplane elements such 
as the SFF… it is up to the backend/driver to know those low-level details.

NSH doesn’t classify and forward traffic itself. It’s only a header that 
identifies what and where in the chain the packet belongs to/is (plus other 
goodies such as metadata). Classifier will classify, SFF will forward.


By the way, I left a question on the tap blueprint whiteboard, I’ll copy it 
here too:
“Is there a use case for "tap chains"? I.e. not only you send traffic to your 
tap function, but then your tap function also sends traffic to a next hop too, 
so a full chain starts after traffic gets tapped at the first chain (the first 
chain also continues).”
I suppose the answer is no since you mentioned “Note - TAP SFs do not forward 
packet”, but I’m happy to hear extended info about this – from anyone reading.

Best regards,
Igor.

From: Vikash Kumar [mailto:vikash.ku...@oneconvergence.com]
Sent: Tuesday, March 21, 2017 3:32 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [networking-sfc] About insertion modes and SFC 
Encapsulation

Hi,
   Moving definition of SF from port-pair to port-pair-group looks good.
   TAP is also an insertion mode like L2/L3 but since it simplifies to keep 
'tap-enabled' field also in port-pair-group, so it should be fine from 
implementation point of view (Note - TAP SFs do not forward packet). TAP 
enabled and L2/L3 insertion mode should be mutually exclusive.
   According to IETF draft NSH can classify & forward traffic (correct ?) but 
then the draft assumes uniformity of working of devices (which IMHO refers L3) 
which doesn't cover the entire use case. Can insertion mode (L2/L3) & traffic 
encapsulation(NSH) co-exist also ?



On Mon, Mar 20, 2017 at 11:35 PM, Cathy Zhang 
> wrote:
Hi Igor,

Moving the correlation from port-pair to port-pair-group makes sense. In the 
future I think we should add all new attributes for a SF to 
port-pair-group-param.

But I think L2/L3 is different from encap type NSH or MPLS. An L3 type SF can 
support either NSH or MPLS. I would suggest the following:

port-pair-group (port-pair-group-params):
insertion-mode:
- L2
- L3 (default)
   Correlation:
- MPLS
- NSH
tap-enabled:
- False (default)
- True

Thanks,
Cathy

From: Duarte Cardoso, Igor 
[mailto:igor.duarte.card...@intel.com]
Sent: Monday, March 20, 2017 8:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [networking-sfc] About insertion modes and SFC 
Encapsulation

Hi networking-sfc,

At the latest IRC meeting [1] it was agreed to split TAP from the possible 
insertion modes (initial spec version [2]).

I took the ARs to propose coexistence of insertion modes, correlation and (now) 
a new tap-enabled attribute, and send this email about possible directions.

Here are my thoughts, let me know yours:


1.   My expectation for future PP and PPG if TAP+insertion modes go ahead 
and nothing else changes (only relevant details outlined):

port-pair (service-function-params):
correlation:
- MPLS
- None (default)
port-pair-group (port-pair-group-params):
insertion-mode:
- L2
- L3 (default)
tap-enabled:
- False (default)
- True


2.   What I propose for future PP and PPG (only relevant details outlined):

port-pair (service-function-params):

port-pair-group (port-pair-group-params):
mode:
- L2
- L3 (default)
- MPLS
- NSH
tap-enabled:
- False (default)
- True

With what’s proposed in 2.:
- every combination will be possible with no clashes and no validation required.
- port-pair-groups will always group “homogeneous” sets of port-pairs, making 
load-balacing and next-hop processing simpler and consistent.
- the “forwarding” details of a Service Function are no longer dictated both by 
port-pair and 

Re: [openstack-dev] [TripleO] Propose Attila Darazs and Gabriele Cerami for tripleo-ci core

2017-03-21 Thread Julie Pichon
On 15 March 2017 at 15:44, John Trowbridge  wrote:
> Both Attila and Gabriele have been rockstars with the work to transition
> tripleo-ci to run via quickstart, and both have become extremely
> knowledgeable about how tripleo-ci works during that process. They are
> both very capable of providing thorough and thoughtful reviews of
> tripleo-ci patches.
>
> On top of this Attila has greatly increased the communication from the
> tripleo-ci squad as the liason, with weekly summary emails of our
> meetings to this list.

+1

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra][tripleo] initial discussion for a new periodic pipeline

2017-03-21 Thread Emilien Macchi
On Mon, Mar 20, 2017 at 3:29 PM, Paul Belanger  wrote:
> On Sun, Mar 19, 2017 at 06:54:27PM +0200, Sagi Shnaidman wrote:
>> Hi, Paul
>> I would say that real worthwhile try starts from "normal" priority, because
>> we want to run promotion jobs more *often*, not more *rarely* which happens
>> with low priority.
>> In addition the initial idea in the first mail was running them each after
>> other almost, not once a day like it happens now or with "low" priority.
>>
> As I've said, my main reluctance is how the gate will react if we create a
> new pipeline with the same priority as our check pipeline.  I would much
> rather err on the side of caution and default to 'low', see how things react
> for a day / week / month, then see what it would look like as normal.  I
> want us to be cautious about adding a new pipeline, as it dynamically
> changes how our existing pipelines function.
>
> Furthermore, this is actually a capacity issue for tripleo-test-cloud-rh1;
> there are currently too many jobs running for the amount of hardware. If
> these jobs were running on our donated clouds, we could get away with a low
> priority periodic pipeline.

multinode jobs are running under donated clouds but as you know ovb not.
We want to keep ovb jobs in our promotion pipeline because they bring
high value to the tests (ironic, ipv6, ssl, probably more).

Another alternative would be to reduce it to one ovb job (ironic with
introspection + ipv6 + ssl at minimum) and use the 4 multinode jobs
into the promotion pipeline -instead of the 3 ovb.

current: 3 ovb jobs running every night
proposal: 18 ovb jobs per day

The addition will cost us 15 jobs into rh1 load. Would it be acceptable?

> Now, allow me to propose another solution.
>
> RDO project has their own version of zuul, which has the ability to do 
> periodic
> pipelines.  Since tripleo-test-cloud-rh2 is still around, and has OVB 
> ability, I
> would suggest configuring this promoting pipeline within RDO, as to not affect
> the capacity of tripleo-test-cloud-rh1.  This now means, you can continuously
> enqueue jobs at a rate of 4 hours, priority shouldn't matter as you are the 
> only
> jobs running on tripleo-test-cloud-rh2, resulting in faster promotions.

Using RDO would also be an option. I'm just not sure about our
available resources, maybe other can reply on this one.

> This also make sense, as packaging is done in RDO, and you are triggering 
> Centos
> CI things as a result.

Yes, it would make sense. Right now we have zero TripleO testing when
doing changes in RDO packages (we only run packstack and puppet jobs
which is not enough). Again, I think it's a problem of capacity here.

Thoughts?

>> Thanks
>>
>> On Wed, Mar 15, 2017 at 11:16 PM, Paul Belanger 
>> wrote:
>>
>> > On Wed, Mar 15, 2017 at 03:42:32PM -0500, Ben Nemec wrote:
>> > >
>> > >
>> > > On 03/13/2017 02:29 PM, Sagi Shnaidman wrote:
>> > > > Hi, all
>> > > >
>> > > > I submitted a change: https://review.openstack.org/#/c/443964/
>> > > > but seems like it reached a point which requires an additional
>> > discussion.
>> > > >
>> > > > I had a few proposals, it's increasing period to 12 hours instead of 4
>> > > > for start, and to leave it in regular periodic *low* precedence.
>> > > > I think we can start from 12 hours period to see how it goes, although
>> > I
>> > > > don't think that 4 only jobs will increase load on OVB cloud, it's
>> > > > completely negligible comparing to current OVB capacity and load.
>> > > > But making its precedence as "low" IMHO completely removes any sense
>> > > > from this pipeline to be, because we already run experimental-tripleo
>> > > > pipeline which this priority and it could reach timeouts like 7-14
>> > > > hours. So let's assume we ran periodic job, it's queued to run now 12 +
>> > > > "low queue length" - about 20 and more hours. It's even worse than
>> > usual
>> > > > periodic job and definitely makes this change useless.
>> > > > I'd like to notice as well that those periodic jobs unlike "usual"
>> > > > periodic are used for repository promotion and their value are equal or
>> > > > higher than check jobs, so it needs to run with "normal" or even "high"
>> > > > precedence.
>> > >
>> > > Yeah, it makes no sense from an OVB perspective to add these as low
>> > priority
>> > > jobs.  Once in a while we've managed to chew through the entire
>> > experimental
>> > > queue during the day, but with the containers job added it's very
>> > unlikely
>> > > that's going to happen anymore.  Right now we have a 4.5 hour wait time
>> > just
>> > > for the check queue, then there's two hours of experimental jobs queued
>> > up
>> > > behind that.  All of which means if we started a low priority periodic
>> > job
>> > > right now it probably wouldn't run until about midnight my time, which I
>> > > think is when the regular periodic jobs run now.
>> > >
>> > Lets just give it a try? A 12 hour periodic job with low 

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-21 Thread Spyros Trigazis
IMO, coe is a little confusing. It is a term used mostly by people involved
in the magnum community. When I describe to users how to use magnum,
I spend a few moments explaining what we call a coe.

I prefer one of the following:
* openstack magnum cluster create|delete|...
* openstack mcluster create|delete|...
* both of the above

It is very intuitive for users because they will be using an openstack
cloud and they will want to use the magnum service. So, it only makes sense
to type openstack magnum cluster, or mcluster, which is shorter.


On 21 March 2017 at 02:24, Qiming Teng  wrote:

> On Mon, Mar 20, 2017 at 03:35:18PM -0400, Jay Pipes wrote:
> > On 03/20/2017 03:08 PM, Adrian Otto wrote:
> > >Team,
> > >
> > >Stephen Watson has been working on an magnum feature to add magnum
> commands to the openstack client by implementing a plugin:
> > >
> > >https://review.openstack.org/#/q/status:open+project:
> openstack/python-magnumclient+osc
> > >
> > >In review of this work, a question has resurfaced, as to what the
> client command name should be for magnum related commands. Naturally, we’d
> like to have the name “cluster” but that word is already in use by Senlin.
> >
> > Unfortunately, the Senlin API uses a whole bunch of generic terms as
> > top-level REST resources, including "cluster", "event", "action",
> > "profile", "policy", and "node". :( I've warned before that use of
> > these generic terms in OpenStack APIs without a central group
> > responsible for curating the API would lead to problems like this.
> > This is why, IMHO, we need the API working group to be ultimately
> > responsible for preventing this type of thing from happening.
> > Otherwise, there ends up being a whole bunch of duplication and same
> > terms being used for entirely different things.
> >
>
> Well, I believe the name and namespaces used by Senlin is very clean.
> Please see the following outputs. All commands are contained in the
> cluster namespace to avoid any conflicts with any other projects.
>
> On the other hand, is there any document stating that Magnum is about
> providing clustering service? Why Magnum cares so much about the top
> level noun if it is not its business?
>

From magnum's wiki page [1]:
"Magnum uses Heat to orchestrate an OS image which contains Docker
and Kubernetes and runs that image in either virtual machines or bare
metal in a *cluster* configuration."

Many services may offer clusters indirectly. Clusters are NOT magnum's focus,
but we can't refer to a collection of virtual machines or physical servers
by another name. Bay proved to be confusing to users. I don't think that magnum
should reserve the cluster noun, even if it were available.

[1] https://wiki.openstack.org/wiki/Magnum


>
>
> $ openstack --help | grep cluster
>
>   --os-clustering-api-version 
>
>   cluster action list  List actions.
>   cluster action show  Show detailed info about the specified action.
>   cluster build info  Retrieve build information.
>   cluster check  Check the cluster(s).
>   cluster collect  Collect attributes across a cluster.
>   cluster create  Create the cluster.
>   cluster delete  Delete the cluster(s).
>   cluster event list  List events.
>   cluster event show  Describe the event.
>   cluster expand  Scale out a cluster by the specified number of nodes.
>   cluster list   List the user's clusters.
>   cluster members add  Add specified nodes to cluster.
>   cluster members del  Delete specified nodes from cluster.
>   cluster members list  List nodes from cluster.
>   cluster members replace  Replace the nodes in a cluster with
>   specified nodes.
>   cluster node check  Check the node(s).
>   cluster node create  Create the node.
>   cluster node delete  Delete the node(s).
>   cluster node list  Show list of nodes.
>   cluster node recover  Recover the node(s).
>   cluster node show  Show detailed info about the specified node.
>   cluster node update  Update the node.
>   cluster policy attach  Attach policy to cluster.
>   cluster policy binding list  List policies from cluster.
>   cluster policy binding show  Show a specific policy that is bound to
>   the specified cluster.
>   cluster policy binding update  Update a policy's properties on a
>   cluster.
>   cluster policy create  Create a policy.
>   cluster policy delete  Delete policy(s).
>   cluster policy detach  Detach policy from cluster.
>   cluster policy list  List policies that meet the criteria.
>   cluster policy show  Show the policy details.
>   cluster policy type list  List the available policy types.
>   cluster policy type show  Get the details about a policy type.
>   cluster policy update  Update a policy.
>   cluster policy validate  Validate a policy.
>   cluster profile create  Create a profile.
>   cluster profile delete  Delete profile(s).
>   cluster profile list  List profiles that meet the criteria.
>   cluster profile show  Show profile details.
>   

Re: [openstack-dev] [tripleo][diskimage-builder] Status of diskimage-builder

2017-03-21 Thread Emilien Macchi
Please vote again: https://review.openstack.org/#/c/445617/

We keep dib-utils for now until we have a plan.

On Tue, Mar 14, 2017 at 2:49 PM, Emilien Macchi  wrote:
> Here's the proposal that will move DIB to Infra umbrella:
> https://review.openstack.org/445617
>
> Let's move forward and vote on this proposal.
>
> Thanks all,
>
> On Mon, Mar 6, 2017 at 3:23 PM, Gregory Haynes  wrote:
>> On Sat, Mar 4, 2017, at 12:13 PM, Andre Florath wrote:
>>> Hello!
>>>
>>> Thanks Greg for sharing your thoughts.  The idea of splitting off DIB
>>> from OpenStack is new for me, therefore I collect some pros and
>>> cons:
>>>
>>> Stay in OpenStack:
>>>
>>> + Use available OpenStack infrastructure and methods
>>> + OpenStack should include a possibility to create images for ironic,
>>>   VMs and docker. (Yes - there are others, but DIB is the best! :-) )
>>> + Customers use DIB because it's part of OpenStack and for OpenStack
>>>   (see e.g. [1])
>>> + Popularity of OpenStack attracts more developers than a separate
>>>   project (IMHO running DIB as a separate project even lowers the low
>>>   number of contributors).
>>> + 'Short Distances' if there are special needs for OpenStack.
>>> + Some OpenStack projects use DIB - and also use internal 'knowledge'
>>>   (like build-, run- or test-dependencies) - it would be not that easy
>>>   to completely separate this in short term.
>>>
>>
>> Ah, I may have not been super clear - I definitely agree that we
>> wouldn't want to move off of being hosted by OpenStack infra (for all
>> the reasons you list). There are actually two classes of project hosted
>> by OpenStack infra - OpenStack projects and OpenStack related projects
>> which have differing requirements
>> (https://docs.openstack.org/infra/manual/creators.html#decide-status-of-your-project).
>> What I've noticed is we tend to align more with the openstack-related
>> projects in terms of what we ask for / how we develop (e.g. not
>> following the normal release cycle, not really being a 'deliverable' of
>> an OpenStack release). AIUI though the distinction of whether you're an
>> official project team or a related project just distinguishes what
>> restrictions are placed on you, not whether you can be hosted by
>> OpenStack infra.
>>
>>> As a separate project:
>>>
>>> - Possibly less organizational overhead.
>>> - Independent releases possible.
>>> - Develop / include / concentrate also for / on other non-OpenStack
>>>   based virtualization platforms (EC2, Google Cloud, ...)
>>> - Extend the use cases to something like 'DIB can install a wide range
>>>   of Linux distributions on everything you want'.
>>>   Example: DIB Element to install Raspberry Pi [2] (which is currently
>>>   not the core use-case but shows how flexible DIB is).
>>>
>>> In my opinion the '+' arguments are more important, therefore DIB
>>> should stay within OpenStack as a sub-project.  I don't really care
>>> about the master: TripleO, Infra, glance, ...
>>>
>>>
>>
>> Out of this list I think infra is really the only one which makes sense.
>> TripleO is the current setup and makes only slightly more sense than
>> Glance at this point: we'd be an odd appendage in both situations.
>> Having been in this situation for some time I tend to agree that it
>> isn't a big issue it tends to just be a mild annoyance every now and
>> then. IMO it'd be nice to resolve this issue once and for all, though
>> :).
>>
>>> I want to touch an important point: Greg you are right that there are
>>> only a very few developers contributing for DIB.  One reason
>>> is IMHO, that it is not very attractive to work on DIB; some examples:
>>>
>>> o The documentation how to set up a DIB development environment [3]
>>>   is out of date.
>>> o Testing DIB is nightmare: a developer has no chance to test
>>>   as it is done in the CI (which is currently setup by other OpenStack
>>>   projects?). Round-trip times of ~2h - and then it often fails,
>>>   because of some mirror problem...
>>> o It takes sometimes very long until a patch is reviewed and merged
>>>   (e.g. still open since 1y1d [6]; basic refactoring [7] was filed
>>>   about 9 month ago and still not in the master).
>>> o There are currently about 100 elements in DIB. Some of them are
>>>   highly hardware dependent; some are known not to work; a lot of them
>>>   need refactoring.
>>
>> I cant agree more on all of this. TBH I think working on docs is
>> probably the most effective thing someone could do with DIB ATM because,
>> as you say, that's how you enable people to contribute. The theory is
>> that this is also what helps with the review latency - ask newer
>> contributors to help with initial reviews. That being said, I'd be
>> surprised if the large contributor count grows much unless some of the
>> use cases change simply because its very much a plumbing tool for many
>> of our consumers, not something people are looking to drive feature
>> development in to.
>>
>>>

Re: [openstack-dev] [networking-sfc] About insertion modes and SFC Encapsulation

2017-03-21 Thread Vikash Kumar
Hi,

   Moving definition of SF from port-pair to port-pair-group looks good.

   TAP is also an insertion mode like L2/L3 but since it simplifies to keep
'tap-enabled' field also in port-pair-group, so it should be fine from
implementation point of view (Note - TAP SFs do not forward packet). TAP
enabled and L2/L3 insertion mode should be mutually exclusive.

   According to IETF draft NSH can classify & forward traffic (correct ?)
but then the draft assumes uniformity of working of devices (which IMHO
refers L3) which doesn't cover the entire use case. Can insertion mode
(L2/L3) & traffic encapsulation(NSH) co-exist also ?



On Mon, Mar 20, 2017 at 11:35 PM, Cathy Zhang 
wrote:

> Hi Igor,
>
>
>
> Moving the correlation from port-pair to port-pair-group makes sense. In
> the future I think we should add all new attributes for a SF to
> port-pair-group-param.
>
>
>
> But I think L2/L3 is different from encap type NSH or MPLS. An L3 type SF
> can support either NSH or MPLS. I would suggest the following:
>
>
>
> port-pair-group (port-pair-group-params):
>
> insertion-mode:
>
> - L2
>
> - L3 (default)
>
>Correlation:
>
> - MPLS
>
> - NSH
>
> tap-enabled:
>
> - False (default)
>
> - True
>
>
>
> Thanks,
>
> Cathy
>
>
>
> *From:* Duarte Cardoso, Igor [mailto:igor.duarte.card...@intel.com]
> *Sent:* Monday, March 20, 2017 8:02 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [networking-sfc] About insertion modes and SFC
> Encapsulation
>
>
>
> Hi networking-sfc,
>
>
>
> At the latest IRC meeting [1] it was agreed to split TAP from the possible
> insertion modes (initial spec version [2]).
>
>
>
> I took the ARs to propose coexistence of insertion modes, correlation and
> (now) a new tap-enabled attribute, and send this email about possible
> directions.
>
>
>
> Here are my thoughts, let me know yours:
>
>
>
> 1.   My expectation for future PP and PPG if TAP+insertion modes go
> ahead and nothing else changes (only relevant details outlined):
>
>
>
> port-pair (service-function-params):
>
> correlation:
>
> - MPLS
>
> - None (default)
>
> port-pair-group (port-pair-group-params):
>
> insertion-mode:
>
> - L2
>
> - L3 (default)
>
> tap-enabled:
>
> - False (default)
>
> - True
>
>
>
> 2.   What I propose for future PP and PPG (only relevant details
> outlined):
>
>
>
> port-pair (service-function-params):
>
> 
>
> port-pair-group (port-pair-group-params):
>
> mode:
>
> - L2
>
> - L3 (default)
>
> - MPLS
>
> - NSH
>
> tap-enabled:
>
> - False (default)
>
> - True
>
>
>
> With what’s proposed in 2.:
>
> - every combination will be possible with no clashes and no validation
> required.
>
> - port-pair-groups will always group “homogeneous” sets of port-pairs,
> making load-balacing and next-hop processing simpler and consistent.
>
> - the “forwarding” details of a Service Function are no longer dictated
> both by port-pair and port-pair-group, but rather only by port-pair-group.
>
>
>
> Are there any use cases for having next-hop SF candidates (individual
> port-pairs) supporting different SFC Encapsulation protocols?
>
> I understand, however, that removing correlation from port-pairs might not
> be ideal given that it’s a subtractive API change.
>
>
>
> [1] http://eavesdrop.openstack.org/meetings/service_chaining/201
> 7/service_chaining.2017-03-16-17.02.html
>
> [2] https://review.openstack.org/#/c/442195/
>
> [3] https://github.com/openstack/networking-sfc/blob/17c537b35d4
> 1a3e1fd80da790ae668e52cea6b88/doc/source/system_design%
> 20and_workflow.rst#usage
>
>
>
> Best regards,
>
> Igor.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Regards,
Vikash
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ironic] Kubernetes-based long running processes

2017-03-21 Thread Yuriy Zveryanskyy

Hi.

I think that using Mistral with a k8s extension for the ironic use cases is
not a very good idea


because:

- Yes, Mistral can be used for executions of long-running business 
processes [1].


But business processes in Mistral are multi-step sets of abstract "jobs" 
(tasks) [1],


[2]. For ironic consoles use case long running task is set of OS 
processes which can


be started somewhere (host, container) and have all needed networking 
access.


- Due to difference mentioned above create long running task in Mistral is

over-complex for ironic use case. We should use additional "workflow" 
scripting


layer [3] and Mistral DSL [4]. K8s can be integrated with Mistral via 
custom actions


[5] or functions [6], but there is no "pure" API plugins, these 
extensions we should


use as part of scripting.

- Mistral can offload business processes to 3rd party services [1], but 
main problem


is asynchronous execution: ".. the concept of asynchronous action 
assumes that a


result won’t be known at a time when executor is running it" (therefore 
is not easy


to get task statuses for example) [7]. This increases complexity of 
scripting layer


that mentioned above.


Because of these reasons Mistral-k8s solution for ironic use cases 
(typical is consoles)


will be unclear and complex (without any technical arguments) in 
comparison with


direct usage of the k8s API via a client library [8].
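
For illustration only, driving such a long-running console process directly
with the kubernetes Python client could be as simple as the sketch below
(the namespace, image and command are made-up placeholders):

    from kubernetes import client, config

    def start_console_pod(node_uuid):
        # Assumes a reachable cluster and a kubeconfig on the conductor host.
        config.load_kube_config()
        core = client.CoreV1Api()
        pod = client.V1Pod(
            metadata=client.V1ObjectMeta(name='console-%s' % node_uuid),
            spec=client.V1PodSpec(
                restart_policy='Always',
                containers=[client.V1Container(
                    name='console',
                    image='example/ironic-console:latest',
                    command=['shellinaboxd', '-t', '-p', '8866'])]))
        return core.create_namespaced_pod(namespace='ironic', body=pod)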


[1] https://wiki.openstack.org/wiki/Mistral

[2] https://wiki.openstack.org/wiki/Mistral/Long_Running_Business_Process

[3] https://docs.openstack.org/developer/mistral/terminology/workflows.html

[4] https://docs.openstack.org/developer/mistral/dsl/dsl_v2.html

[5] 
https://docs.openstack.org/developer/mistral/developer/creating_custom_action.html


[6] 
https://docs.openstack.org/developer/mistral/developer/extending_yaql.html


[7] 
https://docs.openstack.org/developer/mistral/developer/asynchronous_actions.html


[8] 
https://github.com/openstack/requirements/blob/master/global-requirements.txt#L89



Yuriy Zveryanskyy


On 17.03.17 20:00, Taryma, Joanna wrote:

Thank you for this explanation, Clint.
Kubernetes gets more and more popular and it would be great if we could also 
take advantage of it. There are already projects in Openstack that have a 
mission that aligns with task scheduling, like Mistral, that could possibly 
support Kubernetes as a backend and this solution could be adopted by other 
projects. I’d rather think about enriching an existing common project with k8s 
support, than starting from scratch.
I think it’s a good idea to gather cross-project use cases and expectation to 
come up with a solution that will be adoptable by all the projects that desire 
to use while still being generic.

WRT Swift use case – I don’t see what was listed there as excluded from 
Kubernetes usage, as K8S supports also 1 time jobs [0].

Joanna

[0] https://kubernetes.io/docs/concepts/jobs/run-to-completion-finite-workloads/

On 3/16/17, 11:15 AM, "Clint Byrum"  wrote:

 Excerpts from Dean Troyer's message of 2017-03-16 12:19:36 -0500:
 > On Wed, Mar 15, 2017 at 5:28 PM, Taryma, Joanna 
 wrote:
 > > I’m reaching out to you to ask if you’re aware of any other use cases 
that
 > > could leverage such solution. If there’s a need for it in other 
project, it
 > > may be a good idea to implement this in some sort of a common place.
 >
 > Before implementing something new it would be a good exercise to have
 > a look at the other existing ways to run VMs and containers already in
 > the OpenStack ecosystem.  Service VMs are a thing, and projects like
 > Octavia are built around running inside the existing infrastructure.
 > There are a bunch of deployment projects that are also designed
 > specifically to run services with minimal base requirements.
 
 The console access bit Joanna mentioned is special in that it needs to be

 able to reach things like IPMI controllers. So that's not going to really
 be able to run on a service VM easily. It's totally doable (I think we
 could have achieved it with VTEP switches and OVN when I was playing
 with that), but I can understand why a container solution running on
 the same host as the conductor might be more desirable than service VMs.
 
 __

 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




Re: [openstack-dev] [kolla] Proposing duonghq for core

2017-03-21 Thread Michał Jastrzębski
And time is up:) Welcome Duong to core team!

On 16 March 2017 at 10:32, Dave Walker  wrote:
> +1, some great contributions.  Looking forward to having Duong on the team.
>
> --
> Kind Regards,
> Dave Walker
>
> On 15 March 2017 at 19:52, Vikram Hosakote (vhosakot) 
> wrote:
>>
>> +1  Great job Duong!
>>
>>
>>
>> Regards,
>>
>> Vikram Hosakote
>>
>> IRC:  vhosakot
>>
>>
>>
>> From: Michał Jastrzębski 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Date: Wednesday, March 08, 2017 at 11:21 PM
>> To: "OpenStack Development Mailing List (not for usage questions)"
>> 
>> Subject: [openstack-dev] [kolla] Proposing duonghq for core
>>
>>
>>
>> Hello,
>>
>>
>>
>> I'd like to start voting to include Duong (duonghq) in Kolla and
>>
>> Kolla-ansible core teams. Voting will be open for 2 weeks (ends at
>>
>> 21st of March).
>>
>>
>>
>> Consider this my +1 vote.
>>
>>
>>
>> Cheers,
>>
>> Michal
>>
>> __
>>
>> OpenStack Development Mailing List (not for usage questions)
>>
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] [keystone] [federated auth] [ocata] federated users with "admin" role not authorized for nova, cinder, neutron admin panels

2017-03-21 Thread Boris Bobrov
Hi,

Oh wow, for some reason my message was not sent to the list.

On 03/20/2017 09:03 PM, Evan Bollig PhD wrote:
> Hey Boris,
> 
> Any updates on this?
> 
> Cheers,
> -E
> --
> Evan F. Bollig, PhD
> Scientific Computing Consultant, Application Developer | Scientific
> Computing Solutions (SCS)
> Minnesota Supercomputing Institute | msi.umn.edu
> University of Minnesota | umn.edu
> boll0...@umn.edu | 612-624-1447 | Walter Lib Rm 556
> 
> 
> On Thu, Mar 9, 2017 at 4:08 PM, Evan Bollig PhD  wrote:
>> Hey Boris,
>>
>> Which mapping? Hope you were looking for the shibboleth user
>> mapping. Also, hope this is the right way to share the paste (first
>> time using this):
>> http://paste.openstack.org/show/3snCb31GRZfAuQxdRouy/

This is probably part of bug
https://bugs.launchpad.net/keystone/+bug/1589993 . I am not 100% sure
though. Could you please file a new bug report?

As for now, you could try doing auto-provisioning using new capabilities
from Ocata:
https://docs.openstack.org/developer/keystone/federation/mapping_combinations.html#auto-provisioning
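
As a rough illustration (the project and role names are made up), an Ocata
auto-provisioning mapping rule could look roughly like this, loaded with
something like "openstack mapping set --rules rules.json <mapping-id>":

    # rules.json expressed as a Python literal for readability.
    rules = [{
        "local": [
            {"user": {"name": "{0}"}},
            {"projects": [
                {"name": "federated-project",
                 "roles": [{"name": "admin"}]},
            ]},
        ],
        "remote": [{"type": "REMOTE_USER"}],
    }]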

>> Cheers,
>> -E
>> --
>> Evan F. Bollig, PhD
>> Scientific Computing Consultant, Application Developer | Scientific
>> Computing Solutions (SCS)
>> Minnesota Supercomputing Institute | msi.umn.edu
>> University of Minnesota | umn.edu
>> boll0...@umn.edu | 612-624-1447 | Walter Lib Rm 556
>>
>>
>> On Thu, Mar 9, 2017 at 7:50 AM, Boris Bobrov  wrote:
>>> Hi,
>>>
>>> Please paste your mapping to paste.openstack.org
>>>
>>> On 03/09/2017 02:07 AM, Evan Bollig PhD wrote:
 I am on Ocata with Shibboleth auth enabled. I noticed that Federated
 users with the admin role no longer have authorization to use the
 Admin** panels in Horizon related to Nova, Cinder and Neutron. All
 regular Identity and Project tabs function, and there are no problems
 with authorization for local admin users.

 -
 These Admin tabs work: Hypervisors, Host Aggregates, Flavors, Images,
 Defaults, Metadata, System Information

 These result in logout: Instances, Volumes, Networks, Routers, Floating IPs

 This is not present: Overview
 -

 The policies are vanilla from the CentOS/RDO openstack-dashboard RPMs:
 openstack-dashboard-11.0.0-1.el7.noarch
 python-django-horizon-11.0.0-1.el7.noarch
 python2-keystonemiddleware-4.14.0-1.el7.noarch
 python2-keystoneclient-3.10.0-1.el7.noarch
 openstack-keystone-11.0.0-1.el7.noarch
 python2-keystoneauth1-2.18.0-1.el7.noarch
 python-keystone-11.0.0-1.el7.noarch

 The errors I see in logs are similar to:

 ==> /var/log/horizon/horizon.log <==
 2017-03-07 18:24:54,961 13745 ERROR horizon.exceptions Unauthorized:
 Traceback (most recent call last):
   File 
 "/usr/share/openstack-dashboard/openstack_dashboard/dashboards/admin/floating_ips/views.py",
 line 53, in get_tenant_list
 tenants, has_more = api.keystone.tenant_list(request)
   File 
 "/usr/share/openstack-dashboard/openstack_dashboard/api/keystone.py",
 line 351, in tenant_list
 manager = VERSIONS.get_project_manager(request, admin=admin)
   File 
 "/usr/share/openstack-dashboard/openstack_dashboard/api/keystone.py",
 line 61, in get_project_manager
 manager = keystoneclient(*args, **kwargs).projects
   File 
 "/usr/share/openstack-dashboard/openstack_dashboard/api/keystone.py",
 line 170, in keystoneclient
 raise exceptions.NotAuthorized
 NotAuthorized

 Cheers,
 -E
 --
 Evan F. Bollig, PhD
 Scientific Computing Consultant, Application Developer | Scientific
 Computing Solutions (SCS)
 Minnesota Supercomputing Institute | msi.umn.edu
 University of Minnesota | umn.edu
 boll0...@umn.edu | 612-624-1447 | Walter Lib Rm 556

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [networking-sfc] About insertion modes and SFC Encapsulation

2017-03-21 Thread Duarte Cardoso, Igor
Hi Cathy,

I understand MPLS is a special protocol because:
- It allows Service Function Path identification (rfc7665) -> compatible with 
SFC Encapsulation
- It doesn't fully encapsulate the original frames -> incompatible with SFC 
Encapsulation
- Not necessary for this conversation, but also important to keep in mind: it 
can't transport additional metadata -> incompatible with SFC Encapsulation

So, I will start by discussing NSH specifically (being the fully-compatible SFC 
Encapsulation protocol). And so, the way I look at insertion modes (if split 
from correlations) is that in practice they become something I would describe 
as "SFC Proxy modes".

If a Service Function supports NSH, great, the NSH-encapsulated packets are 
fully exposed to the SFs and no "SFC Proxy mode" needs to be dictated (NSH is 
the mechanism itself). So, specifying L2 or L3 for insertion types would be of 
no meaning. At runtime and at the network forwarding level we might witness 
different ways of reaching the SFs, which could approximate L2 or L3 insertion 
types - but this isn't something to be modelled in networking-sfc's API but 
rather automatically controlled by the backend.

If a Service Function does not support NSH, we are in the presence of a legacy 
SF and so more information is needed to model how this SF expects packets 
(since there is no standard way). Consequently, specifying the insertion types, 
such as L2 or L3, is important. For the former, it means the SF machine has its 
interfaces running in promiscuous mode and is similar to a switch, for the 
latter it means the SF machine's interfaces are not in promiscuous mode and it 
is similar to a router.

With NSH, these insertion mode details are abstracted from the SFs.
The networking backend of neutron/networking-sfc will already know where each 
VM is and how to reach them and will  be responsible for making sure the NSH 
packet is delivered to the correct hop without needing additional information 
(from the networking-sfc API).

So, in summary, L2, L3, NSH and (in practice today @ networking-sfc) MPLS, are 
all mutually exclusive.
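
As a purely illustrative example (the attribute names are hypothetical, not a
merged API), a port-pair-group request under that single-attribute model could
look like:

    # Hypothetical body: L2 / L3 / MPLS / NSH collapse into one "mode" value.
    port_pair_group = {
        "port_pair_group": {
            "name": "ppg-firewall",
            "port_pairs": ["<port-pair-uuid>"],
            "port_pair_group_parameters": {
                "mode": "NSH",         # mutually exclusive with L2/L3/MPLS
                "tap-enabled": False,  # tap SFs only receive copies
            },
        },
    }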

Best regards,
Igor.

From: Cathy Zhang [mailto:cathy.h.zh...@huawei.com]
Sent: Monday, March 20, 2017 6:05 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [networking-sfc] About insertion modes and SFC 
Encapsulation

Hi Igor,

Moving the correlation from port-pair to port-pair-group makes sense. In the 
future I think we should add all new attributes for a SF to 
port-pair-group-param.

But I think L2/L3 is different from encap type NSH or MPLS. An L3 type SF can 
support either NSH or MPLS. I would suggest the following:

port-pair-group (port-pair-group-params):
insertion-mode:
- L2
- L3 (default)
   Correlation:
- MPLS
- NSH
tap-enabled:
- False (default)
- True

Thanks,
Cathy

From: Duarte Cardoso, Igor [mailto:igor.duarte.card...@intel.com]
Sent: Monday, March 20, 2017 8:02 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [networking-sfc] About insertion modes and SFC 
Encapsulation

Hi networking-sfc,

At the latest IRC meeting [1] it was agreed to split TAP from the possible 
insertion modes (initial spec version [2]).

I took the ARs to propose coexistence of insertion modes, correlation and (now) 
a new tap-enabled attribute, and send this email about possible directions.

Here are my thoughts, let me know yours:


1.   My expectation for future PP and PPG if TAP+insertion modes go ahead 
and nothing else changes (only relevant details outlined):

port-pair (service-function-params):
correlation:
- MPLS
- None (default)
port-pair-group (port-pair-group-params):
insertion-mode:
- L2
- L3 (default)
tap-enabled:
- False (default)
- True


2.   What I propose for future PP and PPG (only relevant details outlined):

port-pair (service-function-params):

port-pair-group (port-pair-group-params):
mode:
- L2
- L3 (default)
- MPLS
- NSH
tap-enabled:
- False (default)
- True

With what's proposed in 2.:
- every combination will be possible with no clashes and no validation required.
- port-pair-groups will always group "homogeneous" sets of port-pairs, making 
load-balacing and next-hop 

Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-21 Thread Jay Pipes

On 03/20/2017 09:24 PM, Qiming Teng wrote:

On Mon, Mar 20, 2017 at 03:35:18PM -0400, Jay Pipes wrote:

On 03/20/2017 03:08 PM, Adrian Otto wrote:

Team,

Stephen Watson has been working on an magnum feature to add magnum commands to 
the openstack client by implementing a plugin:

https://review.openstack.org/#/q/status:open+project:openstack/python-magnumclient+osc

In review of this work, a question has resurfaced, as to what the client 
command name should be for magnum related commands. Naturally, we’d like to 
have the name “cluster” but that word is already in use by Senlin.


Unfortunately, the Senlin API uses a whole bunch of generic terms as
top-level REST resources, including "cluster", "event", "action",
"profile", "policy", and "node". :( I've warned before that use of
these generic terms in OpenStack APIs without a central group
responsible for curating the API would lead to problems like this.
This is why, IMHO, we need the API working group to be ultimately
responsible for preventing this type of thing from happening.
Otherwise, there ends up being a whole bunch of duplication and same
terms being used for entirely different things.



Well, I believe the name and namespaces used by Senlin are very clean.


Note that above I referred to the Senlin *API*:

https://developer.openstack.org/api-ref/clustering/

The use of generic terms like "cluster", "node", "policy", "profile", 
"action", and "event" as *top-level resources in the REST API* are what 
I was warning about.



Please see the following outputs. All commands are contained in the
cluster namespace to avoid any conflicts with any other projects.


Right, but I was talking about the REST API.


On the other hand, is there any document stating that Magnum is about
providing clustering service?


What exactly is a clustering service?

I mean, Galera has a clustering service. Pacemaker has a clustering 
service. k8s has a clustering service. etcd has a clustering service. 
Zookeeper has a clustering service.


Senlin is an API that allows a user to group *virtual machines* together 
and expand or shrink that group of VMs. It's basically the old Heat 
autoscaling API done properly. There's a *lot* to like about Senlin's 
API and implementation.


However, it would have been more appropriate (and forward-looking) to 
call Senlin's namespace "instance group" or "server group" than the 
generic term "cluster".


> Why Magnum cares so much about the top level noun if it is not its business?


Because Magnum uses the term "cluster" as a top-level resource in its 
own REST API:


http://git.openstack.org/cgit/openstack/magnum/tree/magnum/api/controllers/v1/cluster.py

The generic term "cluster" that Magnum uses should really be called "coe 
group" or "container engine group" or "container service group" or 
something like that, to better indicate what exactly is being operated on.


Best,
-jay


$ openstack --help | grep cluster

  --os-clustering-api-version 

  cluster action list  List actions.
  cluster action show  Show detailed info about the specified action.
  cluster build info  Retrieve build information.
  cluster check  Check the cluster(s).
  cluster collect  Collect attributes across a cluster.
  cluster create  Create the cluster.
  cluster delete  Delete the cluster(s).
  cluster event list  List events.
  cluster event show  Describe the event.
  cluster expand  Scale out a cluster by the specified number of nodes.
  cluster list   List the user's clusters.
  cluster members add  Add specified nodes to cluster.
  cluster members del  Delete specified nodes from cluster.
  cluster members list  List nodes from cluster.
  cluster members replace  Replace the nodes in a cluster with
  specified nodes.
  cluster node check  Check the node(s).
  cluster node create  Create the node.
  cluster node delete  Delete the node(s).
  cluster node list  Show list of nodes.
  cluster node recover  Recover the node(s).
  cluster node show  Show detailed info about the specified node.
  cluster node update  Update the node.
  cluster policy attach  Attach policy to cluster.
  cluster policy binding list  List policies from cluster.
  cluster policy binding show  Show a specific policy that is bound to
  the specified cluster.
  cluster policy binding update  Update a policy's properties on a
  cluster.
  cluster policy create  Create a policy.
  cluster policy delete  Delete policy(s).
  cluster policy detach  Detach policy from cluster.
  cluster policy list  List policies that meet the criteria.
  cluster policy show  Show the policy details.
  cluster policy type list  List the available policy types.
  cluster policy type show  Get the details about a policy type.
  cluster policy update  Update a policy.
  cluster policy validate  Validate a policy.
  cluster profile create  Create a profile.
  cluster profile delete  Delete profile(s).
  cluster profile list  List profiles that meet the criteria.
  cluster 

[openstack-dev] [tripleo][squad:containers] Updates on the CI status for the container-based deployment

2017-03-21 Thread Flavio Percoco

Greetings,

The containers squad has worked a bit on the CI jobs lately and I'd like to
provide some updates. There are 2 containers related jobs now:

1) undercloud-containers
2) containers-oooq-nv

The first job tests a containerized undercloud deployment (experimental). The
second one tests a containerized overcloud deployment and it's currently
non-voting.

None of these jobs should be ignored, despite their current status. The first
one is hopefully going to be moved to the check pipeline as soon as one of the
missing requirements is covered (better logs collections). The second job is
actually quite stable now except for the few occasional timeouts we've hit. Once
the issue with the timeouts is solved, we'll be making this job voting.

Until the above happens, we kindly ask all the tripleo reviewers to not ignore
these jobs.

In addition to the above mentioned jobs, the team is also working on a job for
upgrades. This job will specifically test the upgrade from baremetal deployments
to containerized deployments. The details of this job are still being worked
out, though.

On behalf of the tripleo containers squad,
Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][barbican][castellan] Proposal to rename Castellan to oslo.keymanager

2017-03-21 Thread Flavio Percoco

On 16/03/17 12:43 -0400, Davanum Srinivas wrote:

+1 from me to bring castellan under Oslo governance with folks from
both oslo and Barbican as reviewers without a project rename. Let's
see if that helps get more adoption of castellan


This sounds like a great path forward! +1

Flavio


Thanks,
Dims

On Thu, Mar 16, 2017 at 12:25 PM, Farr, Kaitlin M.
 wrote:

This thread has generated quite the discussion, so I will try to
address a few points in this email, echoing a lot of what Dave said.

Clint originally explained what we are trying to solve very well. The hope was
that the rename would emphasize that Castellan is just a basic
interface that supports operations common between key managers
(the existing Barbican back end and other back ends that may exist
in the future), much like oslo.db supports the common operations
between PostgreSQL and MySQL. The thought was that renaming to have
oslo part of the name would help reinforce that it's just an interface,
rather than a standalone key manager. Right now, the only Castellan
back end that would work in DevStack is Barbican. There has been talk
in the past for creating other Castellan back ends (Vault or Tang), but
no one has committed to writing the code for those yet.
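
To make the "interface, not a key manager" point concrete, here is a rough
usage sketch (not taken from any project's actual code; the context setup is
simplified) -- the backend is chosen purely by configuration:

    from castellan import key_manager
    from oslo_context import context as oslo_context

    ctxt = oslo_context.RequestContext()  # in a real service this comes from the request
    manager = key_manager.API()           # KeyManager for whichever backend is configured

    key_id = manager.create_key(ctxt, algorithm='AES', length=256)
    secret = manager.get(ctxt, key_id)
    manager.delete(ctxt, key_id)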

The intended proposal was to rename the project, maintain the current
review team (which is only a handful of Barbican people), and bring on
a few Oslo folks, if any were available and interested, to give advice
about (and +2s for) OpenStack library best practices. However, perhaps
pulling it under oslo's umbrella without a rename is blessing it enough.

In response to Julien's proposal to make Castellan "the way you can do
key management in Python" -- it would be great if Castellan were that
abstract, but in practice it is pretty OpenStack-specific. Currently,
the Barbican team is great at working on key management projects
(including both Barbican and Castellan), but a lot of our focus now is
how we can maintain and grow integration with the rest of the OpenStack
projects, for which having the name and expertise of oslo would be a
great help.

Thanks,

Kaitlin
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [horizon] [keystone] [federated auth] [ocata] federated users with "admin" role not authorized for nova, cinder, neutron admin panels

2017-03-21 Thread Boris Bobrov
Hi,

Oh wow, for some reason my message was not sent to the list.

On 03/20/2017 09:03 PM, Evan Bollig PhD wrote:
> Hey Boris,
> 
> Any updates on this?
> 
> Cheers,
> -E
> --
> Evan F. Bollig, PhD
> Scientific Computing Consultant, Application Developer | Scientific
> Computing Solutions (SCS)
> Minnesota Supercomputing Institute | msi.umn.edu
> University of Minnesota | umn.edu
> boll0...@umn.edu | 612-624-1447 | Walter Lib Rm 556
> 
> 
> On Thu, Mar 9, 2017 at 4:08 PM, Evan Bollig PhD  wrote:
>> Hey Boris,
>>
>> Which mapping? Hope you were looking for the shibboleth user
>> mapping. Also, hope this is the right way to share the paste (first
>> time using this):
>> http://paste.openstack.org/show/3snCb31GRZfAuQxdRouy/

This is probably part of bug
https://bugs.launchpad.net/keystone/+bug/1589993 . I am not 100% sure
though. Could you please file a new bug report?

As for now, you could try doing auto-provisioning using new capabilities
from Ocata:
https://docs.openstack.org/developer/keystone/federation/mapping_combinations.html#auto-provisioning
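
A minimal mapping sketch for that (attribute names are illustrative -- match
them to what your Shibboleth setup actually asserts) would look roughly like:

    [
        {
            "local": [
                {
                    "user": {"name": "{0}"},
                    "projects": [
                        {
                            "name": "federated-admins",
                            "roles": [{"name": "admin"}]
                        }
                    ]
                }
            ],
            "remote": [{"type": "REMOTE_USER"}]
        }
    ]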

>> Cheers,
>> -E
>> --
>> Evan F. Bollig, PhD
>> Scientific Computing Consultant, Application Developer | Scientific
>> Computing Solutions (SCS)
>> Minnesota Supercomputing Institute | msi.umn.edu
>> University of Minnesota | umn.edu
>> boll0...@umn.edu | 612-624-1447 | Walter Lib Rm 556
>>
>>
>> On Thu, Mar 9, 2017 at 7:50 AM, Boris Bobrov  wrote:
>>> Hi,
>>>
>>> Please paste your mapping to paste.openstack.org
>>>
>>> On 03/09/2017 02:07 AM, Evan Bollig PhD wrote:
 I am on Ocata with Shibboleth auth enabled. I noticed that Federated
 users with the admin role no longer have authorization to use the
 Admin** panels in Horizon related to Nova, Cinder and Neutron. All
 regular Identity and Project tabs function, and there are no problems
 with authorization for local admin users.

 -
 These Admin tabs work: Hypervisors, Host Aggregates, Flavors, Images,
 Defaults, Metadata, System Information

 These result in logout: Instances, Volumes, Networks, Routers, Floating IPs

 This is not present: Overview
 -

 The policies are vanilla from the CentOS/RDO openstack-dashboard RPMs:
 openstack-dashboard-11.0.0-1.el7.noarch
 python-django-horizon-11.0.0-1.el7.noarch
 python2-keystonemiddleware-4.14.0-1.el7.noarch
 python2-keystoneclient-3.10.0-1.el7.noarch
 openstack-keystone-11.0.0-1.el7.noarch
 python2-keystoneauth1-2.18.0-1.el7.noarch
 python-keystone-11.0.0-1.el7.noarch

 The errors I see in logs are similar to:

 ==> /var/log/horizon/horizon.log <==
 2017-03-07 18:24:54,961 13745 ERROR horizon.exceptions Unauthorized:
 Traceback (most recent call last):
   File 
 "/usr/share/openstack-dashboard/openstack_dashboard/dashboards/admin/floating_ips/views.py",
 line 53, in get_tenant_list
 tenants, has_more = api.keystone.tenant_list(request)
   File 
 "/usr/share/openstack-dashboard/openstack_dashboard/api/keystone.py",
 line 351, in tenant_list
 manager = VERSIONS.get_project_manager(request, admin=admin)
   File 
 "/usr/share/openstack-dashboard/openstack_dashboard/api/keystone.py",
 line 61, in get_project_manager
 manager = keystoneclient(*args, **kwargs).projects
   File 
 "/usr/share/openstack-dashboard/openstack_dashboard/api/keystone.py",
 line 170, in keystoneclient
 raise exceptions.NotAuthorized
 NotAuthorized

 Cheers,
 -E
 --
 Evan F. Bollig, PhD
 Scientific Computing Consultant, Application Developer | Scientific
 Computing Solutions (SCS)
 Minnesota Supercomputing Institute | msi.umn.edu
 University of Minnesota | umn.edu
 boll0...@umn.edu | 612-624-1447 | Walter Lib Rm 556

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ceilometer][powervm] pollster interface changes

2017-03-21 Thread gordon chung
hi,

this is just a heads-up to the powervm team regarding potentially
(probably) breaking changes to the pollster interface in ceilometer.

in an effort to streamline the interface so that it's less 
verbose/hacky, we have a change up that will allow us to remove the need 
for passing around a cache object but will break the interface.[1]  

you'll probably need to adapt to the new interface if powervm support is 
still required.

let us know if you have any concerns.

[1] https://review.openstack.org/#/c/445976/

cheers,
-- 
gord
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Propose Attila Darazs and Gabriele Cerami for tripleo-ci core

2017-03-21 Thread Brent Eagles
On Wed, Mar 15, 2017 at 1:14 PM, John Trowbridge  wrote:

> Both Attila and Gabriele have been rockstars with the work to transition
> tripleo-ci to run via quickstart, and both have become extremely
> knowledgeable about how tripleo-ci works during that process. They are
> both very capable of providing thorough and thoughtful reviews of
> tripleo-ci patches.
>
> On top of this Attila has greatly increased the communication from the
> tripleo-ci squad as the liason, with weekly summary emails of our
> meetings to this list.
>
> - trown
>

​+1.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-21 Thread Anne Gentle
On Mon, Mar 20, 2017 at 4:38 PM, Dean Troyer  wrote:

> On Mon, Mar 20, 2017 at 4:36 PM, Adrian Otto 
> wrote:
> > So, to be clear, this would result in the following command for what we
> currently use “magnum cluster create” for:
> >
> > openstack coe cluster create …
> >
> > Is this right?
>
> Yes.
>
>
This looks good to me as an OSC user.

One other question, I honestly can't remember if the projects.yaml name
needs to match the service catalog name? Might be a good time to synch
everything if so. Right now, it's "Container Infrastructure Management
service" and could be Container Orchestration Engine Management service.

Naming, it's hard.
Anne


> dt
>
> --
>
> Dean Troyer
> dtro...@gmail.com
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

Read my blog: justwrite.click 
Subscribe to Docs|Code: docslikecode.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [magnum][osc] What name to use for magnum commands in osc?

2017-03-21 Thread Monty Taylor
On 03/20/2017 08:16 PM, Dean Troyer wrote:
> On Mon, Mar 20, 2017 at 5:52 PM, Monty Taylor  wrote:
>>> [Hongbin Lu]
>>> I think the style would be more consistent if all the resources are 
>>> qualified or un-qualified, not the mix of both.
> 
>> So - swift got here first, it wins, it gets container. The fine folks in
>> barbican, rather than calling a thing a container and then needing to
>> call it a secret-container - maybe could call their thing a vault or a
>> locker or a safe or a lockbox or an oubliette. (for instance)
> 
> Right, there _were_ only 5 projects when we started this and we
> re-used most of the original project-specific names.  Swift is a
> particularly fun one because both 'container' and 'object' are
> extremely useful in that context, but both are also extremely generic,
> and 'object container', well, what is that?
> 
>> I do not have any suggestions for things that actually return a resource
>> that are a single "linux container" - since swift called their thing a
>> container before docker was written and popularized the word to mean
>> something different. We might just get to be fun and different - sort of
>> like how Emacs calls cut/paste "kill" and "yank" (if you're not an Emacs
>> user, you "kill" text into the kill ring and then you "yank" from the
>> ring into the current document.
> 
> Monty, grab your Tardis and follow me around the Austin summit and
> listen to the opinions I get for doing things like this :)

Which Austin summit - haven't we been at two together now? ;)

>> OTOH, I think Dean has talked about more verbose terms and then aliases
>> for backwards compat. So maybe a swift container is always an
>> "object_container" - but because of history it gets to also be
>> unqualified "container" - but then we could have "object container" and
>> "secret container" and "linux container" ... similarly we could have
>> "server flavor" and "volume flavor" ... etc.
> 
> Yes, we do have plans to go back and qualify some of these resource
> names to be consistent, but the current names will probably never
> change, we'll just have the qualified names for those who prefer to
> use them.
> 
> Flavor is my favorite example of this as we add network flavor, and
> others.  It also illustrates the 'it isn't a namespace' as it will
> become 'server flavor' rather than 'compute flavor'.

Yes - that's an excellent example.

I think one of the most important things to realize is that our project
organization is much less interesting to our API consumers than it is to
developers and operators, _especially_ when some things move their
project home over time. (is it compute floating-ip? is it network
floating-ip?) And that a single project could have more than one thing
that is similar in different contexts (we have both a ComputeUsage and a
ServerUsage - with ServerUsage being the usage for a specific server
while ComputeUsage is the aggregate compute usage for a project)

Yay naming!

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [openstack-docs] [tripleo] Creating official Deployment guide for TripleO

2017-03-21 Thread Alexandra Settle
Thanks for volunteering everyone :)

Carlos – The Contributor Guide link that Emilien shared has a list of the tasks 
(step-by-step process) that need to be done. The only other thing you might 
need to elaborate on is potential for editing and ensuring your guide is 
up-to-date and fully deployable.

The other thing is, the project deployment guides tend to focus entirely on a 
full deployment (rather than an AIO, or upgrade content). Although I see no 
reason why this scope can’t expand – just letting you know what is currently 
added/being added to the section :)

Cheers,

Alex

From: Carlos Camacho Gonzalez 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Monday, March 20, 2017 at 3:21 PM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [openstack-docs] [tripleo] Creating official 
Deployment guide for TripleO

Hey,

I'd like to collaborate, please just let me know what I can do to help with
this task.

Might be a good idea to have in the blueprint a list of tasks?

Also, I think this can be called Deployment/Upgrade guide for TripleO.

Cheers,
Carlos.



On Mon, Mar 20, 2017 at 3:26 PM, Sanjay Upadhyay 
> wrote:


On Mon, Mar 20, 2017 at 5:31 PM, Emilien Macchi 
> wrote:
I proposed a blueprint to track the work done:

https://blueprints.launchpad.net/tripleo/+spec/tripleo-deploy-guide
Target: pike-3

Volunteers to work on it with me, please let me know.

Please add me (irc handle - saneax), I am interested on this.

regards
/sanjay


Thanks,

On Tue, Mar 14, 2017 at 7:00 AM, Alexandra Settle 
> wrote:
> Hey Emilien,
>
> You pretty much covered it all! Docs team is happy to provide guidance, but 
> in reality, it should be a fairly straight forward process.
>
> The Kolla team just completed their deploy-guide patches and were able to 
> help refine the process a bit further. Hopefully this should help the TripleO 
> team :)
>
> Reach out if you have any questions at all :)
>
> Thanks,
>
> Alex
>
> On 3/13/17, 10:32 PM, "Emilien Macchi" 
> > wrote:
>
> Team,
>
> [adding Alexandra, OpenStack Docs PTL]
>
> It seems like there is a common interest in pushing deployment guides
> for different OpenStack Deployment projects: OSA, Kolla.
> The landing page is here:
> https://docs.openstack.org/project-deploy-guide/newton/
>
> And one example:
> https://docs.openstack.org/project-deploy-guide/openstack-ansible/newton/
>
> I think this is pretty awesome and it would bring more visibility for
> TripleO project, and help our community to find TripleO documentation
> from a consistent place.
>
> The good news, is that openstack-docs team built a pretty solid
> workflow to make that happen:
> https://docs.openstack.org/contributor-guide/project-deploy-guide.html
> And we don't need to create new repos or do any crazy changes. It
> would probably be some refactoring and sphinx things.
>
> Alexandra, please add any words if I missed something obvious.
>
> Feedback from the team would be welcome here before we engage any work,
>
> Thanks!
> --
> Emilien Macchi
>
>



--
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mogan][valence] Valence integration

2017-03-21 Thread Zhenguo Niu
hi guys,

Here is a spec about Mogan and Valence integration[1], but before this
happens, I would like to know what information is needed when requesting to
compose a node through Valence. From the API doc[2], I can only find name
and description parameters, but that seems incorrect; I suppose it should at
least include cpus, ram, disk, or maybe cpuinfo. We need to align on this
before introducing a new flavor for both RSD nodes and generic nodes.


[1] https://review.openstack.org/#/c/441790/
[2]
https://github.com/openstack/valence/blob/master/api-ref/source/valence-api-v1-nodes.inc#request

-- 
Best Regards,
Zhenguo Niu
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] Newton: not able to login via public key

2017-03-21 Thread Kevin Benton
Nova API. Neutron just relays the metadata requests to Nova.
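
(Roughly, and as a simplified sketch rather than Neutron's actual code: the
proxy adds headers identifying the instance plus an HMAC signature derived
from metadata_proxy_shared_secret, which the Nova metadata API verifies
before answering.)

    import hashlib
    import hmac

    def proxy_headers(instance_id, tenant_id, shared_secret):
        # Headers the metadata proxy adds before forwarding the request.
        signature = hmac.new(shared_secret.encode('utf-8'),
                             instance_id.encode('utf-8'),
                             hashlib.sha256).hexdigest()
        return {
            'X-Instance-ID': instance_id,
            'X-Tenant-ID': tenant_id,
            'X-Instance-ID-Signature': signature,
        }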

On Sun, Mar 5, 2017 at 8:53 AM, Amit Uniyal  wrote:

> Hi Kevin,
>
>
> Thanks for response.
>
> Can you tell which service or which configuration(file) is responsible for
> adding metadata to the instance, like adding keys in a new instance?
>
>
> Thanks and Regards
> Amit Uniyal
>
> On Sun, Mar 5, 2017 at 8:18 PM, Kevin Benton  wrote:
>
>> The metadata agent in Neutron is just a proxy that relays metadata
>> requests to Nova after adding in HTTP headers that identify the
>> instance.
>>
>> On Sun, Mar 5, 2017 at 5:44 AM, Amit Uniyal  wrote:
>> > Hi all,
>> >
>> > I have reconfigured everything, working fine but not sure where what
>> went
>> > wrong last time,
>> >
>> > Can anyone explain, how this works, like metadata agent is neutron
>> service,
>> > is it responsible for adding key inside new instance? It should be a
>> job of
>> > nova service.
>> >
>> >
>> > Thanks and Regards
>> > Amit Uniyal
>> >
>> > On Wed, Mar 1, 2017 at 11:03 PM, Amit Uniyal 
>> wrote:
>> >>
>> >> Hi all,
>> >>
>> >> I have installed a newton openstack, not able to login into machines
>> via
>> >> private keys.
>> >>
>> >> I followed this guide
>> >> https://docs.openstack.org/newton/install-guide-ubuntu/
>> >>
>> >> Configure the metadata agent¶
>> >>
>> >> The metadata agent provides configuration information such as
>> credentials
>> >> to instances.
>> >>
>> >> Edit the /etc/neutron/metadata_agent.ini file and complete the
>> following
>> >> actions:
>> >>
>> >> In the [DEFAULT] section, configure the metadata host and shared
>> secret:
>> >>
>> >> [DEFAULT]
>> >> ...
>> >> nova_metadata_ip = controller
>> >> metadata_proxy_shared_secret = METADATA_SECRET
>> >>
>> >> Replace METADATA_SECRET with a suitable secret for the metadata proxy.
>> >>
>> >>
>> >>
>> >>
>> >> I think region name should also be included here, I tried
>> >>
>> >> RegionName = RegionOne
>> >>
>> >> and then restarted even whole controller node (as it doesn't work by
>> only
>> >> restarting neutron meta-agent service)
>> >>
>> >>
>> >> Another thing is on checking neutron agent-list status, I am not
>> getting
>> >> any availiability zone for mata-agent is it fine?
>> >>
>> >>
>> >> Regards
>> >> Amit
>> >>
>> >>
>> >>
>> >>
>> >>
>> >
>> >
>> > ___
>> > Mailing list: http://lists.openstack.org/cgi
>> -bin/mailman/listinfo/openstack
>> > Post to : openst...@lists.openstack.org
>> > Unsubscribe : http://lists.openstack.org/cgi
>> -bin/mailman/listinfo/openstack
>> >
>>
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] - adjusting Monday IRC meeting time and drivers meeting time

2017-03-21 Thread Kevin Benton
Hi everyone,

The recent DST switch has caused several conflicts for the Monday IRC
meeting time and the drivers meeting time.

I am going to adjust the Monday meeting time to 1 hour earlier[1] and the
drivers meeting time to 6 hours earlier (1600 UTC).

The Monday meeting will now be on openstack-meeting-4 to work around other
conflicts!

https://review.openstack.org/447961

Cheers,
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer]Can't find meter anywhere with ceilometer post REST API

2017-03-21 Thread Hui Xiang
gordon,

  Thanks much, it works after adding the section below to pipeline.yaml:

- name: collectd_source
  interval: 60
  meters:
  - "load.load"
  - "memory.memory"
  - "interface.if_dropped"
  - "interface.if_errors"

- name: collectd_sink
  transformers:
  publishers:
  - notifier://


   Another question: what is the difference between the ceilometer backend
database and the database:// configured in publishers? If I didn't set
'backend=xx' in ceilometer.conf but did set
'connection=mongodb://user:password@ip_address/ceilometer', it seems it
connected to mongodb rather than any of the other backend options (sqlalchemy,
mysql, etc.). Should the database:// configured in publishers actually be
'mysql://user:password@ip_address/ceilometer' ?


Thanks.
Hui.



On Tue, Mar 21, 2017 at 11:40 AM, Hui Xiang  wrote:

> Thanks gordon for your info.
>
> The reason why not using gnocchi in mitaka is that we are using
> collectd-ceilometer-plugin[1] to posting samples to ceilometer through
> ceilometer-api, after mitaka yes we will all move to gnocchi.
>
>
> """
> when posting samples to ceilometer-api, the data goes through
> pipeline before being stored. therefore, you need notification-agent
> enabled AND you need to make sure the pipeline.yaml accepts the meter.
> """
> As the samples posted doesn't have event_type, so I guess you mean I don't
> need to edit the event_pipeline.yaml, but need to edit the pipeline.yaml to
> accepts the meter. Could you kindly check whether below simple example make
> sense to accept the meter?  Does the source name need to match the source
> field in the sample or it can be defined as anyone.
>
> > [{"counter_name": "interface.if_errors",
> >   "user_id": "5457b977c25e4498a31a3c1c78829631",
> >   "resource_id": "localhost-ovs-system",
> >   "timestamp": "2017-03-17T02:26:46",
> >   "resource_metadata": {},
> >   "source": *"5b1525a8eb2d4739a83b296682aed023:collectd*",
> >   "counter_unit": "Errors/s",
> >   "counter_volume": 0.0,
> >   "project_id": "5b1525a8eb2d4739a83b296682aed023",
> >   "message_id": "2b4ce294-0ab9-11e7-8058-026ea687824d",
> >   "counter_type": "delta"},
> >
>
>
> sources:
> - name: *meter_source*
>   interval: 60
>   meters:
>   - "interface.if_errors"
>   sinks:
>   - meter_sink
>
> sinks:
> - name: meter_sink
>   transformers:
>   publishers:
>   - notifier://
>
>
> Does the source name need to matching the source field in the sample or it
> can be defined as any.
>
> [1]. https://github.com/openstack/collectd-ceilometer-plugin
>
>
> Thanks.
> Hui.
>
>
> On Tue, Mar 21, 2017 at 4:21 AM, gordon chung  wrote:
>
>>
>>
>> On 18/03/17 04:54 AM, Hui Xiang wrote:
>> > Hi folks,
>> >
>> >   I am trying to post samples from third part software to ceilometer via
>> > the REST API as below with Mitaka version. I can see ceilometer-api has
>> > received this post, and seems forwarded to ceilometer notification agent
>> > through RMQ.
>> >
>>
>> first and most importantly, the ceilometer-api is deprecated and not
>> supported upstream anymore. please use gnocchi for proper time series
>> storage (or whatever storage solution you feel comfortable with)
>>
>> >
>> > 2. LOG
>> > 56:17] "*POST /v2/meters/interface.if_packets HTTP/1.1*" 201 -
>> > 2017-03-17 16:56:17.378 52955 DEBUG oslo_messaging._drivers.amqpdriver
>> > [req-1c4ea84d-ea53-4518-81ea-6c0bffa9745d
>> > 5457b977c25e4498a31a3c1c78829631 5b1525a8eb2d4739a83b296682aed023 - -
>> -]
>> > CAST unique_id: 64a6bae3bbcc4b7dab4dceb13cf7f81b NOTIFY exchange
>> > 'ceilometer' topic 'notifications.sample' _send
>> > /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/
>> amqpdriver.py:438
>> > 2017-03-17 16:56:17.382 52955 INFO werkzeug
>> > [req-1c4ea84d-ea53-4518-81ea-6c0bffa9745d
>> > 5457b977c25e4498a31a3c1c78829631 5b1525a8eb2d4739a83b296682aed023 - -
>> -]
>> > 192.168.0.3 - - [17/Mar/2017
>> >
>> >
>> > 3. REST API return result
>> > [{"counter_name": "interface.if_errors",
>> >   "user_id": "5457b977c25e4498a31a3c1c78829631",
>> >   "resource_id": "localhost-ovs-system",
>> >   "timestamp": "2017-03-17T02:26:46",
>> >   "resource_metadata": {},
>> >   "source": "5b1525a8eb2d4739a83b296682aed023:collectd",
>> >   "counter_unit": "Errors/s",
>> >   "counter_volume": 0.0,
>> >   "project_id": "5b1525a8eb2d4739a83b296682aed023",
>> >   "message_id": "2b4ce294-0ab9-11e7-8058-026ea687824d",
>> >   "counter_type": "delta"},
>> >
>>
>> when posting samples to ceilometer-api, the data goes through pipeline
>> before being stored. therefore, you need notification-agent enabled AND
>> you need to make sure the pipeline.yaml accepts the meter.
>>
>> --
>> gord
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> 

Re: [openstack-dev] [vitrage] vitrage Resource API

2017-03-21 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Hi dwj,

The resource API is something that we planned on implementing, but eventually 
didn’t do that.

Best Regards,
Ifat.

From: "dong.wenj...@zte.com.cn" 
Date: Tuesday, 21 March 2017 at 02:45
To: "trinath.soman...@nxp.com" 
Cc: "openstack-dev@lists.openstack.org" , 
"Afek, Ifat (Nokia - IL/Kfar Sava)" 
Subject: Re: [openstack-dev] [vitrage] vitrage Resource API


No, in the implemention of these APIs.

see 
https://github.com/openstack/vitrage/blob/master/vitrage/api/controllers/v1/resource.py#L47

Original Mail
Sender:  <trinath.soman...@nxp.com>;
To:  <openstack-dev@lists.openstack.org>; <openstack-dev@lists.openstack.org>;
Date: 2017/03/21 00:50
Subject: Re: [openstack-dev] [vitrage] vitrage Resource API


In tests?
Get Outlook for iOS


From: dong.wenj...@zte.com.cn <dong.wenj...@zte.com.cn>
Sent: Monday, March 20, 2017 2:49:57 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [vitrage] vitrage Resource API


Hi All,

I noticed that the APIs of `resource list` and `resource show`  were mocked.

Is  there any backgroud for the mock or the API is not necessary?

BR,

dwj
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron][stadium] - abandoned patches and bug triage

2017-03-21 Thread Kevin Benton
Hi everyone,

You may have noticed I ran the auto abandon script to clean up some of our
back-log.
In addition to abandoning patches, I adjusted it to add a tag to the bug
and mark it New again so we can triage it.

If you had a patch abandoned, feel free to restore it right away if you are
working on it. This is just an effort to clear out stale work to help
prioritize active reviews.

Please help triage the bugs that were flipped back to the new status with
this query:
https://bugs.launchpad.net/neutron/+bugs?field.tag=timeout-abandon

Cheers,
Kevin Benton
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALU] Re: [vitrage] vitrage Resource API

2017-03-21 Thread dong.wenjuan
Hi Alexey,

Thanks for letting me know.

I'll continue to implement the API if nobody else is working on it. :)

BR,

dwj

Original Mail
Sender:  <alexey.w...@nokia.com>
To:  <openstack-dev@lists.openstack.org> <trinath.soman...@nxp.com>
Date: 2017/03/21 14:17
Subject: Re: [openstack-dev] [ALU] Re:  [vitrage] vitrage Resource API

Hi Dong,

At the beginning that API was a mock, and then when the Vitrage started to
work, we haven’t implemented the API and thus we can’t show an API in the
client that doesn’t work.

In order to implement that API it also needs to support Multi tenancy.

Alexey

From: dong.wenj...@zte.com.cn [mailto:dong.wenj...@zte.com.cn]
 Sent: Tuesday, March 21, 2017 2:46 AM
 To: trinath.soman...@nxp.com
 Cc: openstack-dev@lists.openstack.org
 Subject: [ALU] Re: [openstack-dev] [vitrage] vitrage Resource API

No, in the implemention of these APIs.

see
https://github.com/openstack/vitrage/blob/master/vitrage/api/controllers/v1/resource.py#L47

Original Mail
Sender:  <trinath.soman...@nxp.com>
To:  <openstack-dev@lists.openstack.org> <openstack-dev@lists.openstack.org>
Date: 2017/03/21 00:50
Subject: Re: [openstack-dev] [vitrage] vitrage Resource API

Outlook for iOS

From: dong.wenj...@zte.com.cn <dong.wenj...@zte.com.cn>
 Sent: Monday, March 20, 2017 2:49:57 PM
 To: openstack-dev@lists.openstack.org
 Subject: [openstack-dev] [vitrage] vitrage Resource API

Hi All,

I noticed that the APIs of `resource list` and `resource show`  were mocked.

Is  there any backgroud for the mock or the API is not necessary?

BR,

dwj
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [i18n] [nova] understanding log domain change - https://review.openstack.org/#/c/439500

2017-03-21 Thread Akihiro Motoki
2017-03-17 5:50 GMT+09:00 Ihar Hrachyshka :
> On Thu, Mar 16, 2017 at 12:00 PM, Doug Hellmann  wrote:
>> Please keep translations for exceptions and other user-facing messages,
>> for now.
>
> To clarify, that means LOG.exception(_LE(...)) should also be cleaned
> up? The only things that we should leave are messages that eventually
> get to users (by means of stdout for clients, or thru API payload, for
> services).

Yes, all logging markers including LOG.exception(_LE(...)) will be cleaned up.
Only user-visible messages returned through the API should be marked as
translatable.
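
Concretely (illustrative snippets only, not from any particular patch), the
cleanup looks like this:

    from oslo_log import log as logging
    from nova import exception
    from nova.i18n import _    # user-facing translation marker stays
    # from nova.i18n import _LE, _LW   # log markers like these go away

    LOG = logging.getLogger(__name__)

    def detach(volume_id):
        try:
            do_detach(volume_id)   # hypothetical helper, for illustration
        except Exception:
            # Before: LOG.exception(_LE("Failed to detach volume %s"), volume_id)
            # After (log messages lose the translation marker):
            LOG.exception("Failed to detach volume %s", volume_id)
            # User-visible API messages keep the plain _() marker:
            raise exception.InvalidVolume(
                reason=_("volume %s is still attached") % volume_id)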

>
> Ihar
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ceilometer]Can't find meter anywhere with ceilometer post REST API

2017-03-21 Thread Yurii Prokulevych
The pipeline config looks good. Could you please enable debug/verbose in
ceilometer.conf and check ceilometer/collector.log?

---
Yurii

On Tue, 2017-03-21 at 11:40 +0800, Hui Xiang wrote:
> Thanks gordon for your info.
> 
> The reason why not using gnocchi in mitaka is that we are using
> collectd-ceilometer-plugin[1] to posting samples to ceilometer
> through ceilometer-api, after mitaka yes we will all move to gnocchi.
> 
> 
> """
> when posting samples to ceilometer-api, the data goes through
> pipeline before being stored. therefore, you need notification-agent
> enabled AND you need to make sure the pipeline.yaml accepts the
> meter.
> """
> As the samples posted doesn't have event_type, so I guess you mean I
> don't need to edit the event_pipeline.yaml, but need to edit the
> pipeline.yaml to accepts the meter. Could you kindly check whether
> below simple example make sense to accept the meter?  Does the source
> name need to match the source field in the sample or it can be
> defined as anyone. 
> 
> > [{"counter_name": "interface.if_errors",
> >   "user_id": "5457b977c25e4498a31a3c1c78829631",
> >   "resource_id": "localhost-ovs-system",
> >   "timestamp": "2017-03-17T02:26:46",
> >   "resource_metadata": {},
> >   "source": "5b1525a8eb2d4739a83b296682aed023:collectd",
> >   "counter_unit": "Errors/s",
> >   "counter_volume": 0.0,
> >   "project_id": "5b1525a8eb2d4739a83b296682aed023",
> >   "message_id": "2b4ce294-0ab9-11e7-8058-026ea687824d",
> >   "counter_type": "delta"},
> >
> 
> 
> sources:
>     - name: meter_source
>       interval: 60
>       meters:
>           - "interface.if_errors"
>       sinks:
>           - meter_sink
> 
> sinks:
>     - name: meter_sink
>       transformers:
>       publishers:
>           - notifier://
> 
> 
> Does the source name need to matching the source field in the sample
> or it can be defined as any.
> 
> [1]. https://github.com/openstack/collectd-ceilometer-plugin
> 
> 
> Thanks.
> Hui.
> 
> 
> On Tue, Mar 21, 2017 at 4:21 AM, gordon chung  wrote:
> > 
> > 
> > On 18/03/17 04:54 AM, Hui Xiang wrote:
> > > Hi folks,
> > >
> > >   I am trying to post samples from third part software to
> > ceilometer via
> > > the REST API as below with Mitaka version. I can see ceilometer-
> > api has
> > > received this post, and seems forwarded to ceilometer
> > notification agent
> > > through RMQ.
> > >
> > 
> > first and most importantly, the ceilometer-api is deprecated and
> > not
> > supported upstream anymore. please use gnocchi for proper time
> > series
> > storage (or whatever storage solution you feel comfortable with)
> > 
> > >
> > > 2. LOG
> > > 56:17] "*POST /v2/meters/interface.if_packets HTTP/1.1*" 201 -
> > > 2017-03-17 16:56:17.378 52955 DEBUG
> > oslo_messaging._drivers.amqpdriver
> > > [req-1c4ea84d-ea53-4518-81ea-6c0bffa9745d
> > > 5457b977c25e4498a31a3c1c78829631 5b1525a8eb2d4739a83b296682aed023
> > - - -]
> > > CAST unique_id: 64a6bae3bbcc4b7dab4dceb13cf7f81b NOTIFY exchange
> > > 'ceilometer' topic 'notifications.sample' _send
> > > /usr/lib/python2.7/site-
> > packages/oslo_messaging/_drivers/amqpdriver.py:438
> > > 2017-03-17 16:56:17.382 52955 INFO werkzeug
> > > [req-1c4ea84d-ea53-4518-81ea-6c0bffa9745d
> > > 5457b977c25e4498a31a3c1c78829631 5b1525a8eb2d4739a83b296682aed023
> > - - -]
> > > 192.168.0.3 - - [17/Mar/2017
> > >
> > >
> > > 3. REST API return result
> > > [{"counter_name": "interface.if_errors",
> > >   "user_id": "5457b977c25e4498a31a3c1c78829631",
> > >   "resource_id": "localhost-ovs-system",
> > >   "timestamp": "2017-03-17T02:26:46",
> > >   "resource_metadata": {},
> > >   "source": "5b1525a8eb2d4739a83b296682aed023:collectd",
> > >   "counter_unit": "Errors/s",
> > >   "counter_volume": 0.0,
> > >   "project_id": "5b1525a8eb2d4739a83b296682aed023",
> > >   "message_id": "2b4ce294-0ab9-11e7-8058-026ea687824d",
> > >   "counter_type": "delta"},
> > >
> > 
> > when posting samples to ceilometer-api, the data goes through
> > pipeline
> > before being stored. therefore, you need notification-agent enabled
> > AND
> > you need to make sure the pipeline.yaml accepts the meter.
> > 
> > --
> > gord
> > 
> > ___
> > ___
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsu
> > bscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > 
> 
> _
> _
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubs
> cribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [mogan] Nominating liusheng for Mogan core

2017-03-21 Thread hao wang
+1

Thanks LiuSheng for the good reviews and continued contributions.

2017-03-20 16:41 GMT+08:00 Rui Chen :
> +1
>
> Liusheng is a responsible reviewer and keep good reviewing quality in Mogan.
>
> Thank you working hard for Mogan, Liusheng.
>
> 2017-03-20 16:19 GMT+08:00 Zhenguo Niu :
>>
>> Hi team,
>>
>> I would like to nominate liusheng to Mogan core. Liusheng has been a
>> significant code contributor since the project's creation providing high
>> quality reviews.
>>
>> Please feel free to respond in public or private your support or any
>> concerns.
>>
>>
>> Thanks,
>> Zhenguo
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ALU] Re: [vitrage] vitrage Resource API

2017-03-21 Thread Weyl, Alexey (Nokia - IL/Kfar Sava)
Hi Dong,

At the beginning that API was a mock, and then when the Vitrage started to 
work, we haven’t implemented the API and thus we can’t show an API in the 
client that doesn’t work.

In order to implement that API it also needs to support Multi tenancy.

Alexey

From: dong.wenj...@zte.com.cn [mailto:dong.wenj...@zte.com.cn]
Sent: Tuesday, March 21, 2017 2:46 AM
To: trinath.soman...@nxp.com
Cc: openstack-dev@lists.openstack.org
Subject: [ALU] Re: [openstack-dev] [vitrage] vitrage Resource API


No, in the implemention of these APIs.

see 
https://github.com/openstack/vitrage/blob/master/vitrage/api/controllers/v1/resource.py#L47

Sender:  <trinath.soman...@nxp.com>;
To:  <openstack-dev@lists.openstack.org>; <openstack-dev@lists.openstack.org>;
Date: 2017/03/21 00:50
Subject: Re: [openstack-dev] [vitrage] vitrage Resource API


Outlook for iOS


From: dong.wenj...@zte.com.cn 
<dong.wenj...@zte.com.cn>
Sent: Monday, March 20, 2017 2:49:57 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [vitrage] vitrage Resource API


Hi All,

I noticed that the APIs of `resource list` and `resource show`  were mocked.

Is  there any backgroud for the mock or the API is not necessary?

BR,

dwj
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev