Re: [openstack-dev] Swift3 Plugin Development

2017-06-08 Thread Niels de Vos
On Thu, Jun 08, 2017 at 10:52:53PM -0600, Pete Zaitcev wrote:
> On Thu, 8 Jun 2017 17:06:02 +0530
> Venkata R Edara  wrote:
> 
> > we are looking for an S3 plugin with ACLs so that we can integrate Gluster
> > with it.
> 
> Did you look into porting Ceph RGW on top of Gluster?

This is one of the longer-term options that we have under consideration.
I am very interested in your reasons for suggesting it; care to elaborate a
little?

Thanks,
Niels




Re: [openstack-dev] Swift3 Plugin Development

2017-06-08 Thread Pete Zaitcev
On Thu, 8 Jun 2017 17:06:02 +0530
Venkata R Edara  wrote:

> we are looking for an S3 plugin with ACLs so that we can integrate Gluster
> with it.

Did you look into porting Ceph RGW on top of Gluster?

-- P



Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-08 Thread Joshua Harlow

Julien Danjou wrote:

On Thu, Jun 08 2017, Mike Bayer wrote:


So far I've seen a proposal of etcd3 as a replacement for memcached in
keystone, and a new dogpile connector was added to oslo.cache to handle
referring to etcd3 as a cache backend.  This is a really simplistic / minimal
kind of use case for a key-store.


etcd3 is not meant to be a cache. Synchronizing cached values using the
Raft protocol sounds like overkill. A cluster of memcached servers would
probably be a better fit.


Agreed from me,

My thinking is that people should look over https://raft.github.io/ or 
http://thesecretlivesofdata.com/raft/ (or both or others...)


At least read up on roughly how it works before trying to put it everywhere
(the same can and should be said for any new service), because it's not a
solution for everything.


The other big thing to know is how writes happen in this kind of system:
they all go through a single node (the leader), which sends the same data
to the followers and waits for a quorum of them to respond before
committing.


Anyways, with great power comes great responsibility...

IMHO just be careful and understand the technology before using it for 
things it may not really be good for. Oh ya and perhaps someone will 
want to finally take more advantage of 
https://docs.openstack.org/developer/taskflow/jobs.html#overview (which 
uses the same concepts etcd exposes to make highly available workflows 
that can survive node failure).





But, keeping in mind I don't know anything about etcd3 other than "it's another
key-store", it's the only database used by Kubernetes as a whole, which
suggests it's doing a better job than Redis in terms of "durable".


Not sure about that. Redis has many more data structures than etcd, which
means it can be faster/more efficient than etcd. But it does not have Raft
or a synchronisation protocol, and its clustering is rather poor in
comparison to etcd's.


So I wouldn't be surprised if new / existing openstack applications
express some gravitational pull towards using it as their own
datastore as well. I'll be trying to hang onto the etcd3 track as much
as possible so that if/when that happens I still have a job :).


Sounds like a recipe for disaster. :)




Re: [openstack-dev] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-06-08 Thread Matt Riedemann

On 6/8/2017 6:17 PM, John Griffith wrote:
The attachment_update call could do this for you; it might need some
slight tweaks because I tried to make sure that we weren't treating
attachment records as things that lived forever and were modified
dynamically.  This particular case seems like a decent fit though: issue
the call, Cinder queries the backend to get any updated connection info
and sends it back.  I'd leave it to Nova to figure out if said info has
been updated or not.  Just iterate through the attachment_ids in the BDM
and update/refresh each one, maybe?


Yeah, although we have to keep in mind that's a new API we're not even 
using yet for volume attach, so anything I'm thinking about here has to 
handle old-style attachments (old-style as in, you know, today). Plus we 
don't have a migration plan yet for the old style attachments to the new 
style. At the Pike PTG we said we'd work on that in Queens.


I definitely want to use new shiny things at some point, we just have to 
handle the old crufty things too.


--

Thanks,

Matt



Re: [openstack-dev] [Openstack-operators] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-06-08 Thread Matt Riedemann

On 6/8/2017 1:39 PM, melanie witt wrote:

On Thu, 8 Jun 2017 08:58:20 -0500, Matt Riedemann wrote:
Nova stores the output of the Cinder os-initialize_connection info API 
in the Nova block_device_mappings table, and uses that later for 
making volume connections.


This data can get out of whack or need to be refreshed, like if your 
ceph server IP changes, or you need to recycle some secret uuid for 
your ceph cluster.


I think the only ways to do this on the nova side today are via volume 
detach/re-attach, reboot, migrations, etc - all of which, except live 
migration, are disruptive to the running guest.


I believe the only way to work around this currently is by doing a 'nova 
shelve' followed by a 'nova unshelve'. That will end up querying the 
connection_info from Cinder and updating the block device mapping record 
for the instance. Maybe detach/re-attach would work too but I can't 
remember trying it.


Shelve has its own fun set of problems, like the fact that it doesn't 
terminate the connection to the volume backend on shelve. Maybe that's 
not a problem for Ceph, I don't know. You do potentially end up on another 
host though, and it's a full delete and spawn of the guest on that other 
host. Definitely disruptive.




I've kicked around the idea of adding some sort of admin API interface 
for refreshing the BDM.connection_info on-demand if needed by an 
operator. Does anyone see value in this? Are operators doing stuff 
like this already, but maybe via direct DB updates?


We could have something in the compute API which calls down to the 
compute for an instance and has it refresh the connection_info from 
Cinder and updates the BDM table in the nova DB. It could be an admin 
action API, or part of the os-server-external-events API, like what we 
have for the 'network-changed' event sent from Neutron which nova uses 
to refresh the network info cache.
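
For comparison, the existing os-server-external-events call looks roughly
like the sketch below; a 'volume-connection-changed' event is purely
hypothetical here (it does not exist today) and would follow the same shape
as Neutron's 'network-changed'. Endpoint, token and UUIDs are placeholders:

    # Sketch of an os-server-external-events request. The event name
    # 'volume-connection-changed' is hypothetical; 'network-changed' is what
    # Neutron sends today.
    import requests

    body = {
        "events": [
            {
                "name": "volume-connection-changed",  # hypothetical name
                "server_uuid": "9fbf9f4f-5cbe-4d4b-a5a7-0f0f4e2107f1",
                "tag": "f6a3b5f1-8d6b-4c1a-9e6f-3a1f2b4c5d6e",  # e.g. volume id
            }
        ]
    }
    resp = requests.post(
        "http://nova-api.example.com/v2.1/os-server-external-events",
        json=body,
        headers={"X-Auth-Token": "<token>"},
    )
    print(resp.status_code)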


Other ideas or feedback here?


We've discussed this a few times before and we were thinking it might be 
best to handle this transparently and just do a connection_info refresh 
+ record update inline with the request flows that will end up reading 
connection_info from the block device mapping records. That way, 
operators won't have to intervene when connection_info changes.


The thing that sucks about this is that we'd be refreshing something that 
rarely changes on every volume-related operation on the instance. That 
seems like a lot of overhead to me (nova/cinder API interactions, Cinder 
interactions with the volume backend, nova-compute round trips to conductor 
and the DB to update the BDM table, etc).




At least in the case of Ceph, as long as a guest is running, it will 
continue to work fine if the monitor IPs or secrets change because it 
will continue to use its existing connection to the Ceph cluster. Things 
go wrong when an instance action such as resize, stop/start, or reboot 
is done because when the instance is taken offline and being brought 
back up, the stale connection_info is read from the block_device_mapping 
table and injected into the instance, and so it loses contact with the 
cluster. If we query Cinder and update the block_device_mapping record 
at the beginning of those actions, the instance will get the new 
connection_info.


-melanie





--

Thanks,

Matt



Re: [openstack-dev] [ironic][nova] Goodbye^W See you later

2017-06-08 Thread Zhenguo Niu
Thanks for everything jroll and best wishes on your new endeavors!

On Thu, Jun 8, 2017 at 8:45 PM, Jim Rollenhagen 
wrote:

> Hey friends,
>
> I've been mostly missing for the past six weeks while looking for a new
> job, so maybe you've forgotten me already, maybe not. I'm happy to tell you
> I've found one that I think is a great opportunity for me. But, I'm sad to
> tell you that it's totally outside of the OpenStack community.
>
> The last 3.5 years have been amazing. I'm extremely grateful that I've
> been able to work in this community - I've learned so much and met so many
> awesome people. I'm going to miss the insane(ly awesome) level of
> collaboration, the summits, the PTGs, and even some of the bikeshedding.
> We've built amazing things together, and I'm sure y'all will continue to do
> so without me.
>
> I'll still be lurking in #openstack-dev and #openstack-ironic for a while,
> if people need me to drop a -2 or dictate old knowledge or whatever, feel
> free to ping me. Or if you just want to chat. :)
>
> <3 jroll
>
> P.S. obviously my core permissions should be dropped now :P
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Best Regards,
Zhenguo Niu


Re: [openstack-dev] [ironic][nova] Goodbye^W See you later

2017-06-08 Thread ChangBo Guo
Jim,
Though I wasn't involved in Ironic too much, I know you're one of the best
Ironicers.
Good luck!
2017-06-09 9:33 GMT+08:00 phuon...@vn.fujitsu.com :

> Hi Jim, good luck to you. Remember the time we discussed right after you
> woke up in the morning. :)
>
>
>
> Phuong.
>
>
>
> *From:* Jim Rollenhagen [mailto:j...@jimrollenhagen.com]
> *Sent:* Thursday, June 08, 2017 7:45 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* [openstack-dev] [ironic][nova] Goodbye^W See you later
>
>
>
> Hey friends,
>
>
>
> I've been mostly missing for the past six weeks while looking for a new
> job, so maybe you've forgotten me already, maybe not. I'm happy to tell you
> I've found one that I think is a great opportunity for me. But, I'm sad to
> tell you that it's totally outside of the OpenStack community.
>
>
>
> The last 3.5 years have been amazing. I'm extremely grateful that I've
> been able to work in this community - I've learned so much and met so many
> awesome people. I'm going to miss the insane(ly awesome) level of
> collaboration, the summits, the PTGs, and even some of the bikeshedding.
> We've built amazing things together, and I'm sure y'all will continue to do
> so without me.
>
>
>
> I'll still be lurking in #openstack-dev and #openstack-ironic for a while,
> if people need me to drop a -2 or dictate old knowledge or whatever, feel
> free to ping me. Or if you just want to chat. :)
>
>
>
> <3 jroll
>
>
>
> P.S. obviously my core permissions should be dropped now :P
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
ChangBo Guo(gcb)


Re: [openstack-dev] [ironic][nova] Goodbye^W See you later

2017-06-08 Thread phuon...@vn.fujitsu.com
Hi Jim, good luck to you. Remember the time we discussed right after you woke 
up in the morning. :)

Phuong.

From: Jim Rollenhagen [mailto:j...@jimrollenhagen.com]
Sent: Thursday, June 08, 2017 7:45 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [ironic][nova] Goodbye^W See you later

Hey friends,

I've been mostly missing for the past six weeks while looking for a new job, so 
maybe you've forgotten me already, maybe not. I'm happy to tell you I've found 
one that I think is a great opportunity for me. But, I'm sad to tell you that 
it's totally outside of the OpenStack community.

The last 3.5 years have been amazing. I'm extremely grateful that I've been 
able to work in this community - I've learned so much and met so many awesome 
people. I'm going to miss the insane(ly awesome) level of collaboration, the 
summits, the PTGs, and even some of the bikeshedding. We've built amazing 
things together, and I'm sure y'all will continue to do so without me.

I'll still be lurking in #openstack-dev and #openstack-ironic for a while, if 
people need me to drop a -2 or dictate old knowledge or whatever, feel free to 
ping me. Or if you just want to chat. :)

<3 jroll

P.S. obviously my core permissions should be dropped now :P


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Fox, Kevin M
Flavio: I think you're right. k8s configmaps and confd are doing very similar 
things. The one thing confd seems to add is dynamic templates on the host side. 
This can still be accomplished in k8s with a sidecar that watches for config 
changes and has the templating engine in it, plus an emptyDir, or statically 
with an init container and an emptyDir (kolla-kubernetes does the latter); see 
the sketch below.
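
As a rough illustration of what such a sidecar does, here is a minimal
confd-like loop in Python: watch a key in etcd, re-render a template, and
write the result to a path shared with the main container (e.g. an emptyDir
mount). It assumes the python-etcd3 client; the key, template and output
path are placeholders:

    # Minimal confd-like sidecar sketch: watch an etcd key and re-render a
    # config file on change. Key, template and output path are placeholders.
    import string
    import etcd3

    TEMPLATE = string.Template("[DEFAULT]\ndebug = $debug\n")
    OUTPUT = "/shared-config/service.conf"  # e.g. an emptyDir volume mount

    client = etcd3.client(host="etcd.example.com", port=2379)

    def render(value):
        with open(OUTPUT, "w") as f:
            f.write(TEMPLATE.substitute(debug=value))

    value, _ = client.get("/config/service/debug")
    render((value or b"false").decode())

    events, cancel = client.watch("/config/service/debug")
    for event in events:  # blocks until the key changes again
        render(event.value.decode())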

But, for k8s, I actually prefer a fully atomic container config model, where 
you do a rolling upgrade any time you want to make a configmap change. k8s 
gives you the plumbing to do that, and you can more easily roll 
forward/backward, which gives you versioning too.

So, I think you're right. etcd/confd is more suited to the non-k8s deployments.

Thanks,
Kevin

From: Flavio Percoco [fla...@redhat.com]
Sent: Thursday, June 08, 2017 3:28 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] 
[helm] Configuration management with etcd / confd



On Thu, Jun 8, 2017, 19:14 Doug Hellmann 
> wrote:
Excerpts from Flavio Percoco's message of 2017-06-08 18:27:51 +0200:
> On 08/06/17 18:23 +0200, Flavio Percoco wrote:
> >On 07/06/17 12:04 +0200, Bogdan Dobrelya wrote:
> >>On 06.06.2017 18:08, Emilien Macchi wrote:
> >>>Another benefit is that confd will generate a configuration file when
> >>>the application will start. So if etcd is down *after* the app
> >>>startup, it shouldn't break the service restart if we don't ask confd
> >>>to re-generate the config. It's good for operators who were concerned
> >>>about the fact the infrastructure would rely on etcd. In that case, we
> >>>would only need etcd at the initial deployment (and during lifecycle
> >>>actions like upgrades, etc).
> >>>
> >>>The downside is that in the case of containers, they would still have
> >>>a configuration file within the container, and the whole goal of this
> >>>feature was to externalize configuration data and stop having
> >>>configuration files.
> >>
> >>It doesn't look a strict requirement. Those configs may (and should) be
> >>bind-mounted into containers, as hostpath volumes. Or, am I missing
> >>something what *does* make embedded configs a strict requirement?..
> >
> >mmh, one thing I liked about this effort was possibility of stop 
> >bind-mounting
> >config files into the containers. I'd rather find a way to not need any
> >bindmount and have the services get their configs themselves.
>
> Probably sent too early!
>
> If we're not talking about OpenStack containers running in a COE, I guess this
> is fine. For k8s based deployments, I think I'd prefer having installers
> creating configmaps directly and use that. The reason is that depending on 
> files
> that are in the host is not ideal for these scenarios. I hate this idea 
> because
> it makes deployments inconsistent and I don't want that.
>
> Flavio
>

I'm not sure I understand how a configmap is any different from what is
proposed with confd in terms of deployment-specific data being added to
a container before it launches. Can you elaborate on that?


Unless I'm missing something, to use confd with an OpenStack deployment on k8s, 
we'll have to do something like this:

* Deploy confd on every node where we may want to run a pod (basically every 
node)
* Configure it to download all configs from etcd locally (we won't be able to 
download just some of them because we don't know what services may run in 
specific nodes. Except, perhaps, in the case of compute nodes and some other 
similar nodes)
* Enable hostpath volumes (iirc it's disabled by default) so that we can mount 
these files in the pod
* Run the pods and mount the files assuming the files are there.

All of the above is needed because  confd syncs files locally from etcd. Having 
a centralized place to manage these configs allows for controlling the 
deployment better. For example, if a configmap doesn't exist, then stop 
everything.

Not trying to be negative but rather explain why I think confd may not work 
well for the k8s based deployments. I think it's a good fit for the rest of the 
deployments.

Am I missing something? Am I overcomplicating things?

Flavio


Re: [openstack-dev] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-06-08 Thread John Griffith
On Thu, Jun 8, 2017 at 7:58 AM, Matt Riedemann  wrote:

> Nova stores the output of the Cinder os-initialize_connection info API in
> the Nova block_device_mappings table, and uses that later for making volume
> connections.
>
> This data can get out of whack or need to be refreshed, like if your ceph
> server IP changes, or you need to recycle some secret uuid for your ceph
> cluster.
>
> I think the only ways to do this on the nova side today are via volume
> detach/re-attach, reboot, migrations, etc - all of which, except live
> migration, are disruptive to the running guest.
>
> I've kicked around the idea of adding some sort of admin API interface for
> refreshing the BDM.connection_info on-demand if needed by an operator. Does
> anyone see value in this? Are operators doing stuff like this already, but
> maybe via direct DB updates?
>
> We could have something in the compute API which calls down to the compute
> for an instance and has it refresh the connection_info from Cinder and
> updates the BDM table in the nova DB. It could be an admin action API, or
> part of the os-server-external-events API, like what we have for the
> 'network-changed' event sent from Neutron which nova uses to refresh the
> network info cache.
>
> Other ideas or feedback here?
>
> --
>
> Thanks,
>
> Matt
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
The attachment_update call could do this for you; it might need some slight
tweaks because I tried to make sure that we weren't treating attachment
records as things that lived forever and were modified dynamically.  This
particular case seems like a decent fit though: issue the call, Cinder
queries the backend to get any updated connection info and sends it back.
I'd leave it to Nova to figure out if said info has been updated or not.
Just iterate through the attachment_ids in the BDM and update/refresh each
one, maybe?
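
To make that concrete, a rough sketch of the loop being described: walk the
instance's BDMs and ask Cinder to refresh each attachment. The helper and
client names here are hypothetical illustrations, not actual Nova or
python-cinderclient code:

    # Hypothetical sketch: refresh connection_info for every attachment of an
    # instance. 'cinder' is assumed to expose attachments.update(id, connector);
    # 'bdms' is an iterable of block-device-mapping records.
    def refresh_volume_attachments(cinder, bdms, connector):
        for bdm in bdms:
            if not bdm.get('attachment_id'):
                continue  # old-style attachment, nothing to refresh this way
            # Cinder re-queries the backend and returns fresh connection info.
            attachment = cinder.attachments.update(bdm['attachment_id'], connector)
            bdm['connection_info'] = attachment['connection_info']
            # A real implementation would persist the updated BDM record here.
        return bdms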


[openstack-dev] [keystone] Specification Freeze

2017-06-08 Thread Lance Bragstad
Happy Stanley-Cup-Playoff-Game-5 Day,

Sending out a quick reminder that tomorrow is specification freeze. I'll be
making a final push for specifications that target Pike work tomorrow. I'd
also like to merge others to backlog as we see fit.

By EOD tomorrow, I'll go through and put procedural -2's on the remaining
specs.

Thanks,

Lance


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Flavio Percoco
On Thu, Jun 8, 2017, 19:14 Doug Hellmann  wrote:

> Excerpts from Flavio Percoco's message of 2017-06-08 18:27:51 +0200:
> > On 08/06/17 18:23 +0200, Flavio Percoco wrote:
> > >On 07/06/17 12:04 +0200, Bogdan Dobrelya wrote:
> > >>On 06.06.2017 18:08, Emilien Macchi wrote:
> > >>>Another benefit is that confd will generate a configuration file when
> > >>>the application will start. So if etcd is down *after* the app
> > >>>startup, it shouldn't break the service restart if we don't ask confd
> > >>>to re-generate the config. It's good for operators who were concerned
> > >>>about the fact the infrastructure would rely on etcd. In that case, we
> > >>>would only need etcd at the initial deployment (and during lifecycle
> > >>>actions like upgrades, etc).
> > >>>
> > >>>The downside is that in the case of containers, they would still have
> > >>>a configuration file within the container, and the whole goal of this
> > >>>feature was to externalize configuration data and stop having
> > >>>configuration files.
> > >>
> > >>It doesn't look a strict requirement. Those configs may (and should) be
> > >>bind-mounted into containers, as hostpath volumes. Or, am I missing
> > >>something what *does* make embedded configs a strict requirement?..
> > >
> > >mmh, one thing I liked about this effort was possibility of stop
> bind-mounting
> > >config files into the containers. I'd rather find a way to not need any
> > >bindmount and have the services get their configs themselves.
> >
> > Probably sent too early!
> >
> > If we're not talking about OpenStack containers running in a COE, I
> guess this
> > is fine. For k8s based deployments, I think I'd prefer having installers
> > creating configmaps directly and use that. The reason is that depending
> on files
> > that are in the host is not ideal for these scenarios. I hate this idea
> because
> > it makes deployments inconsistent and I don't want that.
> >
> > Flavio
> >
>
> I'm not sure I understand how a configmap is any different from what is
> proposed with confd in terms of deployment-specific data being added to
> a container before it launches. Can you elaborate on that?
>
>
Unless I'm missing something, to use confd with an OpenStack deployment on
k8s, we'll have to do something like this:

* Deploy confd on every node where we may want to run a pod (basically every
node)
* Configure it to download all configs from etcd locally (we won't be able
to download just some of them because we don't know what services may run
in specific nodes. Except, perhaps, in the case of compute nodes and some
other similar nodes)
* Enable hostpath volumes (iirc it's disabled by default) so that we can
mount these files in the pod
* Run the pods and mount the files assuming the files are there.

All of the above is needed because  confd syncs files locally from etcd.
Having a centralized place to manage these configs allows for controlling
the deployment better. For example, if a configmap doesn't exist, then stop
everything.

Not trying to be negative but rather explain why I think confd may not work
well for the k8s based deployments. I think it's a good fit for the rest of
the deployments.

Am I missing something? Am I overcomplicating things?

Flavio


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Flavio Percoco
On Thu, Jun 8, 2017, 19:51 Steven Dake (stdake)  wrote:

> Flavio,
>
> At least for the kubernetes variant of kolla, bindmounting will always be
> used, as this is fundamentally how configmaps operate.  In order to maintain
> maximum flexibility and compatibility with kubernetes, I am not keen to
> try a non-configmap way of doing things.
>

I was referring to bindmounts of files that were created on the host and
reside on the host. While configmaps are bindmounts, they don't really live
on the host until the pod/container is created.

Flavio


> Regards
> -steve
>
> -Original Message-
> From: Flavio Percoco 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Date: Thursday, June 8, 2017 at 9:23 AM
> To: "OpenStack Development Mailing List (not for usage questions)" <
> openstack-dev@lists.openstack.org>
> Subject: Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo]
> [kolla] [helm] Configuration management with etcd / confd
>
> On 07/06/17 12:04 +0200, Bogdan Dobrelya wrote:
> >On 06.06.2017 18:08, Emilien Macchi wrote:
> >> Another benefit is that confd will generate a configuration file
> when
> >> the application will start. So if etcd is down *after* the app
> >> startup, it shouldn't break the service restart if we don't ask
> confd
> >> to re-generate the config. It's good for operators who were
> concerned
> >> about the fact the infrastructure would rely on etcd. In that case,
> we
> >> would only need etcd at the initial deployment (and during lifecycle
> >> actions like upgrades, etc).
> >>
> >> The downside is that in the case of containers, they would still
> have
> >> a configuration file within the container, and the whole goal of
> this
> >> feature was to externalize configuration data and stop having
> >> configuration files.
> >
> >It doesn't look a strict requirement. Those configs may (and should)
> be
> >bind-mounted into containers, as hostpath volumes. Or, am I missing
> >something what *does* make embedded configs a strict requirement?..
>
> mmh, one thing I liked about this effort was possibility of stop
> bind-mounting
> config files into the containers. I'd rather find a way to not need any
> bindmount and have the services get their configs themselves.
>
> Flavio
>
>
> --
> @flaper87
> Flavio Percoco
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [all][api] POST /api-wg/news

2017-06-08 Thread Edward Leafe
On Jun 8, 2017, at 11:59 AM, Chris Dent  wrote:
> 
> The naming issue related to an optional field we want to add to the 
> microversion discovery document. Some projects wish to be able to signal that 
> they are intending to raise the minimum microversion at a point in the 
> future. The name for the next minimum version is fairly clear: 
> "next_min_version". What's less clear is the name which can be used for the 
> field that states the earliest date at which this will happen. This cannot be 
> a definitive date because different deployments will release the new code at 
> different times. We can only say "it will be no earlier than this time".
> 
> Naming this field has proven difficult. The original was "not_before", but 
> that has no association with "min_version" so is potentially confusing. 
> However, people who know how to parse the doc will know what it means so it 
> may not matter. As always, naming is hard, so we seek input from the 
> community to help us find a suitable name. This is something we don't want to 
> ever have to change, so it needs to be correct from the start. Candidates 
> include:
> 
> * not_before
> * not_raise_min_before
> * min_raise_not_before
> * earliest_min_raise_date
> * min_version_eol_date
> * next_min_version_effective_date
> 
> If you have an opinion on any of these, or a better suggestion, please let 
> us know, either on the review or in response to this message.


Even better: please respond on the SurveyMonkey page to rank your preferences!

https://www.surveymonkey.com/r/L7XWNG5 



-- Ed Leafe







Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-08 Thread Lance Bragstad
On Thu, Jun 8, 2017 at 3:21 PM, Emilien Macchi  wrote:

> On Thu, Jun 8, 2017 at 7:34 PM, Lance Bragstad 
> wrote:
> > After digging into etcd a bit, one place this might be help deployer
> > experience would be the handling of fernet keys for token encryption in
> > keystone. Currently, all keys used to encrypt and decrypt tokens are
> kept on
> > disk for each keystone node in the deployment. While simple, it requires
> > operators to perform rotation on a single node and then push, or sync,
> the
> > new key set to the rest of the nodes. This must be done in lock step in
> > order to prevent early token invalidation and inconsistent token
> responses.
>
> This is what we discussed a few months ago :-)
>
> http://lists.openstack.org/pipermail/openstack-dev/2017-March/113943.html
>
> I'm glad it's coming back ;-)
>

Yep! I've proposed a pretty basic spec to backlog [0] in an effort to
capture the discussion. I've also noted the point Kevin brought up about
authorization in etcd (thanks, Kevin!)

If someone feels compelled to take that and run with it, they are more than
welcome to.

[0] https://review.openstack.org/#/c/472385/
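
For the record, a minimal sketch of the etcd-backed key store idea quoted
just below (reading fernet keys from etcd instead of from a directory on
disk). The key prefix is a placeholder, it assumes the python-etcd3 client,
and keystone has no such backend today:

    # Hypothetical sketch: load fernet keys from etcd instead of from
    # /etc/keystone/fernet-keys. Prefix is a placeholder.
    import etcd3

    PREFIX = "/keystone/fernet-keys/"

    def load_fernet_keys(client):
        # Returns {index: key}, mirroring the index-named files used on disk.
        keys = {}
        for value, metadata in client.get_prefix(PREFIX):
            index = int(metadata.key.decode().rsplit("/", 1)[-1])
            keys[index] = value.decode()
        return keys

    client = etcd3.client(host="127.0.0.1", port=2379)
    print(load_fernet_keys(client))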


> > An alternative would be to keep the keys in etcd and make the fernet bits
> > pluggable so that it's possible to read keys from disk or etcd (pending
> > configuration). The advantage would be that operators could initiate key
> > rotations from any keystone node in the deployment (or using etcd
> directly)
> > and not have to worry about distributing the new key set. Since etcd
> > associates metadata to the key-value pairs, we might be able to simplify
> the
> > rotation strategy as well.
> >
> > On Thu, Jun 8, 2017 at 11:37 AM, Mike Bayer  wrote:
> >>
> >>
> >>
> >> On 06/08/2017 12:47 AM, Joshua Harlow wrote:
> >>>
> >>> So just out of curiosity, but do people really even know what etcd is
> >>> good for? I am thinking that there should be some guidance from folks
> in the
> >>> community as to where etcd should be used and where it shouldn't
> (otherwise
> >>> we just all end up in a mess).
> >>
> >>
> >> So far I've seen a proposal of etcd3 as a replacement for memcached in
> >> keystone, and a new dogpile connector was added to oslo.cache to handle
> >> referring to etcd3 as a cache backend.  This is a really simplistic /
> >> minimal kind of use case for a key-store.
> >>
> >> But, keeping in mind I don't know anything about etcd3 other than "it's
> >> another key-store", it's the only database used by Kubernetes as a
> whole,
> >> which suggests it's doing a better job than Redis in terms of "durable".
> >> So I wouldn't be surprised if new / existing openstack applications
> express
> >> some gravitational pull towards using it as their own datastore as well.
> >> I'll be trying to hang onto the etcd3 track as much as possible so that
> >> if/when that happens I still have a job :).
> >>
> >>
> >>
> >>
> >>>
> >>> Perhaps a good idea to actually give examples of how it should be used,
> >>> how it shouldn't be used, what it offers, what it doesn't... Or at
> least
> >>> provide links for people to read up on this.
> >>>
> >>> Thoughts?
> >>>
> >>> Davanum Srinivas wrote:
> 
>  One clarification: Since https://pypi.python.org/pypi/etcd3gw just
>  uses the HTTP API (/v3alpha) it will work under both eventlet and
>  non-eventlet environments.
> 
>  Thanks,
>  Dims
> 
> 
>  On Wed, Jun 7, 2017 at 6:47 AM, Davanum Srinivas
>  wrote:
> >
> > Team,
> >
> > Here's the update to the base services resolution from the TC:
> > https://governance.openstack.org/tc/reference/base-services.html
> >
> > First request is to Distros, Packagers, Deployers, anyone who
> > installs/configures OpenStack:
> > Please make sure you have latest etcd 3.x available in your
> > environment for Services to use, Fedora already does, we need help in
> > making sure all distros and architectures are covered.
> >
> > Any project who want to use etcd v3 API via grpc, please use:
> > https://pypi.python.org/pypi/etcd3 (works only for non-eventlet
> > services)
> >
> > Those that depend on eventlet, please use the etcd3 v3alpha HTTP API
> > using:
> > https://pypi.python.org/pypi/etcd3gw
> >
> > If you use tooz, there are 2 driver choices for you:
> > https://github.com/openstack/tooz/blob/master/setup.cfg#L29
> > https://github.com/openstack/tooz/blob/master/setup.cfg#L30
> >
> > If you use oslo.cache, there is a driver for you:
> > https://github.com/openstack/oslo.cache/blob/master/setup.cfg#L33
> >
> > Devstack installs etcd3 by default and points cinder to it:
> > http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/etcd3
> >
> > http://git.openstack.org/cgit/openstack-dev/devstack/tree/
> lib/cinder#n356
> >
> > Review in progress for 

Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Fox, Kevin M
Because tools to manipulate JSON and/or YAML are very common.

Tools to manipulate a pseudo-INI file format that isn't standards-compliant 
are not. :/
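
For example, tweaking a value in a YAML config programmatically is trivial
with standard tooling; a small sketch (the file name and option are
placeholders):

    # Sketch: read a YAML config, change one value, write it back.
    import yaml

    with open("keystone.yaml") as f:
        conf = yaml.safe_load(f)

    conf["DEFAULT"]["debug"] = False

    with open("keystone.yaml", "w") as f:
        yaml.safe_dump(conf, f, default_flow_style=False)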

Thanks,
Kevin

From: Doug Hellmann [d...@doughellmann.com]
Sent: Thursday, June 08, 2017 1:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] 
[helm] Configuration management with etcd / confd

> On Jun 8, 2017, at 4:29 PM, Fox, Kevin M  wrote:
>
> That is possible. But, a yaml/json driver might still be good, regardless of 
> the mechanism used to transfer the file.
>
> So the driver abstraction still might be useful.

Why would it be useful to have oslo.config read files in more than one format?




[openstack-dev] [tripleo] pike m2 has been released

2017-06-08 Thread Emilien Macchi
We have a new release of TripleO, pike milestone 2.
All bugs targeted on Pike-2 have been moved into Pike-3.

I'll take care of moving the blueprints into Pike-3.

Some numbers:
Blueprints: 3 Unknown, 18 Not started, 14 Started, 3 Slow progress, 11
Good progress, 9 Needs Code Review, 7 Implemented
Bugs: 197 Fix Released

Thanks everyone!
-- 
Emilien Macchi



Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Doug Hellmann

> On Jun 8, 2017, at 4:29 PM, Fox, Kevin M  wrote:
> 
> That is possible. But, a yaml/json driver might still be good, regardless of 
> the mechanism used to transfer the file.
> 
> So the driver abstraction still might be useful.

Why would it be useful to have oslo.config read files in more than one format?




Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-08 Thread Davanum Srinivas
On Thu, Jun 8, 2017 at 4:07 PM, Drew Fisher  wrote:
>
>
> On 6/7/17 4:47 AM, Davanum Srinivas wrote:
>> Team,
>>
>> Here's the update to the base services resolution from the TC:
>> https://governance.openstack.org/tc/reference/base-services.html
>>
>> First request is to Distros, Packagers, Deployers, anyone who
>> installs/configures OpenStack:
>> Please make sure you have latest etcd 3.x available in your
>> environment for Services to use, Fedora already does, we need help in
>> making sure all distros and architectures are covered.
>
> As a Solaris OpenStack dev, I have a question about this change.
>
> If Solaris were to *only* run the nova-compute service, and leave the
> rest of the OpenStack services to Linux, is etcd 3.x required on the
> compute node for Pike+ ?

Yes, this should be fine.

> Thanks!
>
> -Drew
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims



Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-08 Thread Fox, Kevin M
See the footer at the bottom of this email.

From: jimi olugboyega [jimiolugboy...@gmail.com]
Sent: Thursday, June 08, 2017 1:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] etcd3 as base service - update

Hello all,

I am wondering how I can unsubscribe from this mailing list.

Regards,
Olujimi Olugboyega.

On Wed, Jun 7, 2017 at 3:47 AM, Davanum Srinivas 
> wrote:
Team,

Here's the update to the base services resolution from the TC:
https://governance.openstack.org/tc/reference/base-services.html

First request is to Distros, Packagers, Deployers, anyone who
installs/configures OpenStack:
Please make sure you have latest etcd 3.x available in your
environment for Services to use, Fedora already does, we need help in
making sure all distros and architectures are covered.

Any project who want to use etcd v3 API via grpc, please use:
https://pypi.python.org/pypi/etcd3 (works only for non-eventlet services)

Those that depend on eventlet, please use the etcd3 v3alpha HTTP API using:
https://pypi.python.org/pypi/etcd3gw

If you use tooz, there are 2 driver choices for you:
https://github.com/openstack/tooz/blob/master/setup.cfg#L29
https://github.com/openstack/tooz/blob/master/setup.cfg#L30

If you use oslo.cache, there is a driver for you:
https://github.com/openstack/oslo.cache/blob/master/setup.cfg#L33

Devstack installs etcd3 by default and points cinder to it:
http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/etcd3
http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/cinder#n356

Review in progress for keystone to use etcd3 for caching:
https://review.openstack.org/#/c/469621/

Doug is working on proposal(s) for oslo.config to store some
configuration in etcd3:
https://review.openstack.org/#/c/454897/

So, feel free to turn on / test with etcd3 and report issues.
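
As a quick illustration of the tooz drivers listed above, a distributed lock
through the etcd3 backend might look something like the sketch below. The
URL scheme and member id are assumptions based on the driver names in
setup.cfg; check the tooz docs for the exact form:

    # Sketch: a distributed lock via tooz, assuming an 'etcd3://' backend URL.
    # Member id and lock name are placeholders; an etcd must be reachable.
    from tooz import coordination

    coordinator = coordination.get_coordinator("etcd3://127.0.0.1:2379",
                                                b"worker-1")
    coordinator.start()

    lock = coordinator.get_lock(b"my-critical-section")
    with lock:
        # Only one member across the cluster runs this block at a time.
        pass

    coordinator.stop()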

Thanks,
Dims

--
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Fox, Kevin M
There are two issues conflated here maybe?

The first is a mechanism to use oslo.config to dump out example settings that 
could be loaded into a reference ConfigMap or etcd or something. I think there 
is a PS up for that.

The other is a way to get the data back into oslo.config.

etcd is one way.
Using a ConfigMap to ship a file into a container to be read by oslo.config 
with a JSON/YAML/INI file driver is another.

Thanks,
Kevin

From: Emilien Macchi [emil...@redhat.com]
Sent: Thursday, June 08, 2017 1:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] 
[helm] Configuration management with etcd / confd

On Thu, Jun 8, 2017 at 8:49 PM, Doug Hellmann  wrote:
> Excerpts from Steven Dake (stdake)'s message of 2017-06-08 17:51:48 +:
>> Doug,
>>
>> In short, a configmap takes a bunch of config files, bundles them in a 
>> kubernetes object called a configmap, and then ships them to etcd.  When a 
>> pod is launched, the pod mounts the configmaps using tmpfs and the raw 
>> config files are available for use by the openstack services.
>
> That sounds like what confd does. Something puts data into one of
> several possible databases. confd takes it out and writes it to
> file(s) when the container starts. The app in the container reads
> the file(s).
>
> It sounds like configmaps would work well, too, it just doesn't
> sound like a fundamentally different solution.

Sorry for my lack of knowledge in ConfigMap but I'm trying to see how
we could bring pieces together.
Doug and I are currently investigating how oslo.config can be useful
to generate the parameters loaded by the application at startup,
without having to manage config with Puppet or Ansible.

If I understand correctly (and if not, please correct me, and maybe
propose something), we could use oslo.config to generate a portion of
ConfigMap (that can be imported in another ConfigMap iiuc) where we
would have parameters for one app.

Example with Keystone:

  apiVersion: v1
  kind: ConfigMap
  metadata:
name: keystone-config
namespace: DEFAULT
  data:
debug: true
rpc_backend: rabbit
... (parameters generated by oslo.config, and data fed by installers)

So iiuc we would give this file to k8s when deploying pods. Parameters
values would be automatically pushed into etcd, and used when
generating the configuration. Am I correct? (I need to understand if
we need to manually manage etcd key/values).

In that case, what would deployment tools (like Kolla, TripleO, etc.) expect
OpenStack to provide (tooling in oslo.config to generate a ConfigMap, etc.)?
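
To make the question concrete, the kind of tooling being asked about might
look roughly like the sketch below: take option names and values (collected
from oslo.config or an installer) and emit a ConfigMap manifest. This is
purely hypothetical, not an existing oslo.config feature, and every name in
it is a placeholder:

    # Hypothetical sketch of "options -> ConfigMap" tooling.
    import yaml

    def to_configmap(name, namespace, options):
        return {
            "apiVersion": "v1",
            "kind": "ConfigMap",
            "metadata": {"name": name, "namespace": namespace},
            # ConfigMap data values must be strings.
            "data": {key: str(value) for key, value in options.items()},
        }

    options = {"debug": True, "rpc_backend": "rabbit"}
    print(yaml.safe_dump(to_configmap("keystone-config", "openstack", options)))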

Thanks for your help,

> Doug
>
>>
>> Operating on configmaps is much simpler and safer than using a different 
>> backing database for the configuration data.
>>
>> Hope the information helps.
>>
>> Ping me in #openstack-kolla if you have more questions.
>>
>> Regards
>> -steve
>>
>> -Original Message-
>> From: Doug Hellmann 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>> 
>> Date: Thursday, June 8, 2017 at 10:12 AM
>> To: openstack-dev 
>> Subject: Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] 
>>[helm] Configuration management with etcd / confd
>>
>> Excerpts from Flavio Percoco's message of 2017-06-08 18:27:51 +0200:
>> > On 08/06/17 18:23 +0200, Flavio Percoco wrote:
>> > >On 07/06/17 12:04 +0200, Bogdan Dobrelya wrote:
>> > >>On 06.06.2017 18:08, Emilien Macchi wrote:
>> > >>>Another benefit is that confd will generate a configuration file 
>> when
>> > >>>the application will start. So if etcd is down *after* the app
>> > >>>startup, it shouldn't break the service restart if we don't ask 
>> confd
>> > >>>to re-generate the config. It's good for operators who were 
>> concerned
>> > >>>about the fact the infrastructure would rely on etcd. In that case, 
>> we
>> > >>>would only need etcd at the initial deployment (and during lifecycle
>> > >>>actions like upgrades, etc).
>> > >>>
>> > >>>The downside is that in the case of containers, they would still 
>> have
>> > >>>a configuration file within the container, and the whole goal of 
>> this
>> > >>>feature was to externalize configuration data and stop having
>> > >>>configuration files.
>> > >>
>> > >>It doesn't look a strict requirement. Those configs may (and should) 
>> be
>> > >>bind-mounted into containers, as hostpath volumes. Or, am I missing
>> > >>something what *does* make embedded configs a strict requirement?..
>> > >
>> > >mmh, one thing I liked about this effort was possibility of stop 
>> bind-mounting
>> > >config files into the containers. I'd rather find a way to not need 
>> any
>> > >bindmount and 

Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Doug Hellmann
Excerpts from Emilien Macchi's message of 2017-06-08 22:20:34 +0200:
> On Thu, Jun 8, 2017 at 8:49 PM, Doug Hellmann  wrote:
> > Excerpts from Steven Dake (stdake)'s message of 2017-06-08 17:51:48 +:
> >> Doug,
> >>
> >> In short, a configmap takes a bunch of config files, bundles them in a 
> >> kubernetes object called a configmap, and then ships them to etcd.  When a 
> >> pod is launched, the pod mounts the configmaps using tmpfs and the raw 
> >> config files are available for use by the openstack services.
> >
> > That sounds like what confd does. Something puts data into one of
> > several possible databases. confd takes it out and writes it to
> > file(s) when the container starts. The app in the container reads
> > the file(s).
> >
> > It sounds like configmaps would work well, too, it just doesn't
> > sound like a fundamentally different solution.
> 
> Sorry for my lack of knowledge in ConfigMap but I'm trying to see how
> we could bring pieces together.
> Doug and I are currently investigating how oslo.config can be useful
> to generate the parameters loaded by the application at startup,
> without having to manage config with Puppet or Ansible.
> 
> If I understand correctly (and if not, please correct me, and maybe
> propose something), we could use oslo.config to generate a portion of
> ConfigMap (that can be imported in another ConfigMap iiuc) where we
> would have parameters for one app.
> 
> Example with Keystone:
> 
>   apiVersion: v1
>   kind: ConfigMap
>   metadata:
> name: keystone-config
> namespace: DEFAULT
>   data:
> debug: true
> rpc_backend: rabbit
> ... (parameters generated by oslo.config, and data fed by installers)
> 
> So iiuc we would give this file to k8s when deploying pods. Parameters
> values would be automatically pushed into etcd, and used when
> generating the configuration. Am I correct? (I need to understand if
> we need to manually manage etcd key/values).
> 
> In that case, what deployments tools (like Kolla, TripleO, etc) would
> expect from OpenStack to provide (tooling in oslo.config to generate
> ConfigMap? etc.
> 
> Thanks for your help,

Based on [1] I think the idea is to write the entire config file
for the service outside of the container, upload it to the configmap,
then configure the pod to create a volume and write the configmap
contents to the volume before launching the container. It's sort of like
nova's file-injection feature.

The approach seems appealing, although I don't fully understand the
issues others have raised with adding volumes to containers.

Doug

[1] 
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#populate-a-volume-with-data-stored-in-a-configmap



Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Fox, Kevin M
That is possible. But, a yaml/json driver might still be good, regardless of 
the mechanism used to transfer the file.

So the driver abstraction still might be useful.

Thanks,
Kevin

From: Doug Hellmann [d...@doughellmann.com]
Sent: Thursday, June 08, 2017 1:19 PM
To: openstack-dev
Subject: Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla]
[helm] Configuration management with etcd / confd

Excerpts from Fox, Kevin M's message of 2017-06-08 20:08:25 +:
> Yeah, I think k8s configmaps might be a good config mechanism for k8s based 
> openstack deployment.
>
> One feature that might help which is related to the etcd plugin would be a 
> yaml/json plugin. It would allow more native looking configmaps.

We have at least 2 mechanisms for getting config files into containers
without such significant changes to oslo.config.  At this point I'm
not sure it's necessary to do the driver work at all.

Doug



Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-08 Thread Fox, Kevin M
hmm... a very interesting question

I would think control plane only.

Thanks,
Kevin

From: Drew Fisher [drew.fis...@oracle.com]
Sent: Thursday, June 08, 2017 1:07 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] etcd3 as base service - update

On 6/7/17 4:47 AM, Davanum Srinivas wrote:
> Team,
>
> Here's the update to the base services resolution from the TC:
> https://governance.openstack.org/tc/reference/base-services.html
>
> First request is to Distros, Packagers, Deployers, anyone who
> installs/configures OpenStack:
> Please make sure you have latest etcd 3.x available in your
> environment for Services to use, Fedora already does, we need help in
> making sure all distros and architectures are covered.

As a Solaris OpenStack dev, I have a question about this change.

If Solaris were to *only* run the nova-compute service, and leave the
rest of the OpenStack services to Linux, is etcd 3.x required on the
compute node for Pike+ ?

Thanks!

-Drew





Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-08 Thread Julien Danjou
On Thu, Jun 08 2017, Mike Bayer wrote:

> So far I've seen a proposal of etcd3 as a replacement for memcached in
> keystone, and a new dogpile connector was added to oslo.cache to handle
> referring to etcd3 as a cache backend.  This is a really simplistic / minimal
> kind of use case for a key-store.

etcd3 is not meant to be a cache. Synchronizing cached values using the
Raft protocol sounds like overkill. A cluster of memcached servers would
probably be a better fit.
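
For the simple caching use case, a dogpile.cache region pointed at a
memcached cluster is enough; a minimal sketch (the server address is a
placeholder, and a memcached must be running there):

    # Sketch: a dogpile.cache region backed by memcached, which is all the
    # keystone caching use case really needs.
    from dogpile.cache import make_region

    region = make_region().configure(
        "dogpile.cache.memcached",
        expiration_time=600,
        arguments={"url": ["127.0.0.1:11211"]},
    )

    region.set("token-cache-key", {"user": "demo"})
    print(region.get("token-cache-key"))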

> But, keeping in mind I don't know anything about etcd3 other than "it's 
> another
> key-store", it's the only database used by Kubernetes as a whole, which
> suggests it's doing a better job than Redis in terms of "durable".

Not sure about that. Redis has many more data structures than etcd, which
means it can be faster/more efficient than etcd. But it does not have Raft
or a synchronisation protocol, and its clustering is rather poor in
comparison to etcd's.

> So I wouldn't be surprised if new / existing openstack applications
> express some gravitational pull towards using it as their own
> datastore as well. I'll be trying to hang onto the etcd3 track as much
> as possible so that if/when that happens I still have a job :).

Sounds like a recipe for disaster. :)

-- 
Julien Danjou
/* Free Software hacker
   https://julien.danjou.info */




Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-08 Thread Emilien Macchi
On Thu, Jun 8, 2017 at 7:34 PM, Lance Bragstad  wrote:
> After digging into etcd a bit, one place this might be help deployer
> experience would be the handling of fernet keys for token encryption in
> keystone. Currently, all keys used to encrypt and decrypt tokens are kept on
> disk for each keystone node in the deployment. While simple, it requires
> operators to perform rotation on a single node and then push, or sync, the
> new key set to the rest of the nodes. This must be done in lock step in
> order to prevent early token invalidation and inconsistent token responses.

This is what we discussed a few months ago :-)

http://lists.openstack.org/pipermail/openstack-dev/2017-March/113943.html

I'm glad it's coming back ;-)

> An alternative would be to keep the keys in etcd and make the fernet bits
> pluggable so that it's possible to read keys from disk or etcd (pending
> configuration). The advantage would be that operators could initiate key
> rotations from any keystone node in the deployment (or using etcd directly)
> and not have to worry about distributing the new key set. Since etcd
> associates metadata to the key-value pairs, we might be able to simplify the
> rotation strategy as well.
>
> On Thu, Jun 8, 2017 at 11:37 AM, Mike Bayer  wrote:
>>
>>
>>
>> On 06/08/2017 12:47 AM, Joshua Harlow wrote:
>>>
>>> So just out of curiosity, but do people really even know what etcd is
>>> good for? I am thinking that there should be some guidance from folks in the
>>> community as to where etcd should be used and where it shouldn't (otherwise
>>> we just all end up in a mess).
>>
>>
>> So far I've seen a proposal of etcd3 as a replacement for memcached in
>> keystone, and a new dogpile connector was added to oslo.cache to handle
>> referring to etcd3 as a cache backend.  This is a really simplistic /
>> minimal kind of use case for a key-store.
>>
>> But, keeping in mind I don't know anything about etcd3 other than "it's
>> another key-store", it's the only database used by Kubernetes as a whole,
>> which suggests it's doing a better job than Redis in terms of "durable".
>> So I wouldn't be surprised if new / existing openstack applications express
>> some gravitational pull towards using it as their own datastore as well.
>> I'll be trying to hang onto the etcd3 track as much as possible so that
>> if/when that happens I still have a job :).
>>
>>
>>
>>
>>>
>>> Perhaps a good idea to actually give examples of how it should be used,
>>> how it shouldn't be used, what it offers, what it doesn't... Or at least
>>> provide links for people to read up on this.
>>>
>>> Thoughts?
>>>
>>> Davanum Srinivas wrote:

 One clarification: Since https://pypi.python.org/pypi/etcd3gw just
 uses the HTTP API (/v3alpha) it will work under both eventlet and
 non-eventlet environments.

 Thanks,
 Dims


 On Wed, Jun 7, 2017 at 6:47 AM, Davanum Srinivas
 wrote:
>
> Team,
>
> Here's the update to the base services resolution from the TC:
> https://governance.openstack.org/tc/reference/base-services.html
>
> First request is to Distros, Packagers, Deployers, anyone who
> installs/configures OpenStack:
> Please make sure you have latest etcd 3.x available in your
> environment for Services to use, Fedora already does, we need help in
> making sure all distros and architectures are covered.
>
> Any project who want to use etcd v3 API via grpc, please use:
> https://pypi.python.org/pypi/etcd3 (works only for non-eventlet
> services)
>
> Those that depend on eventlet, please use the etcd3 v3alpha HTTP API
> using:
> https://pypi.python.org/pypi/etcd3gw
>
> If you use tooz, there are 2 driver choices for you:
> https://github.com/openstack/tooz/blob/master/setup.cfg#L29
> https://github.com/openstack/tooz/blob/master/setup.cfg#L30
>
> If you use oslo.cache, there is a driver for you:
> https://github.com/openstack/oslo.cache/blob/master/setup.cfg#L33
>
> Devstack installs etcd3 by default and points cinder to it:
> http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/etcd3
>
> http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/cinder#n356
>
> Review in progress for keystone to use etcd3 for caching:
> https://review.openstack.org/#/c/469621/
>
> Doug is working on proposal(s) for oslo.config to store some
> configuration in etcd3:
> https://review.openstack.org/#/c/454897/
>
> So, feel free to turn on / test with etcd3 and report issues.
>
> Thanks,
> Dims
>
> --
> Davanum Srinivas :: https://twitter.com/dims




>>>
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe:
>>> 

Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Emilien Macchi
On Thu, Jun 8, 2017 at 8:49 PM, Doug Hellmann  wrote:
> Excerpts from Steven Dake (stdake)'s message of 2017-06-08 17:51:48 +:
>> Doug,
>>
>> In short, a configmap takes a bunch of config files, bundles them in a 
>> kubernetes object called a configmap, and then ships them to etcd.  When a 
>> pod is launched, the pod mounts the configmaps using tmpfs and the raw 
>> config files are available for use by the openstack services.
>
> That sounds like what confd does. Something puts data into one of
> several possible databases. confd takes it out and writes it to
> file(s) when the container starts. The app in the container reads
> the file(s).
>
> It sounds like configmaps would work well, too, it just doesn't
> sound like a fundamentally different solution.

Sorry for my lack of knowledge about ConfigMaps, but I'm trying to see
how we could bring the pieces together.
Doug and I are currently investigating how oslo.config could be used
to generate the parameters loaded by the application at startup,
without having to manage the config with Puppet or Ansible.

If I understand correctly (and if not, please correct me, and maybe
propose something), we could use oslo.config to generate a portion of
a ConfigMap (which can be imported into another ConfigMap, iiuc) where
we would have the parameters for one app.

Example with Keystone:

  apiVersion: v1
  kind: ConfigMap
  metadata:
name: keystone-config
namespace: DEFAULT
  data:
debug: true
rpc_backend: rabbit
... (parameters generated by oslo.config, and data fed by installers)

So iiuc we would give this file to k8s when deploying pods. Parameter
values would be automatically pushed into etcd, and used when
generating the configuration. Am I correct? (I need to understand
whether we need to manually manage etcd keys/values.)

In that case, what would deployment tools (like Kolla, TripleO, etc.)
expect OpenStack to provide (tooling in oslo.config to generate a
ConfigMap, etc.)?
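
As a strawman for what such tooling could produce, here is a minimal sketch
(not an existing oslo.config feature) that renders a flat dict of option
values as ConfigMap-shaped YAML. Note that ConfigMap data values must be
strings; the keystone-config name, the namespace and the options are purely
illustrative, and PyYAML is assumed to be available.

    import yaml

    def options_to_configmap(name, namespace, options):
        """Render a dict of config options as a Kubernetes ConfigMap document.

        ConfigMap data values must be strings, so everything is stringified.
        """
        data = {key: str(value) for key, value in options.items()}
        configmap = {
            'apiVersion': 'v1',
            'kind': 'ConfigMap',
            'metadata': {'name': name, 'namespace': namespace},
            'data': data,
        }
        return yaml.safe_dump(configmap, default_flow_style=False)

    # Illustrative only: in a real tool these values would come from
    # oslo.config option defaults plus installer-provided overrides.
    print(options_to_configmap(
        'keystone-config', 'openstack',
        {'debug': True, 'rpc_backend': 'rabbit'}))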

Thanks for your help,

> Doug
>
>>
>> Operating on configmaps is much simpler and safer than using a different 
>> backing database for the configuration data.
>>
>> Hope the information helps.
>>
>> Ping me in #openstack-kolla if you have more questions.
>>
>> Regards
>> -steve
>>
>> -Original Message-
>> From: Doug Hellmann 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>> 
>> Date: Thursday, June 8, 2017 at 10:12 AM
>> To: openstack-dev 
>> Subject: Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] 
>>[helm] Configuration management with etcd / confd
>>
>> Excerpts from Flavio Percoco's message of 2017-06-08 18:27:51 +0200:
>> > On 08/06/17 18:23 +0200, Flavio Percoco wrote:
>> > >On 07/06/17 12:04 +0200, Bogdan Dobrelya wrote:
>> > >>On 06.06.2017 18:08, Emilien Macchi wrote:
>> > >>>Another benefit is that confd will generate a configuration file 
>> when
>> > >>>the application will start. So if etcd is down *after* the app
>> > >>>startup, it shouldn't break the service restart if we don't ask 
>> confd
>> > >>>to re-generate the config. It's good for operators who were 
>> concerned
>> > >>>about the fact the infrastructure would rely on etcd. In that case, 
>> we
>> > >>>would only need etcd at the initial deployment (and during lifecycle
>> > >>>actions like upgrades, etc).
>> > >>>
>> > >>>The downside is that in the case of containers, they would still 
>> have
>> > >>>a configuration file within the container, and the whole goal of 
>> this
>> > >>>feature was to externalize configuration data and stop having
>> > >>>configuration files.
>> > >>
>> > >>It doesn't look a strict requirement. Those configs may (and should) 
>> be
>> > >>bind-mounted into containers, as hostpath volumes. Or, am I missing
>> > >>something what *does* make embedded configs a strict requirement?..
>> > >
>> > >mmh, one thing I liked about this effort was possibility of stop 
>> bind-mounting
>> > >config files into the containers. I'd rather find a way to not need 
>> any
>> > >bindmount and have the services get their configs themselves.
>> >
>> > Probably sent too early!
>> >
>> > If we're not talking about OpenStack containers running in a COE, I 
>> guess this
>> > is fine. For k8s based deployments, I think I'd prefer having 
>> installers
>> > creating configmaps directly and use that. The reason is that 
>> depending on files
>> > that are in the host is not ideal for these scenarios. I hate this 
>> idea because
>> > it makes deployments inconsistent and I don't want that.
>> >
>> > Flavio
>> >
>>
>> I'm not sure I understand how a configmap is any different from what is
>> proposed with confd in terms of deployment-specific data being added to
>> a container before 

Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-08 Thread jimi olugboyega
Hello all,

I am wondering how I can unsubscribe from this mailing list.

Regards,
Olujimi Olugboyega.

On Wed, Jun 7, 2017 at 3:47 AM, Davanum Srinivas  wrote:

> Team,
>
> Here's the update to the base services resolution from the TC:
> https://governance.openstack.org/tc/reference/base-services.html
>
> First request is to Distros, Packagers, Deployers, anyone who
> installs/configures OpenStack:
> Please make sure you have latest etcd 3.x available in your
> environment for Services to use, Fedora already does, we need help in
> making sure all distros and architectures are covered.
>
> Any project who want to use etcd v3 API via grpc, please use:
> https://pypi.python.org/pypi/etcd3 (works only for non-eventlet services)
>
> Those that depend on eventlet, please use the etcd3 v3alpha HTTP API using:
> https://pypi.python.org/pypi/etcd3gw
>
> If you use tooz, there are 2 driver choices for you:
> https://github.com/openstack/tooz/blob/master/setup.cfg#L29
> https://github.com/openstack/tooz/blob/master/setup.cfg#L30
>
> If you use oslo.cache, there is a driver for you:
> https://github.com/openstack/oslo.cache/blob/master/setup.cfg#L33
>
> Devstack installs etcd3 by default and points cinder to it:
> http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/etcd3
> http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/cinder#n356
>
> Review in progress for keystone to use etcd3 for caching:
> https://review.openstack.org/#/c/469621/
>
> Doug is working on proposal(s) for oslo.config to store some
> configuration in etcd3:
> https://review.openstack.org/#/c/454897/
>
> So, feel free to turn on / test with etcd3 and report issues.
>
> Thanks,
> Dims
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Doug Hellmann
Excerpts from Fox, Kevin M's message of 2017-06-08 20:08:25 +:
> Yeah, I think k8s configmaps might be a good config mechanism for k8s based 
> openstack deployment.
> 
> One feature that might help which is related to the etcd plugin would be a 
> yaml/json plugin. It would allow more native looking configmaps.

We have at least 2 mechanisms for getting config files into containers
without such significant changes to oslo.config.  At this point I'm
not sure it's necessary to do the driver work at all.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Fox, Kevin M
Yeah, I think k8s ConfigMaps might be a good config mechanism for k8s-based
OpenStack deployments.

One feature that might help, related to the etcd plugin, would be a YAML/JSON
plugin. It would allow more native-looking ConfigMaps.

Thanks,
Kevin

From: Doug Hellmann [d...@doughellmann.com]
Sent: Thursday, June 08, 2017 11:49 AM
To: openstack-dev
Subject: Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla]
[helm] Configuration management with etcd / confd

Excerpts from Steven Dake (stdake)'s message of 2017-06-08 17:51:48 +:
> Doug,
>
> In short, a configmap takes a bunch of config files, bundles them in a 
> kubernetes object called a configmap, and then ships them to etcd.  When a 
> pod is launched, the pod mounts the configmaps using tmpfs and the raw config 
> files are available for use by the openstack services.

That sounds like what confd does. Something puts data into one of
several possible databases. confd takes it out and writes it to
file(s) when the container starts. The app in the container reads
the file(s).

It sounds like configmaps would work well, too, it just doesn't
sound like a fundamentally different solution.

Doug

>
> Operating on configmaps is much simpler and safer than using a different 
> backing database for the configuration data.
>
> Hope the information helps.
>
> Ping me in #openstack-kolla if you have more questions.
>
> Regards
> -steve
>
> -Original Message-
> From: Doug Hellmann 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: Thursday, June 8, 2017 at 10:12 AM
> To: openstack-dev 
> Subject: Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla]  
>   [helm] Configuration management with etcd / confd
>
> Excerpts from Flavio Percoco's message of 2017-06-08 18:27:51 +0200:
> > On 08/06/17 18:23 +0200, Flavio Percoco wrote:
> > >On 07/06/17 12:04 +0200, Bogdan Dobrelya wrote:
> > >>On 06.06.2017 18:08, Emilien Macchi wrote:
> > >>>Another benefit is that confd will generate a configuration file when
> > >>>the application will start. So if etcd is down *after* the app
> > >>>startup, it shouldn't break the service restart if we don't ask confd
> > >>>to re-generate the config. It's good for operators who were concerned
> > >>>about the fact the infrastructure would rely on etcd. In that case, 
> we
> > >>>would only need etcd at the initial deployment (and during lifecycle
> > >>>actions like upgrades, etc).
> > >>>
> > >>>The downside is that in the case of containers, they would still have
> > >>>a configuration file within the container, and the whole goal of this
> > >>>feature was to externalize configuration data and stop having
> > >>>configuration files.
> > >>
> > >>It doesn't look a strict requirement. Those configs may (and should) 
> be
> > >>bind-mounted into containers, as hostpath volumes. Or, am I missing
> > >>something what *does* make embedded configs a strict requirement?..
> > >
> > >mmh, one thing I liked about this effort was possibility of stop 
> bind-mounting
> > >config files into the containers. I'd rather find a way to not need any
> > >bindmount and have the services get their configs themselves.
> >
> > Probably sent too early!
> >
> > If we're not talking about OpenStack containers running in a COE, I 
> guess this
> > is fine. For k8s based deployments, I think I'd prefer having installers
> > creating configmaps directly and use that. The reason is that depending 
> on files
> > that are in the host is not ideal for these scenarios. I hate this idea 
> because
> > it makes deployments inconsistent and I don't want that.
> >
> > Flavio
> >
>
> I'm not sure I understand how a configmap is any different from what is
> proposed with confd in terms of deployment-specific data being added to
> a container before it launches. Can you elaborate on that?
>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe

Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-08 Thread Drew Fisher


On 6/7/17 4:47 AM, Davanum Srinivas wrote:
> Team,
> 
> Here's the update to the base services resolution from the TC:
> https://governance.openstack.org/tc/reference/base-services.html
> 
> First request is to Distros, Packagers, Deployers, anyone who
> installs/configures OpenStack:
> Please make sure you have latest etcd 3.x available in your
> environment for Services to use, Fedora already does, we need help in
> making sure all distros and architectures are covered.

As a Solaris OpenStack dev, I have a question about this change.

If Solaris were to *only* run the nova-compute service, and leave the
rest of the OpenStack services to Linux, is etcd 3.x required on the
compute node for Pike+?

Thanks!

-Drew



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][l2gw] OVS code currently broken

2017-06-08 Thread Kevin Benton
Can you file a bug against Neutron and reference it here?

On Thu, Jun 8, 2017 at 8:28 AM, Ricardo Noriega De Soto  wrote:

> There are actually a bunch of patches waiting to be reviewed and approved.
>
> Please, we'd need core reviewers to jump in.
>
> I'd like to thank Gary for all his support and reviews.
>
> Thanks Gary!
>
> On Tue, May 30, 2017 at 3:56 PM, Gary Kotton  wrote:
>
>> Hi,
>>
>> Please note that the L2 GW code is currently broken due to the commit
>> e6333593ae6005c4b0d73d9dfda5eb47f40dd8da
>>
>> If someone has the cycles can they please take a look.
>>
>> Thanks
>>
>> gary
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> --
> Ricardo Noriega
>
> Senior Software Engineer - NFV Partner Engineer | Office of Technology  |
> Red Hat
> irc: rnoriega @freenode
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-06-08 Thread Fox, Kevin M
Oh, yes please!  We've had to jump through a lot of hoops to migrate ceph-mons
around while keeping their IPs consistent to avoid VM breakage. All the rest
of the Ceph ecosystem (at least the parts we've dealt with) works fine without
the level of effort the current nova/cinder implementation imposes.

Thanks,
Kevin

From: melanie witt [melwi...@gmail.com]
Sent: Thursday, June 08, 2017 11:39 AM
To: Matt Riedemann; openstack-operat...@lists.openstack.org; 
openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-operators] [nova][cinder] Is there 
interest in an admin-api to refresh volume connection info?

On Thu, 8 Jun 2017 08:58:20 -0500, Matt Riedemann wrote:
> Nova stores the output of the Cinder os-initialize_connection info API
> in the Nova block_device_mappings table, and uses that later for making
> volume connections.
>
> This data can get out of whack or need to be refreshed, like if your
> ceph server IP changes, or you need to recycle some secret uuid for your
> ceph cluster.
>
> I think the only ways to do this on the nova side today are via volume
> detach/re-attach, reboot, migrations, etc - all of which, except live
> migration, are disruptive to the running guest.

I believe the only way to work around this currently is by doing a 'nova
shelve' followed by a 'nova unshelve'. That will end up querying the
connection_info from Cinder and update the block device mapping record
for the instance. Maybe detach/re-attach would work too but I can't
remember trying it.

> I've kicked around the idea of adding some sort of admin API interface
> for refreshing the BDM.connection_info on-demand if needed by an
> operator. Does anyone see value in this? Are operators doing stuff like
> this already, but maybe via direct DB updates?
>
> We could have something in the compute API which calls down to the
> compute for an instance and has it refresh the connection_info from
> Cinder and updates the BDM table in the nova DB. It could be an admin
> action API, or part of the os-server-external-events API, like what we
> have for the 'network-changed' event sent from Neutron which nova uses
> to refresh the network info cache.
>
> Other ideas or feedback here?

We've discussed this a few times before and we were thinking it might be
best to handle this transparently and just do a connection_info refresh
+ record update inline with the request flows that will end up reading
connection_info from the block device mapping records. That way,
operators won't have to intervene when connection_info changes.

At least in the case of Ceph, as long as a guest is running, it will
continue to work fine if the monitor IPs or secrets change because it
will continue to use its existing connection to the Ceph cluster. Things
go wrong when an instance action such as resize, stop/start, or reboot
is done because when the instance is taken offline and being brought
back up, the stale connection_info is read from the block_device_mapping
table and injected into the instance, and so it loses contact with the
cluster. If we query Cinder and update the block_device_mapping record
at the beginning of those actions, the instance will get the new
connection_info.

-melanie



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all] Call for check: is your project ready for pylint 1.7.1?

2017-06-08 Thread Akihiro Motoki
Hi all,

Is your project ready for pylint 1.7.1?
If you use pylint in your pep8 job, it is worth checking.

Our current version of pylint is 1.4.5, but it is not safe on Python 3.5.
The global-requirements update was merged once [1].
However, some projects (at least neutron) are not ready for pylint
1.7.1, so it was reverted [2].
It is reasonable to give individual projects time to cope with pylint 1.7.1.

I believe bumping the pylint version to 1.7.1 (or later) is the right
direction in the long term.
I would suggest making your project ready for pylint 1.7.1 soon (two
weeks or so?).
You can disable the new rules in pylint 1.7.1 temporarily and clean up
your code later, as neutron does [3]. As far as I checked, most rules
are reasonable and worth enabling.

Thanks,
Akihiro Motoki

[1] https://review.openstack.org/#/c/470800/
[2] https://review.openstack.org/#/c/471756/
[3] https://review.openstack.org/#/c/471763/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Doug Hellmann
Excerpts from Steven Dake (stdake)'s message of 2017-06-08 17:51:48 +:
> Doug,
> 
> In short, a configmap takes a bunch of config files, bundles them in a 
> kubernetes object called a configmap, and then ships them to etcd.  When a 
> pod is launched, the pod mounts the configmaps using tmpfs and the raw config 
> files are available for use by the openstack services.

That sounds like what confd does. Something puts data into one of
several possible databases. confd takes it out and writes it to
file(s) when the container starts. The app in the container reads
the file(s).

It sounds like configmaps would work well, too, it just doesn't
sound like a fundamentally different solution.
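
To make the comparison concrete, the flow described above can be sketched in
a few lines of Python: something seeds the data store, and a small agent
(confd in the real setup; a stand-in here) reads the keys back and renders a
config file before the service starts. The key layout, the python-etcd3
client and the output path are assumptions for illustration only.

    import etcd3

    client = etcd3.client(host='127.0.0.1', port=2379)

    # Step 1: the installer (or an operator) seeds the data store.
    client.put('/keystone/DEFAULT/debug', 'true')
    client.put('/keystone/DEFAULT/rpc_backend', 'rabbit')

    # Step 2: at container start, an agent renders a config file from it.
    # confd does this with TOML resources and templates; this only shows the shape.
    lines = ['[DEFAULT]']
    for value, metadata in client.get_prefix('/keystone/DEFAULT/'):
        option = metadata.key.decode().rsplit('/', 1)[-1]
        lines.append('%s = %s' % (option, value.decode()))

    with open('/etc/keystone/keystone.conf', 'w') as conf:
        conf.write('\n'.join(lines) + '\n')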

Doug

> 
> Operating on configmaps is much simpler and safer than using a different 
> backing database for the configuration data.
> 
> Hope the information helps.
> 
> Ping me in #openstack-kolla if you have more questions.
> 
> Regards
> -steve
> 
> -Original Message-
> From: Doug Hellmann 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: Thursday, June 8, 2017 at 10:12 AM
> To: openstack-dev 
> Subject: Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla]  
>   [helm] Configuration management with etcd / confd
> 
> Excerpts from Flavio Percoco's message of 2017-06-08 18:27:51 +0200:
> > On 08/06/17 18:23 +0200, Flavio Percoco wrote:
> > >On 07/06/17 12:04 +0200, Bogdan Dobrelya wrote:
> > >>On 06.06.2017 18:08, Emilien Macchi wrote:
> > >>>Another benefit is that confd will generate a configuration file when
> > >>>the application will start. So if etcd is down *after* the app
> > >>>startup, it shouldn't break the service restart if we don't ask confd
> > >>>to re-generate the config. It's good for operators who were concerned
> > >>>about the fact the infrastructure would rely on etcd. In that case, 
> we
> > >>>would only need etcd at the initial deployment (and during lifecycle
> > >>>actions like upgrades, etc).
> > >>>
> > >>>The downside is that in the case of containers, they would still have
> > >>>a configuration file within the container, and the whole goal of this
> > >>>feature was to externalize configuration data and stop having
> > >>>configuration files.
> > >>
> > >>It doesn't look a strict requirement. Those configs may (and should) 
> be
> > >>bind-mounted into containers, as hostpath volumes. Or, am I missing
> > >>something what *does* make embedded configs a strict requirement?..
> > >
> > >mmh, one thing I liked about this effort was possibility of stop 
> bind-mounting
> > >config files into the containers. I'd rather find a way to not need any
> > >bindmount and have the services get their configs themselves.
> > 
> > Probably sent too early!
> > 
> > If we're not talking about OpenStack containers running in a COE, I 
> guess this
> > is fine. For k8s based deployments, I think I'd prefer having installers
> > creating configmaps directly and use that. The reason is that depending 
> on files
> > that are in the host is not ideal for these scenarios. I hate this idea 
> because
> > it makes deployments inconsistent and I don't want that.
> > 
> > Flavio
> > 
> 
> I'm not sure I understand how a configmap is any different from what is
> proposed with confd in terms of deployment-specific data being added to
> a container before it launches. Can you elaborate on that?
> 
> Doug
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-06-08 Thread melanie witt

On Thu, 8 Jun 2017 08:58:20 -0500, Matt Riedemann wrote:
> Nova stores the output of the Cinder os-initialize_connection info API
> in the Nova block_device_mappings table, and uses that later for making
> volume connections.
>
> This data can get out of whack or need to be refreshed, like if your
> ceph server IP changes, or you need to recycle some secret uuid for your
> ceph cluster.
>
> I think the only ways to do this on the nova side today are via volume
> detach/re-attach, reboot, migrations, etc - all of which, except live
> migration, are disruptive to the running guest.


I believe the only way to work around this currently is by doing a 'nova 
shelve' followed by a 'nova unshelve'. That will end up querying the 
connection_info from Cinder and update the block device mapping record 
for the instance. Maybe detach/re-attach would work too but I can't 
remember trying it.


> I've kicked around the idea of adding some sort of admin API interface
> for refreshing the BDM.connection_info on-demand if needed by an
> operator. Does anyone see value in this? Are operators doing stuff like
> this already, but maybe via direct DB updates?
>
> We could have something in the compute API which calls down to the
> compute for an instance and has it refresh the connection_info from
> Cinder and updates the BDM table in the nova DB. It could be an admin
> action API, or part of the os-server-external-events API, like what we
> have for the 'network-changed' event sent from Neutron which nova uses
> to refresh the network info cache.
>
> Other ideas or feedback here?


We've discussed this a few times before and we were thinking it might be 
best to handle this transparently and just do a connection_info refresh 
+ record update inline with the request flows that will end up reading 
connection_info from the block device mapping records. That way, 
operators won't have to intervene when connection_info changes.


At least in the case of Ceph, as long as a guest is running, it will 
continue to work fine if the monitor IPs or secrets change because it 
will continue to use its existing connection to the Ceph cluster. Things 
go wrong when an instance action such as resize, stop/start, or reboot 
is done because when the instance is taken offline and being brought 
back up, the stale connection_info is read from the block_device_mapping 
table and injected into the instance, and so it loses contact with the 
cluster. If we query Cinder and update the block_device_mapping record 
at the beginning of those actions, the instance will get the new 
connection_info.
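
A rough sketch of what that inline refresh could look like inside the compute
manager follows. This is not existing Nova code: it assumes Nova's Cinder
volume API wrapper, the virt driver's get_volume_connector() call and the
BlockDeviceMapping object interface, and it glosses over error handling and
locking.

    from oslo_serialization import jsonutils

    def refresh_bdm_connection_info(self, context, instance, bdm):
        """Hypothetical helper: re-query Cinder and persist the result.

        Intended to run at the start of actions such as resize, stop/start
        or reboot, before the stale connection_info would be reused.
        """
        connector = self.driver.get_volume_connector(instance)
        connection_info = self.volume_api.initialize_connection(
            context, bdm.volume_id, connector)
        bdm.connection_info = jsonutils.dumps(connection_info)
        bdm.save()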


-melanie



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova] Goodbye^W See you later

2017-06-08 Thread Christopher.Dearborn
Congrats on your new gig Jim, and best of luck!

Chris D

From: Jim Rollenhagen [mailto:j...@jimrollenhagen.com]
Sent: Thursday, June 8, 2017 8:45 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [ironic][nova] Goodbye^W See you later

Hey friends,

I've been mostly missing for the past six weeks while looking for a new job, so 
maybe you've forgotten me already, maybe not. I'm happy to tell you I've found 
one that I think is a great opportunity for me. But, I'm sad to tell you that 
it's totally outside of the OpenStack community.

The last 3.5 years have been amazing. I'm extremely grateful that I've been 
able to work in this community - I've learned so much and met so many awesome 
people. I'm going to miss the insane(ly awesome) level of collaboration, the 
summits, the PTGs, and even some of the bikeshedding. We've built amazing 
things together, and I'm sure y'all will continue to do so without me.

I'll still be lurking in #openstack-dev and #openstack-ironic for a while, if 
people need me to drop a -2 or dictate old knowledge or whatever, feel free to 
ping me. Or if you just want to chat. :)

<3 jroll

P.S. obviously my core permissions should be dropped now :P
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova] Goodbye^W See you later

2017-06-08 Thread Navare, Anup D
Jim,

Wishing you good luck in your new venture.

-Anup

From: Jim Rollenhagen [mailto:j...@jimrollenhagen.com]
Sent: Thursday, June 8, 2017 5:45 AM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [ironic][nova] Goodbye^W See you later

Hey friends,

I've been mostly missing for the past six weeks while looking for a new job, so 
maybe you've forgotten me already, maybe not. I'm happy to tell you I've found 
one that I think is a great opportunity for me. But, I'm sad to tell you that 
it's totally outside of the OpenStack community.

The last 3.5 years have been amazing. I'm extremely grateful that I've been 
able to work in this community - I've learned so much and met so many awesome 
people. I'm going to miss the insane(ly awesome) level of collaboration, the 
summits, the PTGs, and even some of the bikeshedding. We've built amazing 
things together, and I'm sure y'all will continue to do so without me.

I'll still be lurking in #openstack-dev and #openstack-ironic for a while, if 
people need me to drop a -2 or dictate old knowledge or whatever, feel free to 
ping me. Or if you just want to chat. :)

<3 jroll

P.S. obviously my core permissions should be dropped now :P
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Michał Jastrzębski
On 8 June 2017 at 09:50, Michał Jastrzębski  wrote:
> On 8 June 2017 at 09:27, Flavio Percoco  wrote:
>> On 08/06/17 18:23 +0200, Flavio Percoco wrote:
>>>
>>> On 07/06/17 12:04 +0200, Bogdan Dobrelya wrote:

 On 06.06.2017 18:08, Emilien Macchi wrote:
>
> Another benefit is that confd will generate a configuration file when
> the application will start. So if etcd is down *after* the app
> startup, it shouldn't break the service restart if we don't ask confd
> to re-generate the config. It's good for operators who were concerned
> about the fact the infrastructure would rely on etcd. In that case, we
> would only need etcd at the initial deployment (and during lifecycle
> actions like upgrades, etc).
>
> The downside is that in the case of containers, they would still have
> a configuration file within the container, and the whole goal of this
> feature was to externalize configuration data and stop having
> configuration files.


 It doesn't look a strict requirement. Those configs may (and should) be
 bind-mounted into containers, as hostpath volumes. Or, am I missing
 something what *does* make embedded configs a strict requirement?..
>>>
>>>
>>> mmh, one thing I liked about this effort was possibility of stop
>>> bind-mounting
>>> config files into the containers. I'd rather find a way to not need any
>>> bindmount and have the services get their configs themselves.
>>
>>
>> Probably sent too early!
>>
>> If we're not talking about OpenStack containers running in a COE, I guess
>> this
>> is fine. For k8s based deployments, I think I'd prefer having installers
>> creating configmaps directly and use that. The reason is that depending on
>> files
>> that are in the host is not ideal for these scenarios. I hate this idea
>> because
>> it makes deployments inconsistent and I don't want that.
>
> Well, I disagree. If we're doing this we're essentially getting rid of
> "files" altogether. It might actually be easier to handle from a COE than
> a configmap, as the configmap has to be generated, and when you get to
> host-specific things it's quite a pain to handle. I, for one, would happily
> use a central DB for config options if we define the schema correctly.
>
> That being said, defining the schema correctly is quite a challenge. A few
> hard cases I see right now can be found in a single use case - PCI
> passthrough:
>
> 1. I have multiple PCI devices in a host, and I need to specify a list of them.
> 2. PCI buses differ from host to host, so I need to specify groups of hosts
> that will share the same bus configuration and reflect that in the service
> config.
>
> Maybe we should gather a few hard use cases like that and make sure
> we can address them in our config schema?

Speaking of hard use cases, here's another - config rolling upgrade +
config rollback. If we have a single option in etcd, a service that
restarts automatically picks up the new config, which creates funny edge
cases when you want to do a rolling upgrade of the config and some other
node fails -> the service restarts -> the config gets updated "accidentally".

>>
>> Flavio
>>
>> --
>> @flaper87
>> Flavio Percoco
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-08 Thread Fox, Kevin M
So, one thing to remember: I don't think etcd has an authz mechanism yet.

You usually want your fernet keys to be accessible by just the keystone nodes
and no others.

This might require an etcd cluster just for the keystone fernet keys, which might
work great, but it is operator overhead to install/maintain.


From: Lance Bragstad [lbrags...@gmail.com]
Sent: Thursday, June 08, 2017 10:34 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] etcd3 as base service - update

After digging into etcd a bit, one place this might be help deployer experience 
would be the handling of fernet keys for token encryption in keystone. 
Currently, all keys used to encrypt and decrypt tokens are kept on disk for 
each keystone node in the deployment. While simple, it requires operators to 
perform rotation on a single node and then push, or sync, the new key set to 
the rest of the nodes. This must be done in lock step in order to prevent early 
token invalidation and inconsistent token responses.

An alternative would be to keep the keys in etcd and make the fernet bits 
pluggable so that it's possible to read keys from disk or etcd (pending 
configuration). The advantage would be that operators could initiate key 
rotations from any keystone node in the deployment (or using etcd directly) and 
not have to worry about distributing the new key set. Since etcd associates 
metadata to the key-value pairs, we might be able to simplify the rotation 
strategy as well.

On Thu, Jun 8, 2017 at 11:37 AM, Mike Bayer 
> wrote:


On 06/08/2017 12:47 AM, Joshua Harlow wrote:
So just out of curiosity, but do people really even know what etcd is good for? 
I am thinking that there should be some guidance from folks in the community as 
to where etcd should be used and where it shouldn't (otherwise we just all end 
up in a mess).

So far I've seen a proposal of etcd3 as a replacement for memcached in 
keystone, and a new dogpile connector was added to oslo.cache to handle 
referring to etcd3 as a cache backend.  This is a really simplistic / minimal 
kind of use case for a key-store.

But, keeping in mind I don't know anything about etcd3 other than "it's another 
key-store", it's the only database used by Kubernetes as a whole, which 
suggests it's doing a better job than Redis in terms of "durable".   So I 
wouldn't be surprised if new / existing openstack applications express some 
gravitational pull towards using it as their own datastore as well.I'll be 
trying to hang onto the etcd3 track as much as possible so that if/when that 
happens I still have a job :).





Perhaps a good idea to actually give examples of how it should be used, how it 
shouldn't be used, what it offers, what it doesn't... Or at least provide links 
for people to read up on this.

Thoughts?

Davanum Srinivas wrote:
One clarification: Since https://pypi.python.org/pypi/etcd3gw just
uses the HTTP API (/v3alpha) it will work under both eventlet and
non-eventlet environments.

Thanks,
Dims


On Wed, Jun 7, 2017 at 6:47 AM, Davanum 
Srinivas>  wrote:
Team,

Here's the update to the base services resolution from the TC:
https://governance.openstack.org/tc/reference/base-services.html

First request is to Distros, Packagers, Deployers, anyone who
installs/configures OpenStack:
Please make sure you have latest etcd 3.x available in your
environment for Services to use, Fedora already does, we need help in
making sure all distros and architectures are covered.

Any project who want to use etcd v3 API via grpc, please use:
https://pypi.python.org/pypi/etcd3 (works only for non-eventlet services)

Those that depend on eventlet, please use the etcd3 v3alpha HTTP API using:
https://pypi.python.org/pypi/etcd3gw

If you use tooz, there are 2 driver choices for you:
https://github.com/openstack/tooz/blob/master/setup.cfg#L29
https://github.com/openstack/tooz/blob/master/setup.cfg#L30

If you use oslo.cache, there is a driver for you:
https://github.com/openstack/oslo.cache/blob/master/setup.cfg#L33

Devstack installs etcd3 by default and points cinder to it:
http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/etcd3
http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/cinder#n356

Review in progress for keystone to use etcd3 for caching:
https://review.openstack.org/#/c/469621/

Doug is working on proposal(s) for oslo.config to store some
configuration in etcd3:
https://review.openstack.org/#/c/454897/

So, feel free to turn on / test with etcd3 and report issues.

Thanks,
Dims

--
Davanum Srinivas :: https://twitter.com/dims




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [forum] Future of Stackalytics

2017-06-08 Thread Jeremy Stanley
On 2017-06-08 09:49:03 -0700 (-0700), Ken'ichi Ohmichi wrote:
> 2017-06-08 7:19 GMT-07:00 Jeremy Stanley :
[...]
> > There is a foundation member directory API now which provides
> > affiliation details and history, so if it were my project (it's
> > not though) I'd switch to querying that and delete all the
> > static affiliation mapping out of that config instead. Not only
> > would it significantly reduce the reviewer load for
> > Stackalytics, but it would also provide a greater incentive for
> > contributors to keep their affiliation data updated in the
> > foundation member directory.
> 
> Interesting idea, thanks. It would be nice to centralize such
> information into a single place. Can I know the detail of the API?
> I'd like to take a look for some prototyping.

It only _just_ rolled to production at
https://openstackid-resources.openstack.org/api/public/v1/members
yesterday so I don't know how stable it should be considered at this
particular moment. The implementation is at
https://git.openstack.org/cgit/openstack-infra/openstackid-resources/tree/app/Models/Foundation/Main/Member.php
but details haven't been added to the API documentation in that repo
yet. (I also just now realized we haven't added a publishing job for
those API docs either, so I'm working on that bit immediately.)

The relevant GET parameters for this case are
filter=email==someb...@example.com and relations=all_affiliations
which gets you a list under the "affiliations" key with all
start/end dates and organizations for the member associated with
that address. This of course presumes contributors update their
foundation profiles to include any E-mail addresses they use with
Git, as well as recording appropriate affiliation timeframes. Those
fields in the member directory profiles have existed for quite a few
years now, so hopefully at least some of us have already done that.
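
For anyone prototyping against it, the query boils down to something like the
sketch below (using python-requests). The e-mail address is a placeholder,
and the response handling guesses at the JSON layout, so adjust it to what
the API actually returns.

    import requests

    URL = 'https://openstackid-resources.openstack.org/api/public/v1/members'

    # Placeholder address -- substitute the Git author e-mail to look up.
    params = {
        'filter': 'email==jane@example.org',
        'relations': 'all_affiliations',
    }

    response = requests.get(URL, params=params, timeout=30)
    response.raise_for_status()

    for member in response.json().get('data', []):
        for affiliation in member.get('affiliations', []):
            print(affiliation.get('organization'),
                  affiliation.get('start_date'),
                  affiliation.get('end_date'))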
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Steven Dake (stdake)
Doug,

In short, a configmap takes a bunch of config files, bundles them in a 
kubernetes object called a configmap, and then ships them to etcd.  When a pod 
is launched, the pod mounts the configmaps using tmpfs and the raw config files 
are available for use by the openstack services.

Operating on configmaps is much simpler and safer than using a different 
backing database for the configuration data.

Hope the information helps.

Ping me in #openstack-kolla if you have more questions.

Regards
-steve


-Original Message-
From: Doug Hellmann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, June 8, 2017 at 10:12 AM
To: openstack-dev 
Subject: Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla]
[helm] Configuration management with etcd / confd

Excerpts from Flavio Percoco's message of 2017-06-08 18:27:51 +0200:
> On 08/06/17 18:23 +0200, Flavio Percoco wrote:
> >On 07/06/17 12:04 +0200, Bogdan Dobrelya wrote:
> >>On 06.06.2017 18:08, Emilien Macchi wrote:
> >>>Another benefit is that confd will generate a configuration file when
> >>>the application will start. So if etcd is down *after* the app
> >>>startup, it shouldn't break the service restart if we don't ask confd
> >>>to re-generate the config. It's good for operators who were concerned
> >>>about the fact the infrastructure would rely on etcd. In that case, we
> >>>would only need etcd at the initial deployment (and during lifecycle
> >>>actions like upgrades, etc).
> >>>
> >>>The downside is that in the case of containers, they would still have
> >>>a configuration file within the container, and the whole goal of this
> >>>feature was to externalize configuration data and stop having
> >>>configuration files.
> >>
> >>It doesn't look a strict requirement. Those configs may (and should) be
> >>bind-mounted into containers, as hostpath volumes. Or, am I missing
> >>something what *does* make embedded configs a strict requirement?..
> >
> >mmh, one thing I liked about this effort was possibility of stop 
bind-mounting
> >config files into the containers. I'd rather find a way to not need any
> >bindmount and have the services get their configs themselves.
> 
> Probably sent too early!
> 
> If we're not talking about OpenStack containers running in a COE, I guess 
this
> is fine. For k8s based deployments, I think I'd prefer having installers
> creating configmaps directly and use that. The reason is that depending 
on files
> that are in the host is not ideal for these scenarios. I hate this idea 
because
> it makes deployments inconsistent and I don't want that.
> 
> Flavio
> 

I'm not sure I understand how a configmap is any different from what is
proposed with confd in terms of deployment-specific data being added to
a container before it launches. Can you elaborate on that?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Steven Dake (stdake)
Flavio,

At least for the kubernetes variant of kolla, bind-mounting will always be used,
as this is fundamentally how configmaps operate.  In order to maintain maximum
flexibility and compatibility with kubernetes, I am not keen to try a
non-configmap way of doing things.

Regards
-steve

-Original Message-
From: Flavio Percoco 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Thursday, June 8, 2017 at 9:23 AM
To: "OpenStack Development Mailing List (not for usage questions)" 

Subject: Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] 
[helm] Configuration management with etcd / confd

On 07/06/17 12:04 +0200, Bogdan Dobrelya wrote:
>On 06.06.2017 18:08, Emilien Macchi wrote:
>> Another benefit is that confd will generate a configuration file when
>> the application will start. So if etcd is down *after* the app
>> startup, it shouldn't break the service restart if we don't ask confd
>> to re-generate the config. It's good for operators who were concerned
>> about the fact the infrastructure would rely on etcd. In that case, we
>> would only need etcd at the initial deployment (and during lifecycle
>> actions like upgrades, etc).
>>
>> The downside is that in the case of containers, they would still have
>> a configuration file within the container, and the whole goal of this
>> feature was to externalize configuration data and stop having
>> configuration files.
>
>It doesn't look a strict requirement. Those configs may (and should) be
>bind-mounted into containers, as hostpath volumes. Or, am I missing
>something what *does* make embedded configs a strict requirement?..

mmh, one thing I liked about this effort was possibility of stop 
bind-mounting
config files into the containers. I'd rather find a way to not need any
bindmount and have the services get their configs themselves.

Flavio


-- 
@flaper87
Flavio Percoco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-08 Thread Lance Bragstad
After digging into etcd a bit, one place this might be help deployer
experience would be the handling of fernet keys for token encryption in
keystone. Currently, all keys used to encrypt and decrypt tokens are kept
on disk for each keystone node in the deployment. While simple, it requires
operators to perform rotation on a single node and then push, or sync, the
new key set to the rest of the nodes. This must be done in lock step in
order to prevent early token invalidation and inconsistent token responses.

An alternative would be to keep the keys in etcd and make the fernet bits
pluggable so that it's possible to read keys from disk or etcd (pending
configuration). The advantage would be that operators could initiate key
rotations from any keystone node in the deployment (or using etcd directly)
and not have to worry about distributing the new key set. Since etcd
associates metadata to the key-value pairs, we might be able to simplify
the rotation strategy as well.
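
To make the idea concrete, a minimal sketch of the etcd-backed half of such a
pluggable key repository might look like the following. This is not existing
keystone code; the key prefix, the python-etcd3 client and the repository
interface are all assumptions for illustration.

    import etcd3

    class EtcdFernetKeyRepository(object):
        """Hypothetical fernet key store backed by etcd instead of local files."""

        def __init__(self, host='127.0.0.1', port=2379,
                     prefix='/keystone/fernet/'):
            self.client = etcd3.client(host=host, port=port)
            self.prefix = prefix

        def load_keys(self):
            """Return keys ordered so the primary (highest index) comes first."""
            keys = {}
            for value, metadata in self.client.get_prefix(self.prefix):
                index = int(metadata.key.decode().rsplit('/', 1)[-1])
                keys[index] = value.decode()
            return [keys[i] for i in sorted(keys, reverse=True)]

        def add_key(self, index, key):
            """Publish a new key; every keystone node sees it on its next read."""
            self.client.put('%s%d' % (self.prefix, index), key)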

On Thu, Jun 8, 2017 at 11:37 AM, Mike Bayer  wrote:

>
>
> On 06/08/2017 12:47 AM, Joshua Harlow wrote:
>
>> So just out of curiosity, but do people really even know what etcd is
>> good for? I am thinking that there should be some guidance from folks in
>> the community as to where etcd should be used and where it shouldn't
>> (otherwise we just all end up in a mess).
>>
>
> So far I've seen a proposal of etcd3 as a replacement for memcached in
> keystone, and a new dogpile connector was added to oslo.cache to handle
> referring to etcd3 as a cache backend.  This is a really simplistic /
> minimal kind of use case for a key-store.
>
> But, keeping in mind I don't know anything about etcd3 other than "it's
> another key-store", it's the only database used by Kubernetes as a whole,
> which suggests it's doing a better job than Redis in terms of "durable".
>  So I wouldn't be surprised if new / existing openstack applications
> express some gravitational pull towards using it as their own datastore as
> well.I'll be trying to hang onto the etcd3 track as much as possible so
> that if/when that happens I still have a job :).
>
>
>
>
>
>> Perhaps a good idea to actually give examples of how it should be used,
>> how it shouldn't be used, what it offers, what it doesn't... Or at least
>> provide links for people to read up on this.
>>
>> Thoughts?
>>
>> Davanum Srinivas wrote:
>>
>>> One clarification: Since https://pypi.python.org/pypi/etcd3gw just
>>> uses the HTTP API (/v3alpha) it will work under both eventlet and
>>> non-eventlet environments.
>>>
>>> Thanks,
>>> Dims
>>>
>>>
>>> On Wed, Jun 7, 2017 at 6:47 AM, Davanum Srinivas
>>> wrote:
>>>
 Team,

 Here's the update to the base services resolution from the TC:
 https://governance.openstack.org/tc/reference/base-services.html

 First request is to Distros, Packagers, Deployers, anyone who
 installs/configures OpenStack:
 Please make sure you have latest etcd 3.x available in your
 environment for Services to use, Fedora already does, we need help in
 making sure all distros and architectures are covered.

 Any project who want to use etcd v3 API via grpc, please use:
 https://pypi.python.org/pypi/etcd3 (works only for non-eventlet
 services)

 Those that depend on eventlet, please use the etcd3 v3alpha HTTP API
 using:
 https://pypi.python.org/pypi/etcd3gw

 If you use tooz, there are 2 driver choices for you:
 https://github.com/openstack/tooz/blob/master/setup.cfg#L29
 https://github.com/openstack/tooz/blob/master/setup.cfg#L30

 If you use oslo.cache, there is a driver for you:
 https://github.com/openstack/oslo.cache/blob/master/setup.cfg#L33

 Devstack installs etcd3 by default and points cinder to it:
 http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/etcd3
 http://git.openstack.org/cgit/openstack-dev/devstack/tree/li
 b/cinder#n356

 Review in progress for keystone to use etcd3 for caching:
 https://review.openstack.org/#/c/469621/

 Doug is working on proposal(s) for oslo.config to store some
 configuration in etcd3:
 https://review.openstack.org/#/c/454897/

 So, feel free to turn on / test with etcd3 and report issues.

 Thanks,
 Dims

 --
 Davanum Srinivas :: https://twitter.com/dims

>>>
>>>
>>>
>>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> 

Re: [openstack-dev] [oslo.messaging]Optimize RPC performance by reusing callback queue

2017-06-08 Thread Ken Giusti
Hi,

Keep in mind the rabbit driver creates a single reply queue per *transport*
- that is per call to oslo.messaging's
get_transport/get_rpc_transport/get_notification_transport.

If you have multiple RPCClients sharing the same transport, then all
clients issuing RPC calls over that transport will use the same reply queue
(and multiplex incoming replies using a unique id in the reply itself).
See
https://git.openstack.org/cgit/openstack/oslo.messaging/tree/oslo_messaging/_drivers/amqpdriver.py?h=stable/newton#n452
for all the details.

But it cannot share the reply queue across transports - and certainly not
across processes :)
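
A short sketch of what "sharing the same transport" means in practice,
assuming a reachable RabbitMQ and an RPC server listening on the topics used
here (both the URL and the topics are placeholders):

    from oslo_config import cfg
    import oslo_messaging as messaging

    conf = cfg.CONF
    # One transport == one connection pool and, in the rabbit driver,
    # one shared reply queue for every client created from it.
    transport = messaging.get_rpc_transport(
        conf, url='rabbit://guest:guest@localhost:5672/')

    client_a = messaging.RPCClient(
        transport, messaging.Target(topic='compute', version='1.0'))
    client_b = messaging.RPCClient(
        transport, messaging.Target(topic='volume', version='1.0'))

    # Replies to both calls come back on the same reply queue and are
    # demultiplexed by the unique id oslo.messaging puts in each reply.
    client_a.call({}, 'ping')
    client_b.call({}, 'ping')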

-K



On Wed, Jun 7, 2017 at 10:29 PM, int32bit  wrote:

> Hi,
>
> Currently, I see that our RPC client always needs to create a new callback
> queue for every call request in order to track which reply belongs to it, at
> least in Newton. That's pretty inefficient and leads to poor performance. I
> also find some RPC implementations that have no need to create a new queue;
> they track the request and response by a correlation id in the message header
> (which RabbitMQ supports well - not sure if it is an AMQP standard?). The
> RabbitMQ official documentation provides a simple demo, see [1].
>
> So I am confused about why our oslo.messaging doesn't use this approach
> to optimize RPC performance. Is there a reason for this, or am I missing
> some potential cases?
>
> Thanks for any reply and discussion!
>
>
> [1] https://www.rabbitmq.com/tutorials/tutorial-six-python.html.
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Ken Giusti  (kgiu...@gmail.com)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Doug Hellmann
Excerpts from Flavio Percoco's message of 2017-06-08 18:27:51 +0200:
> On 08/06/17 18:23 +0200, Flavio Percoco wrote:
> >On 07/06/17 12:04 +0200, Bogdan Dobrelya wrote:
> >>On 06.06.2017 18:08, Emilien Macchi wrote:
> >>>Another benefit is that confd will generate a configuration file when
> >>>the application will start. So if etcd is down *after* the app
> >>>startup, it shouldn't break the service restart if we don't ask confd
> >>>to re-generate the config. It's good for operators who were concerned
> >>>about the fact the infrastructure would rely on etcd. In that case, we
> >>>would only need etcd at the initial deployment (and during lifecycle
> >>>actions like upgrades, etc).
> >>>
> >>>The downside is that in the case of containers, they would still have
> >>>a configuration file within the container, and the whole goal of this
> >>>feature was to externalize configuration data and stop having
> >>>configuration files.
> >>
> >>It doesn't look a strict requirement. Those configs may (and should) be
> >>bind-mounted into containers, as hostpath volumes. Or, am I missing
> >>something what *does* make embedded configs a strict requirement?..
> >
> >mmh, one thing I liked about this effort was possibility of stop 
> >bind-mounting
> >config files into the containers. I'd rather find a way to not need any
> >bindmount and have the services get their configs themselves.
> 
> Probably sent too early!
> 
> If we're not talking about OpenStack containers running in a COE, I guess this
> is fine. For k8s based deployments, I think I'd prefer having installers
> creating configmaps directly and use that. The reason is that depending on 
> files
> that are in the host is not ideal for these scenarios. I hate this idea 
> because
> it makes deployments inconsistent and I don't want that.
> 
> Flavio
> 

I'm not sure I understand how a configmap is any different from what is
proposed with confd in terms of deployment-specific data being added to
a container before it launches. Can you elaborate on that?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-06-08 Thread Arne Wiebalck

> On 08 Jun 2017, at 17:52, Matt Riedemann  wrote:
> 
> On 6/8/2017 10:17 AM, Arne Wiebalck wrote:
>>> On 08 Jun 2017, at 15:58, Matt Riedemann wrote:
>>> 
>>> Nova stores the output of the Cinder os-initialize_connection info API in 
>>> the Nova block_device_mappings table, and uses that later for making volume 
>>> connections.
>>> 
>>> This data can get out of whack or need to be refreshed, like if your ceph 
>>> server IP changes, or you need to recycle some secret uuid for your ceph 
>>> cluster.
>>> 
>>> I think the only ways to do this on the nova side today are via volume 
>>> detach/re-attach, reboot, migrations, etc - all of which, except live 
>>> migration, are disruptive to the running guest.
>>> 
>>> I've kicked around the idea of adding some sort of admin API interface for 
>>> refreshing the BDM.connection_info on-demand if needed by an operator. Does 
>>> anyone see value in this? Are operators doing stuff like this already, but 
>>> maybe via direct DB updates?
>>> 
>>> We could have something in the compute API which calls down to the compute 
>>> for an instance and has it refresh the connection_info from Cinder and 
>>> updates the BDM table in the nova DB. It could be an admin action API, or 
>>> part of the os-server-external-events API, like what we have for the 
>>> 'network-changed' event sent from Neutron which nova uses to refresh the 
>>> network info cache.
>>> 
>>> Other ideas or feedback here?
>> I have opened https://bugs.launchpad.net/cinder/+bug/1452641 for this issue 
>> some time ago.
>> Back then I was more thinking of using an alias and not deal with IP 
>> addresses directly. From
>> what I understand, this should work with Ceph. In any case, there is still 
>> interest in a fix :-)
>> Cheers,
>>  Arne
>> --
>> Arne Wiebalck
>> CERN IT
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> Yeah this was also discussed in the dev mailing list over a year ago:
> 
> http://lists.openstack.org/pipermail/openstack-dev/2016-May/095170.html 
> 
> 
> At that time I was opposed to a REST API for a *user* doing this, but I'm 
> more open to an *admin* (by default) doing this. Also, if it were initiated 
> via the volume API then Cinder could call the Nova os-server-external-events 
> API which is admin-only by default and then Nova can do a refresh.
> 
> Later in that thread Melanie Witt also has an idea about doing a refresh in a 
> periodic task on the compute service, like we do for refreshing the instance 
> network info cache with Neutron in a periodic task.

Wouldn’t using a mon alias (and not resolving it to the respective IP 
addresses) be enough? Or is that too backend specific?

The idea of a periodic task leveraging existing techniques sounds really nice, 
but if the overhead is regarded as too much (in the end, the IP addresses 
shouldn’t change that often), an admin-only API to be called when the addresses 
need to be updated sounds good to me as well.

Cheers,
 Arne

—
Arne Wiebalck
CERN IT__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [ci] Adding idempotency job on overcloud deployment.

2017-06-08 Thread Ben Nemec



On 06/08/2017 10:16 AM, Emilien Macchi wrote:

On Thu, Jun 8, 2017 at 1:47 PM, Sofer Athlan-Guyot  wrote:

Hi,

Alex Schultz  writes:


On Wed, Jun 7, 2017 at 5:20 AM, Sofer Athlan-Guyot  wrote:

Hi,

Emilien Macchi  writes:


On Wed, Jun 7, 2017 at 12:45 PM, Sofer Athlan-Guyot  wrote:

Hi,

I don't think we have such a job in place.  Basically that would check
that re-running the "openstack deploy ..." command won't do anything.


I've had a look at openstack-infra/tripleo-ci.  Should I test it with
ovb/quickstart or tripleo.sh?  Both ways are fine by me, but I may be
lacking context about which one is more relevant.


We had such an error in the past[1], but I'm not sure this has been
captured by an associated job.

WDYT?


It would be interesting to measure how much time it takes to run
it again.


Could you point out how such an experiment could be done?


If it's short, we could add it to all our scenarios + ovb
jobs.  If it's long, maybe we need an additional job, but it would
take more resources, so maybe we could run it in periodic pipeline
(note that periodic jobs are not optimal since we could break
something quite easily).


Just adding as context that the issue was already raised[1].  Besides the
time constraint, it was pointed out that we would also need to parse the
log to find out if anything was restarted.  But that could be a second
step.  For parsing, this code was pointed out[2].



There are a few things that would need to be enabled in order to reuse
some of this work.  We'll need to add the ability to generate a report
on the puppet run[0]. And then we'll need to be able to capture it[1]
somewhere so that we can use that parsing code on it.  From there,
just rerunning the installation would be a simple start to the
idempotency check.  In Fuel, we had hacked in a special flag[2] that
we used in testing to rerun a task immediately to find when
a specific task was not idempotent, in addition to also rerunning the
entire deployment. For TripleO a similar concept would be to rerun the
steps twice, but that's usually not where the issues crop up for us. So
rerunning the entire deployment would be better, as we
tend to have issues with configuration items between steps
conflicting.


Maybe we could go with something equivalent to:

  ts="$(date '+%F %T')"
  ... re-run deploy command ...

  sudo journalctl --since="${ts}" | egrep 'Stopping|Starting' | grep -v 'user.*slice' > restarted.log
  wc -l restarted.log

This should be 0 on every overcloud node.

This is simpler to implement and should catch any unwanted service
restart.

WDYT?


It's smart, for services. It doesn't cover configuration file changes
and other resources managed by Puppet, like Keystone resources, etc.
But it's an excellent start to me.


I just want to point out that the updates job is already doing this when 
it runs in every repo except tripleo-heat-templates (that's the only 
package we actually update in the updates job, every other project is a 
noop).  I can also tell you how long it takes to redo a deployment with 
no changes: just under 2000 seconds, or around 33 minutes.  At least 
that's the current average in tripleo-ci right now (although I see we 
just added around 100 seconds to the update time in the last day or two. 
*sigh*).






Thanks,
-Alex

[0] https://review.openstack.org/#/c/273740/4/mcagents/puppetd.rb@204
[1] https://review.openstack.org/#/c/273740/4/mcagents/puppetd.rb@102
[2] https://review.openstack.org/#/c/273737/


[1] http://lists.openstack.org/pipermail/openstack-dev/2017-March/114836.html
[2] 
https://review.openstack.org/#/c/279271/9/fuelweb_test/helpers/astute_log_parser.py@212




[1] https://bugs.launchpad.net/tripleo/+bug/1664650


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [all][api] POST /api-wg/news

2017-06-08 Thread Chris Dent


Greetings OpenStack community,

Today's meeting was mostly devoted to two topics: Which of Monty's several 
patches were ready for freeze and some naming issues with raising the minimum 
microversion.

We decided three of Monty's patches are ready, they are listed below in the 
"Freeze" section.

The naming issue relates to an optional field we want to add to the microversion discovery 
document. Some projects wish to be able to signal that they are intending to raise the minimum 
microversion at a point in the future. The name for the next minimum version is fairly clear: 
"next_min_version". What's less clear is the name which can be used for the field that 
states the earliest date at which this will happen. This cannot be a definitive date because 
different deployments will release the new code at different times. We can only say "it will 
be no earlier than this time".

Naming this field has proven difficult. The original was "not_before", but that has no 
association with "min_version" so is potentially confusing. However, people who know how 
to parse the doc will know what it means so it may not matter. As always, naming is hard, so we 
seek input from the community to help us find a suitable name. This is something we don't want to 
ever have to change, so it needs to be correct from the start. Candidates include:

* not_before
* not_raise_min_before
* min_raise_not_before
* earliest_min_raise_date
* min_version_eol_date
* next_min_version_effective_date

If you have an opinion on any of these, or a better suggestion please let us know, 
either on the review at , or in 
response to this message.
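For concreteness, here is a hedged sketch (plain Python data, not any project's
actual response) of where the proposed fields would sit in a version discovery
document; the field names are just the candidates above and nothing is decided:

    # Illustrative only: signalling a future raise of the minimum microversion.
    discovery_doc = {
        "versions": [{
            "id": "v2.1",
            "status": "CURRENT",
            "min_version": "2.1",          # current minimum microversion
            "version": "2.53",             # current maximum microversion
            "next_min_version": "2.10",    # the minimum we intend to raise to
            "not_before": "2018-04-01",    # earliest date the raise could take effect
        }]
    }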

# Newly Published Guidelines

Nothing new at this time.

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

* Add guideline about consuming endpoints from catalog
  https://review.openstack.org/#/c/462814/

* Add support for historical service type aliases
  https://review.openstack.org/#/c/460654/

* Describe the publication of service-types-authority data
  https://review.openstack.org/#/c/462815/

# Guidelines Currently Under Review [3]

* Microversions: add next_min_version field in version body
  https://review.openstack.org/#/c/446138/

* A suite of several documents about doing version discovery
  Start at https://review.openstack.org/#/c/459405/

* WIP: microversion architecture archival doc (very early; not yet ready for 
review)
  https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API WG, please address your concerns in 
an email to the OpenStack developer mailing list[1] with the tag "[api]" in the 
subject. In your email, you should include any relevant reviews, links, and comments to 
help guide the discussion of the specific challenge you are facing.

To learn more about the API WG mission and the work we do, see OpenStack API 
Working Group [2].

Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] Start at https://review.openstack.org/#/c/462814/
[5] https://review.openstack.org/#/c/446138/


Meeting Agenda
https://wiki.openstack.org/wiki/Meetings/API-WG#Agenda
Past Meeting Records
http://eavesdrop.openstack.org/meetings/api_wg/
Open Bugs
https://bugs.launchpad.net/openstack-api-wg

--
Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
freenode: cdent tw: @anticdent__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Michał Jastrzębski
On 8 June 2017 at 09:27, Flavio Percoco  wrote:
> On 08/06/17 18:23 +0200, Flavio Percoco wrote:
>>
>> On 07/06/17 12:04 +0200, Bogdan Dobrelya wrote:
>>>
>>> On 06.06.2017 18:08, Emilien Macchi wrote:

 Another benefit is that confd will generate a configuration file when
 the application will start. So if etcd is down *after* the app
 startup, it shouldn't break the service restart if we don't ask confd
 to re-generate the config. It's good for operators who were concerned
 about the fact the infrastructure would rely on etcd. In that case, we
 would only need etcd at the initial deployment (and during lifecycle
 actions like upgrades, etc).

 The downside is that in the case of containers, they would still have
 a configuration file within the container, and the whole goal of this
 feature was to externalize configuration data and stop having
 configuration files.
>>>
>>>
>>> It doesn't look a strict requirement. Those configs may (and should) be
>>> bind-mounted into containers, as hostpath volumes. Or, am I missing
>>> something what *does* make embedded configs a strict requirement?..
>>
>>
>> mmh, one thing I liked about this effort was possibility of stop
>> bind-mounting
>> config files into the containers. I'd rather find a way to not need any
>> bindmount and have the services get their configs themselves.
>
>
> Probably sent too early!
>
> If we're not talking about OpenStack containers running in a COE, I guess
> this
> is fine. For k8s based deployments, I think I'd prefer having installers
> creating configmaps directly and use that. The reason is that depending on
> files
> that are in the host is not ideal for these scenarios. I hate this idea
> because
> it makes deployments inconsistent and I don't want that.

Well, I disagree. If we're doing this we're essentially getting rid of
"files" altogether. It might actually be easier to handle from a COE than
a configmap, as the configmap has to be generated, and when you get to
host-specific things it's quite a pain to handle. I, for one, would happily
use a central DB for config options if we define the schema correctly.

That being said, defining the schema correctly is quite a challenge. A few
hard cases I see right now can be found in a single use case - PCI
passthrough:

1. I have multiple PCI devices in a host, and I need to specify a list of them.
2. PCI buses differ from host to host, so I need to specify groups of hosts
that will share the same bus configuration and reflect that in the service
config.

Maybe we should gather a few hard use cases like that and make sure
we can address them in our config schema?
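As a starting point for that discussion, a hedged sketch of one possible
host-group/host-override key layout using the python etcd3 client; the key
names, the group concept and the JSON encoding are illustrative assumptions,
not an agreed schema:

    # Hedged sketch of a host-group scoped config layout for hard cases like
    # PCI passthrough. Key names and values are invented for illustration.
    import json
    import etcd3

    etcd = etcd3.client(host='192.0.2.10', port=2379)  # placeholder address

    # Group-level default shared by all hosts with the same PCI bus layout.
    etcd.put('/config/nova/groups/gpu-hosts/pci_passthrough_whitelist',
             json.dumps([{"vendor_id": "10de", "product_id": "1db4"}]))

    # Host-level override for a box whose bus addresses differ from the group.
    etcd.put('/config/nova/hosts/compute-17/pci_passthrough_whitelist',
             json.dumps([{"address": "0000:84:00.0"}]))

    # A consumer (confd, an oslo.config driver, ...) would read the host key
    # first and fall back to the group key.
    value, _meta = etcd.get('/config/nova/hosts/compute-17/pci_passthrough_whitelist')
    print(json.loads(value))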

>
> Flavio
>
> --
> @flaper87
> Flavio Percoco
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Ansible roles repo and how to inject them into the overcloud

2017-06-08 Thread Ben Nemec

On 06/07/2017 09:25 AM, Juan Antonio Osorio wrote:

Hi folks!

I would like to know if there are thoughts about where to put
tripleo-specific ansible roles.

I've been working lately on a role that would deploy ipsec tunnels for
most networks in an overcloud [1]. And I think that would be quite
useful for folks as an alternative to TLS everywhere. However, I don't
know in what TripleO repository I could put that role. Any ideas?

Also, I know I could call that from a composable service (although I
would need that to be ran after the puppet steps so maybe I'll need an
extra hook). However, is there any recommended way right now on how to
inject extra ansible roles into the overcloud nodes? If not, maybe a
dedicated hook to do this kind of thing would be something useful for
others as well.


I believe you could use the artifact deployment hook.  It can drop files 
anywhere on the filesystem.


http://hardysteven.blogspot.com/2016/08/tripleo-deploy-artifacts-and-puppet.html

If this is a thing we expect to be doing a lot we might consider adding 
an ansible-specific version like we did for puppet.
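For reference, a hedged sketch of generating the small environment file that
points the artifact hook at a tarball of the role; the DeployArtifactURLs
parameter name comes from the blog post above, while the Swift URL and file
names are placeholder assumptions:

    # Hedged sketch: emit a Heat environment that makes the deploy-artifacts
    # hook pull a tarball (e.g. the tripleo-ipsec role) onto overcloud nodes.
    import yaml

    artifact_url = 'https://swift.example.com/v1/AUTH_demo/artifacts/tripleo-ipsec.tar.gz'
    env = {'parameter_defaults': {'DeployArtifactURLs': [artifact_url]}}

    with open('ipsec-artifacts-env.yaml', 'w') as f:
        yaml.safe_dump(env, f, default_flow_style=False)
    # Then include it with: openstack overcloud deploy ... -e ipsec-artifacts-env.yaml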




Any thoughts?

[1] https://github.com/JAORMX/tripleo-ipsec

--
Juan Antonio Osorio R.
e-mail: jaosor...@gmail.com 



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [forum] Future of Stackalytics

2017-06-08 Thread Ken'ichi Ohmichi
2017-06-08 7:19 GMT-07:00 Jeremy Stanley :
> On 2017-06-07 16:36:45 -0700 (-0700), Ken'ichi Ohmichi wrote:
> [...]
>> one of the config files is 30K lines due to the amount of user information,
>> and that makes maintenance hard now. I am trying to separate the user part
>> from the existing file but I cannot find a way to reach
>> consensus on such a thing.
>
> There is a foundation member directory API now which provides
> affiliation details and history, so if it were my project (it's not
> though) I'd switch to querying that and delete all the static
> affiliation mapping out of that config instead. Not only would it
> significantly reduce the reviewer load for Stackalytics, but it
> would also provide a greater incentive for contributors to keep
> their affiliation data updated in the foundation member directory.

Interesting idea, thanks.
It would be nice to centralize such information in a single place.
Could you share the details of the API? I'd like to take a look and do some prototyping.

>> In addition, we have two ways for managing bug reports: launchpad and
>> storyboard if considering it as infra project.
>
> It's not (at least presently) an Infrastructure team deliverable.
> It's only an unofficial project which happens to have granted the
> infra-core team approval rights (for reasons I don't recall, if I
> ever even knew it was the case before now).
>
>> It would be necessary to transport them from launchpad, I guess.
> [...]
>
> If its maintainers want to migrate from LP to SB, we already have an
> import script which copies in all the existing bug reports so that's
> not really a challenge.

Oh, cool. Glad to hear that :)

Thanks

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [OpenStack-Ansible][OSA] Next week's meeting is cancelled

2017-06-08 Thread Amy Marrich
Just wanted to give everyone a heads up that next week's meeting on 6/15
has been cancelled.

Thanks all!

Amy (spotz)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo] Install Kubernetes in the overcloud using TripleO

2017-06-08 Thread Flavio Percoco

Hey y'all,

Just wanted to give an update on the work around tripleo+kubernetes. This is
still far in the future but as we move tripleo to containers using docker-cmd,
we're also working on the final goal, which is to have it run these containers
on kubernetes.

One of the first steps is to have TripleO install Kubernetes in the overcloud
nodes and I've moved forward with this work:

https://review.openstack.org/#/c/471759/

The patch depends on the `ceph-ansible` work and it uses the mistral-ansible
action to deploy kubernetes by leveraging kargo. As it is, the patch doesn't
quite work as it requires some files to be in some places (ssh keys) and a
couple of other things. None of these "things" are blockers as in they can be
solved by just sending some patches here and there.

I thought I'd send this out as an update and to request some early feedback on
the direction of this patch. The patch, of course, works in my local environment
;)

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-08 Thread Mike Bayer



On 06/08/2017 12:47 AM, Joshua Harlow wrote:
So just out of curiosity, but do people really even know what etcd is 
good for? I am thinking that there should be some guidance from folks in 
the community as to where etcd should be used and where it shouldn't 
(otherwise we just all end up in a mess).


So far I've seen a proposal of etcd3 as a replacement for memcached in 
keystone, and a new dogpile connector was added to oslo.cache to handle 
referring to etcd3 as a cache backend.  This is a really simplistic / 
minimal kind of use case for a key-store.
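For a concrete (and hedged) picture of that use case, roughly what it boils
down to on the dogpile side; the backend name and argument names here are
assumptions to be checked against the oslo.cache change linked later in this
message:

    # Hedged sketch: a dogpile.cache region backed by etcd3 through oslo.cache.
    from dogpile.cache import make_region

    region = make_region().configure(
        'oslo_cache.etcd3gw',                     # assumed backend entry-point name
        expiration_time=300,
        arguments={'host': '192.0.2.10', 'port': 2379},
    )

    region.set('token-abc123', {'user_id': '42'})
    print(region.get('token-abc123'))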


But, keeping in mind I don't know anything about etcd3 other than "it's 
another key-store", it's the only database used by Kubernetes as a 
whole, which suggests it's doing a better job than Redis in terms of 
"durable".   So I wouldn't be surprised if new / existing openstack 
applications express some gravitational pull towards using it as their 
own datastore as well.I'll be trying to hang onto the etcd3 track as 
much as possible so that if/when that happens I still have a job :).






Perhaps a good idea to actually give examples of how it should be used, 
how it shouldn't be used, what it offers, what it doesn't... Or at least 
provide links for people to read up on this.


Thoughts?

Davanum Srinivas wrote:

One clarification: Since https://pypi.python.org/pypi/etcd3gw just
uses the HTTP API (/v3alpha) it will work under both eventlet and
non-eventlet environments.

Thanks,
Dims


On Wed, Jun 7, 2017 at 6:47 AM, Davanum Srinivas  
wrote:

Team,

Here's the update to the base services resolution from the TC:
https://governance.openstack.org/tc/reference/base-services.html

First request is to Distros, Packagers, Deployers, anyone who
installs/configures OpenStack:
Please make sure you have latest etcd 3.x available in your
environment for Services to use, Fedora already does, we need help in
making sure all distros and architectures are covered.

Any project who want to use etcd v3 API via grpc, please use:
https://pypi.python.org/pypi/etcd3 (works only for non-eventlet 
services)


Those that depend on eventlet, please use the etcd3 v3alpha HTTP API 
using:

https://pypi.python.org/pypi/etcd3gw

If you use tooz, there are 2 driver choices for you:
https://github.com/openstack/tooz/blob/master/setup.cfg#L29
https://github.com/openstack/tooz/blob/master/setup.cfg#L30
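(For what it's worth, a hedged sketch of the tooz path; the 'etcd3://' URL
scheme for the grpc-based driver is my assumption, with 'etcd3+http' as the
eventlet-friendly alternative.)

    # Hedged sketch: distributed locking via tooz on top of etcd3.
    from tooz import coordination

    coordinator = coordination.get_coordinator('etcd3://192.0.2.10:2379', b'node-1')
    coordinator.start()

    lock = coordinator.get_lock(b'my-resource')
    with lock:
        pass  # critical section: only one member holds the lock at a time

    coordinator.stop()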

If you use oslo.cache, there is a driver for you:
https://github.com/openstack/oslo.cache/blob/master/setup.cfg#L33

Devstack installs etcd3 by default and points cinder to it:
http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/etcd3
http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/cinder#n356 



Review in progress for keystone to use etcd3 for caching:
https://review.openstack.org/#/c/469621/

Doug is working on proposal(s) for oslo.config to store some
configuration in etcd3:
https://review.openstack.org/#/c/454897/

So, feel free to turn on / test with etcd3 and report issues.

Thanks,
Dims

--
Davanum Srinivas :: https://twitter.com/dims






__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Flavio Percoco

On 08/06/17 18:23 +0200, Flavio Percoco wrote:

On 07/06/17 12:04 +0200, Bogdan Dobrelya wrote:

On 06.06.2017 18:08, Emilien Macchi wrote:

Another benefit is that confd will generate a configuration file when
the application will start. So if etcd is down *after* the app
startup, it shouldn't break the service restart if we don't ask confd
to re-generate the config. It's good for operators who were concerned
about the fact the infrastructure would rely on etcd. In that case, we
would only need etcd at the initial deployment (and during lifecycle
actions like upgrades, etc).

The downside is that in the case of containers, they would still have
a configuration file within the container, and the whole goal of this
feature was to externalize configuration data and stop having
configuration files.


It doesn't look a strict requirement. Those configs may (and should) be
bind-mounted into containers, as hostpath volumes. Or, am I missing
something what *does* make embedded configs a strict requirement?..


mmh, one thing I liked about this effort was the possibility of no longer bind-mounting
config files into the containers. I'd rather find a way to not need any
bind mount and have the services get their configs themselves.


Probably sent too early!

If we're not talking about OpenStack containers running in a COE, I guess this
is fine. For k8s-based deployments, I think I'd prefer having installers
create configmaps directly and use those. The reason is that depending on files
that live on the host is not ideal for these scenarios. I hate this idea because
it makes deployments inconsistent and I don't want that.

Flavio

--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Flavio Percoco

On 07/06/17 12:04 +0200, Bogdan Dobrelya wrote:

On 06.06.2017 18:08, Emilien Macchi wrote:

Another benefit is that confd will generate a configuration file when
the application will start. So if etcd is down *after* the app
startup, it shouldn't break the service restart if we don't ask confd
to re-generate the config. It's good for operators who were concerned
about the fact the infrastructure would rely on etcd. In that case, we
would only need etcd at the initial deployment (and during lifecycle
actions like upgrades, etc).

The downside is that in the case of containers, they would still have
a configuration file within the container, and the whole goal of this
feature was to externalize configuration data and stop having
configuration files.


It doesn't look a strict requirement. Those configs may (and should) be
bind-mounted into containers, as hostpath volumes. Or, am I missing
something what *does* make embedded configs a strict requirement?..


mmh, one thing I liked about this effort was the possibility of no longer bind-mounting
config files into the containers. I'd rather find a way to not need any
bind mount and have the services get their configs themselves.

Flavio


--
@flaper87
Flavio Percoco


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [TripleO] PSA: Please WIP your WIP patches

2017-06-08 Thread Ben Nemec
This makes it easier to filter them out when looking for wayward 
patches.  Right now the list of 15 oldest reviews includes at least 5 
WIP patches that weren't marked as such in Gerrit.


For anyone who's not aware, to mark a patch WIP in Gerrit you set 
Workflow -1 on it.  Anyone can do that to their own patches, and I 
believe cores can do it to any patch (but I'd rather not because then I 
end up subscribed :-).


Thanks.

-Ben

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ironic] Hardware provisioning testing for Ocata

2017-06-08 Thread Justin Kilpatrick
Hi Emilien,

I'll try and get a list of the Perf team's TripleO deployment
bugs and bring them to the deployment hackfest.

I look forward to participating!

- Justin

On Thu, Jun 8, 2017 at 11:10 AM, Emilien Macchi  wrote:
> On Thu, Jun 8, 2017 at 2:21 PM, Justin Kilpatrick  wrote:
>> Morning everyone,
>>
>> I've been working on a performance testing tool for TripleO hardware
>> provisioning operations off and on for about a year now and I've been
>> using it to try and collect more detailed data about how TripleO
>> performs in scale and production use cases. Perhaps more importantly
>> YODA (Yet Openstack Deployment Tool, Another) automates the task
>> enough that days of deployment testing is a set it and forget it
>> operation.
>>
>> You can find my testing tool here [0] and the test report [1] has
>> links to raw data and visualization. Just scroll down, click the
>> captcha and click "go to kibana". I still need to port that machine
>> from my own solution over to search guard.
>>
>> If you have too much email to consider clicking links I'll copy the
>> results summary here.
>>
>> TripleO inspection workflows have seen massive improvements from
>> Newton with a failure rate for 50 nodes with the default workflow
>> falling from 100% to <15%. Using patches slated for Pike that spurious
>> failure rate reaches zero.
>>
>> Overcloud deployments show a significant improvement of deployment
>> speed in HA and stack update tests.
>>
>> Ironic deployments in the overcloud allow the use of Ironic for bare
>> metal scale out alongside more traditional VM compute. Considering a
>> single conductor starts to struggle around 300 nodes it will be
>> difficult to push a multi-conductor setup to its limits.
>>
>> Finally, Ironic node cleaning shows a similar failure rate to
>> inspection and will require similar attention in TripleO workflows to
>> become painless.
>>
>> [0] https://review.openstack.org/#/c/384530/
>> [1] 
>> https://docs.google.com/document/d/194ww0Pi2J-dRG3-X75mphzwUZVPC2S1Gsy1V0K0PqBo/
>>
>> Thanks for your time!
>
> Hey Justin,
>
> All of this is really cool. I was wondering if you had a list of bugs
> that you've faced or reported yourself regarding to performances
> issues in TripleO.
> As you might have seen in a separate thread on openstack-dev, we're
> planning a sprint on June 21/22th to improve performances in TripleO.
> We would love your participation or someone from your team and if you
> have time before, please add the deployment-time tag to the Launchpad
> bugs that you know related to performances.
>
> Thanks a lot,
>
>> - Justin
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-06-08 Thread Matt Riedemann

On 6/8/2017 10:17 AM, Arne Wiebalck wrote:


On 08 Jun 2017, at 15:58, Matt Riedemann wrote:


Nova stores the output of the Cinder os-initialize_connection info API 
in the Nova block_device_mappings table, and uses that later for 
making volume connections.


This data can get out of whack or need to be refreshed, like if your 
ceph server IP changes, or you need to recycle some secret uuid for 
your ceph cluster.


I think the only ways to do this on the nova side today are via volume 
detach/re-attach, reboot, migrations, etc - all of which, except live 
migration, are disruptive to the running guest.


I've kicked around the idea of adding some sort of admin API interface 
for refreshing the BDM.connection_info on-demand if needed by an 
operator. Does anyone see value in this? Are operators doing stuff 
like this already, but maybe via direct DB updates?


We could have something in the compute API which calls down to the 
compute for an instance and has it refresh the connection_info from 
Cinder and updates the BDM table in the nova DB. It could be an admin 
action API, or part of the os-server-external-events API, like what we 
have for the 'network-changed' event sent from Neutron which nova uses 
to refresh the network info cache.


Other ideas or feedback here?


I have opened https://bugs.launchpad.net/cinder/+bug/1452641 for this 
issue some time ago.
Back then I was more thinking of using an alias and not deal with IP 
addresses directly. From
what I understand, this should work with Ceph. In any case, there is 
still interest in a fix :-)


Cheers,
  Arne


--
Arne Wiebalck
CERN IT



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Yeah this was also discussed in the dev mailing list over a year ago:

http://lists.openstack.org/pipermail/openstack-dev/2016-May/095170.html

At that time I was opposed to a REST API for a *user* doing this, but 
I'm more open to an *admin* (by default) doing this. Also, if it were 
initiated via the volume API then Cinder could call the Nova 
os-server-external-events API which is admin-only by default and then 
Nova can do a refresh.
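For context, a hedged sketch of what such a nudge looks like on the wire,
reusing the existing external events plumbing; the endpoint, token and UUID
are placeholders, and a volume-specific event name (unlike 'network-changed')
does not exist today:

    # Hedged sketch: poking Nova's admin-only external events API.
    import requests

    NOVA_ENDPOINT = 'http://192.0.2.10:8774/v2.1'   # placeholder assumption
    TOKEN = 'ADMIN_SCOPED_TOKEN'                    # placeholder assumption
    SERVER_UUID = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'

    body = {'events': [{'name': 'network-changed', 'server_uuid': SERVER_UUID}]}
    resp = requests.post(NOVA_ENDPOINT + '/os-server-external-events',
                         json=body,
                         headers={'X-Auth-Token': TOKEN})
    print(resp.status_code, resp.json())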


Later in that thread Melanie Witt also has an idea about doing a refresh 
in a periodic task on the compute service, like we do for refreshing the 
instance network info cache with Neutron in a periodic task.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron][l2gw] OVS code currently broken

2017-06-08 Thread Ricardo Noriega De Soto
There is actually a bunch of patches waiting to be reviewed and approved.

Please, we'd need core reviewers to jump in.

I'd like to thank Gary for all his support and reviews.

Thanks Gary!

On Tue, May 30, 2017 at 3:56 PM, Gary Kotton  wrote:

> Hi,
>
> Please note that the L2 GW code is currently broken due to the commit
> e6333593ae6005c4b0d73d9dfda5eb47f40dd8da
>
> If someone has the cycles can they please take a look.
>
> Thanks
>
> gary
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>


-- 
Ricardo Noriega

Senior Software Engineer - NFV Partner Engineer | Office of Technology  |
Red Hat
irc: rnoriega @freenode
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ptls][all][tc][docs] Documentation migration spec

2017-06-08 Thread Alexandra Settle
Hi everyone,

Doug and I have written up a spec following on from the conversation [0] that 
we had regarding the future of documentation publishing.

Please take the time out of your day to review the spec as this affects 
*everyone*.

See: https://review.openstack.org/#/c/472275/

I will be PTO from the 9th – 19th of June. If you have any pressing concerns, 
please email me and I will get back to you as soon as I can, or, email Doug 
Hellmann and hopefully he will be able to assist you.

Thanks,

Alex

[0] http://lists.openstack.org/pipermail/openstack-dev/2017-May/117162.html
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-06-08 Thread Arne Wiebalck

On 08 Jun 2017, at 15:58, Matt Riedemann wrote:

Nova stores the output of the Cinder os-initialize_connection info API in the 
Nova block_device_mappings table, and uses that later for making volume 
connections.

This data can get out of whack or need to be refreshed, like if your ceph 
server IP changes, or you need to recycle some secret uuid for your ceph 
cluster.

I think the only ways to do this on the nova side today are via volume 
detach/re-attach, reboot, migrations, etc - all of which, except live 
migration, are disruptive to the running guest.

I've kicked around the idea of adding some sort of admin API interface for 
refreshing the BDM.connection_info on-demand if needed by an operator. Does 
anyone see value in this? Are operators doing stuff like this already, but 
maybe via direct DB updates?

We could have something in the compute API which calls down to the compute for 
an instance and has it refresh the connection_info from Cinder and updates the 
BDM table in the nova DB. It could be an admin action API, or part of the 
os-server-external-events API, like what we have for the 'network-changed' 
event sent from Neutron which nova uses to refresh the network info cache.

Other ideas or feedback here?

I have opened https://bugs.launchpad.net/cinder/+bug/1452641 for this issue 
some time ago.
Back then I was more thinking of using an alias and not deal with IP addresses 
directly. From
what I understand, this should work with Ceph. In any case, there is still 
interest in a fix :-)

Cheers,
 Arne


--
Arne Wiebalck
CERN IT

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [ci] Adding idempotency job on overcloud deployment.

2017-06-08 Thread Emilien Macchi
On Thu, Jun 8, 2017 at 1:47 PM, Sofer Athlan-Guyot  wrote:
> Hi,
>
> Alex Schultz  writes:
>
>> On Wed, Jun 7, 2017 at 5:20 AM, Sofer Athlan-Guyot  
>> wrote:
>>> Hi,
>>>
>>> Emilien Macchi  writes:
>>>
 On Wed, Jun 7, 2017 at 12:45 PM, Sofer Athlan-Guyot  
 wrote:
> Hi,
>
> I don't think we have such a job in place.  Basically that would check
> that re-running the "openstack deploy ..." command won't do anything.
>
> I've had a look at openstack-infra/tripleo-ci.  Should I test it with
> ovb/quickstart or tripleo.sh?  Both ways are fine by me, but I may be
> lacking context about which one is more relevant.
>
> We had such an error in the past[1], but I'm not sure this has been
> captured by an associated job.
>
> WDYT ?

 It would be interesting to measure how much time it takes to run
 it again.
>>>
>>> Could you point out how such an experiment could be done?
>>>
 If it's short, we could add it to all our scenarios + ovb
 jobs.  If it's long, maybe we need an additional job, but it would
 take more resources, so maybe we could run it in periodic pipeline
 (note that periodic jobs are not optimal since we could break
 something quite easily).
>>>
>>> Just adding as context that the issue was already raised[1].  Besides the
>>> time constraint, it was pointed out that we would also need to parse the
>>> log to find out if anything was restarted.  But it could be a second
>>> step.  For parsing, this code was pointed out[2].
>>>
>>
>> There are a few things that would need to be enabled in order to reuse
>> some of this work.  We'll need to add the ability to generate a report
>> on the puppet run[0]. And then we'll need to be able to capture it[1]
>> somewhere that we could then use that parsing code on.  From there,
>> just rerunning the installation would be a simple start to the
>> idempotency check.  In fuel, we had hacked in a special flag[2] that
>> we used in testing to actually rerun the task immediately to find when
>> a specific task was not idempotent in addition to also rerunning the
>> entire deployment. For tripleo a similar concept would be to rerun the
>> steps twice, but that's usually not where the issues crop up for us. So
>> rerunning the entire installation deployment would be better as we
>> tend to have issues with configuration items between steps
>> conflicting.
>
> Maybe we could go with something equivalent to:
>
>   ts="$(date '+%F %T')"
>   ... re-run deploy command ...
>
>   sudo journalctl --since="${ts}" | egrep 'Stopping|Starting' | grep -v 'user.*slice' > restarted.log
>   wc -l restarted.log
>
> This should be 0 on every overcloud node.
>
> This is simpler to implement and should catch any unwanted service
> restart.
>
> WDYT ?

It's smart, for services. It doesn't cover configuration file changes
and other resources managed by Puppet, like Keystone resources, etc.
But it's an excellent start to me.
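For the Puppet side, a hedged sketch of a complementary check; the summary
file path and its keys are assumptions about the Puppet agent defaults rather
than anything TripleO ships today:

    # Hedged sketch: flag a non-idempotent re-run by counting resources that
    # Puppet reports as changed or out of sync on the second pass.
    import sys
    import yaml

    SUMMARY = '/var/lib/puppet/state/last_run_summary.yaml'  # assumed default path

    with open(SUMMARY) as f:
        summary = yaml.safe_load(f)

    resources = summary.get('resources', {})
    changed = resources.get('changed', 0)
    out_of_sync = resources.get('out_of_sync', 0)
    print('changed=%s out_of_sync=%s' % (changed, out_of_sync))
    sys.exit(0 if changed == 0 and out_of_sync == 0 else 1)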

>>
>> Thanks,
>> -Alex
>>
>> [0] https://review.openstack.org/#/c/273740/4/mcagents/puppetd.rb@204
>> [1] https://review.openstack.org/#/c/273740/4/mcagents/puppetd.rb@102
>> [2] https://review.openstack.org/#/c/273737/
>>
>>> [1] 
>>> http://lists.openstack.org/pipermail/openstack-dev/2017-March/114836.html
>>> [2] 
>>> https://review.openstack.org/#/c/279271/9/fuelweb_test/helpers/astute_log_parser.py@212
>>>

> [1] https://bugs.launchpad.net/tripleo/+bug/1664650
> --
> Sofer Athlan-Guyot
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Emilien Macchi

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>> --
>>> Sofer Athlan-Guyot
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> Thanks,
> --
> Sofer Athlan-Guyot
>
> __
> OpenStack Development 

[openstack-dev] [openstack-doc] [dev] Doc team meeting cancellation

2017-06-08 Thread Alexandra Settle
Hi everyone,

Next week, Thursday the 15th of June, the OpenStack manuals meeting is 
cancelled as I will be PTO.

The next documentation team meeting will be the following bi-weekly allotment 
on the 29th of June.

If anyone has any pressing concerns, please email me and I will get back to you 
when I am able. I will return online on the 19th of June.

Thanks,

Alex
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo][ironic] Hardware provisioning testing for Ocata

2017-06-08 Thread Emilien Macchi
On Thu, Jun 8, 2017 at 2:21 PM, Justin Kilpatrick  wrote:
> Morning everyone,
>
> I've been working on a performance testing tool for TripleO hardware
> provisioning operations off and on for about a year now and I've been
> using it to try and collect more detailed data about how TripleO
> performs in scale and production use cases. Perhaps more importantly
> YODA (Yet Openstack Deployment Tool, Another) automates the task
> enough that days of deployment testing is a set it and forget it
> operation.
>
> You can find my testing tool here [0] and the test report [1] has
> links to raw data and visualization. Just scroll down, click the
> captcha and click "go to kibana". I still need to port that machine
> from my own solution over to search guard.
>
> If you have too much email to consider clicking links I'll copy the
> results summary here.
>
> TripleO inspection workflows have seen massive improvements from
> Newton with a failure rate for 50 nodes with the default workflow
> falling from 100% to <15%. Using patches slated for Pike that spurious
> failure rate reaches zero.
>
> Overcloud deployments show a significant improvement of deployment
> speed in HA and stack update tests.
>
> Ironic deployments in the overcloud allow the use of Ironic for bare
> metal scale out alongside more traditional VM compute. Considering a
> single conductor starts to struggle around 300 nodes it will be
> difficult to push a multi-conductor setup to its limits.
>
> Finally, Ironic node cleaning shows a similar failure rate to
> inspection and will require similar attention in TripleO workflows to
> become painless.
>
> [0] https://review.openstack.org/#/c/384530/
> [1] 
> https://docs.google.com/document/d/194ww0Pi2J-dRG3-X75mphzwUZVPC2S1Gsy1V0K0PqBo/
>
> Thanks for your time!

Hey Justin,

All of this is really cool. I was wondering if you had a list of bugs
that you've faced or reported yourself regarding performance
issues in TripleO.
As you might have seen in a separate thread on openstack-dev, we're
planning a sprint on June 21-22 to improve performance in TripleO.
We would love your participation, or someone from your team's, and if you
have time before then, please add the deployment-time tag to the Launchpad
bugs that you know are related to performance.

Thanks a lot,

> - Justin
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][scheduler][placement] Allocating Complex Resources

2017-06-08 Thread Edward Leafe
Sorry for the top-post, but it seems that nobody has responded to this, and
there are a lot of important questions that need answers. So I’m simply
re-posting this so that we don’t get ahead of ourselves by planning
implementations before we fully understand the problem and the implications of
any proposed solution.


-- Ed Leafe


> On Jun 6, 2017, at 9:56 AM, Chris Dent  wrote:
> 
> On Mon, 5 Jun 2017, Ed Leafe wrote:
> 
>> One proposal is to essentially use the same logic in placement
>> that was used to include that host in those matching the
>> requirements. In other words, when it tries to allocate the amount
>> of disk, it would determine that that host is in a shared storage
>> aggregate, and be smart enough to allocate against that provider.
>> This was referred to in our discussion as "Plan A".
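(For concreteness, a hedged sketch of what such a "Plan A" allocation might
end up recording; the UUIDs are placeholders and no particular placement
microversion or request format is being asserted.)

    # Hedged sketch: one allocation spanning the compute node provider and the
    # shared-storage provider placement picked from the host's aggregate.
    COMPUTE_RP = '11111111-1111-1111-1111-111111111111'
    SHARED_DISK_RP = '22222222-2222-2222-2222-222222222222'

    allocation_request = {
        'allocations': [
            {'resource_provider': {'uuid': COMPUTE_RP},
             'resources': {'VCPU': 2, 'MEMORY_MB': 4096}},
            {'resource_provider': {'uuid': SHARED_DISK_RP},
             'resources': {'DISK_GB': 100}},
        ]
    }
    # Roughly the body of PUT /allocations/{consumer_uuid} once someone (the
    # scheduler, or placement itself under "Plan A") decides the disk comes
    # from the shared pool.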
> 
> What would help for me is greater explanation of if and if so, how and
> why, "Plan A" doesn't work for nested resource providers.
> 
> We can declare that allocating for shared disk is fairly deterministic
> if we assume that any given compute node is only associated with one
> shared disk provider.
> 
> My understanding is this determinism is not the case with nested
> resource providers because there's some fairly late in the game
> choosing of which pci device or which numa cell is getting used.
> The existing resource tracking doesn't have this problem because the
> claim of those resources is made very late in the game. <- Is this
> correct?
> 
> The problem comes into play when we want to claim from the scheduler
> (or conductor). Additional information is required to choose which
> child providers to use. <- Is this correct?
> 
> Plan B overcomes the information deficit by including more
> information in the response from placement (as straw-manned in the
> etherpad [1]) allowing code in the filter scheduler to make accurate
> claims. <- Is this correct?
> 
> For clarity and completeness in the discussion some questions for
> which we have explicit answers would be useful. Some of these may
> appear ignorant or obtuse and are mostly things we've been over
> before. The goal is to draw out some clear statements in the present
> day to be sure we are all talking about the same thing (or get us
> there if not) modified for what we know now, compared to what we
> knew a week or month ago.
> 
> * We already have the information the filter scheduler needs now by
>  some other means, right?  What are the reasons we don't want to
>  use that anymore?
> 
> * Part of the reason for having nested resource providers is because
>  it can allow affinity/anti-affinity below the compute node (e.g.,
>  workloads on the same host but different numa cells). If I
>  remember correctly, the modelling and tracking of this kind of
>  information in this way comes out of the time when we imagined the
>  placement service would be doing considerably more filtering than
>  is planned now. Plan B appears to be an acknowledgement of "on
>  some of this stuff, we can't actually do anything but provide you
>  some info, you need to decide". If that's the case, is the
>  topological modelling on the placement DB side of things solely a
>  convenient place to store information? If there were some other
>  way to model that topology could things currently being considered
>  for modelling as nested providers be instead simply modelled as
>  inventories of a particular class of resource?
>  (I'm not suggesting we do this, rather that the answer that says
>  why we don't want to do this is useful for understanding the
>  picture.)
> 
> * Does a claim made in the scheduler need to be complete? Is there
>  value in making a partial claim from the scheduler that consumes a
>  vcpu and some ram, and then in the resource tracker is corrected
>  to consume a specific pci device, numa cell, gpu and/or fpga?
>  Would this be better or worse than what we have now? Why?
> 
> * What is lacking in placement's representation of resource providers
>  that makes it difficult or impossible for an allocation against a
>  parent provider to be able to determine the correct child
>  providers to which to cascade some of the allocation? (And by
>  extension make the earlier scheduling decision.)
> 
> That's a start. With answers to at last some of these questions I
> think the straw man in the etherpad can be more effectively
> evaluated. As things stand right now it is a proposed solution
> without a clear problem statement. I feel like we could do with a
> more clear problem statement.
> 
> Thanks.
> 
> [1] https://etherpad.openstack.org/p/placement-allocations-straw-man
> 
> -- 
> Chris Dent  ┬──┬◡ノ(° -°ノ)   https://anticdent.org/
> freenode: cdent tw: 
> @anticdent__
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 

Re: [openstack-dev] [Nova][Scheduler]

2017-06-08 Thread Matt Riedemann

On 6/8/2017 3:36 AM, Narendra Pal Singh wrote:
Do the Ocata bits support adding a custom resource monitor, say network 
bandwidth?


I don't believe so in the upstream code. There is only a CPU bandwidth 
monitor in-tree today, but only supported by the libvirt driver and 
untested anywhere in our integration testing.


The Nova scheduler should consider the new metric data for cost calculation of 
each filtered host.


There was an attempt in Liberty, Mitaka and Newton to add a new memory 
bandwidth monitor:


https://specs.openstack.org/openstack/nova-specs/specs/newton/approved/memory-bw.html

But we eventually said no to that, and stated why here:

https://docs.openstack.org/developer/nova/policies.html#metrics-gathering

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove] Trove reboot meeting

2017-06-08 Thread MCCASLAND, TREVOR
Ok everyone, here is the link to join. Please only join if you plan to 
participate, as there is a limit on these calls.


https://hangouts.google.com/call/ncgjbzal5p7ttelna3ovzqu 
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] A proposal for hackathon to reduce deploy time of TripleO

2017-06-08 Thread Emilien Macchi
On Thu, Jun 8, 2017 at 4:19 PM, Sagi Shnaidman  wrote:
> Hi, all
>
> Thanks for your attention and proposals for this hackathon. With the full
> understanding that optimizing deployment is an on-going effort and should
> not be started and finished in these 2 days only, we still want to focus
> on these issues in the sprint. Even if we don't immediately solve all the
> problems, more people will be exposed to this field, additional tasks/bugs
> could be opened and scheduled, and maybe additional tests, process
> improvements and other insights will be introduced.
> If we don't reduce CI job time to 1 hour by Thursday it doesn't mean we
> failed the mission, please remember.
> The main goal of this sprint is to find problems and their work scope, and
> to find as many solutions for them as possible, using inter-team and team
> member collaboration and knowledge sharing. Ideally this collaboration and
> on-going effort will carry forward with such momentum. :)
>
> I suggest to do it in 21 - 22 Jun 2017 (Wednesday - Thursday). All other
> details are provided in etherpad:
> https://etherpad.openstack.org/p/tripleo-deploy-time-hack and in wiki as
> well: https://wiki.openstack.org/wiki/VirtualSprints
> We have a "deployment-time" tag for bugs:
> https://bugs.launchpad.net/tripleo/+bugs?field.tag=deployment-time Please
> use it for bugs that affect deployment time or CI job run time. It will be
> easier to handle them in the sprint.
>
> Please provide your comments and suggestions.

Thanks Sagi for bringing this up, this is really awesome.
One thing we could do to make this sprint productive is to report /
triage Launchpad bugs related to $topic so we have a list of things we
can work on during these 2 days.

Maybe we could go through:
https://launchpad.net/tripleo/+milestone/pike-2
https://launchpad.net/tripleo/+milestone/pike-3 and add the
deployment-time tag to all the bugs we think are related to performance.

Once we have the list, we'll work on them by priority and by area of knowledge.

Also, folks like face to face interactions. We'll take care of
preparing an open Bluejeans where folks can easily join and ask
questions. We'll probably be connected all day, so anyone can join
anytime. No schedule constraint here.

Any feedback is welcome,

Thanks!

> Thanks
>
>
>
> On Tue, May 23, 2017 at 1:47 PM, Sagi Shnaidman  wrote:
>>
>> Hi, all
>>
>> I'd like to propose an idea: a one- or two-day hackathon in the TripleO
>> project with the main goal of reducing the deployment time of TripleO.
>>
>> - How could it be arranged?
>>
>> We can arrange a separate IRC channel and Bluejeans video conference
>> session for hackathon in these days to create a "presence" feeling.
>>
>> - How to participate and contribute?
>>
>> We'll have a few responsibility fields like tripleo-quickstart,
>> containers, storage, HA, baremetal, etc - the exact list should be ready
>> before the hackathon so that everybody could assign to one of these "teams".
>> It's good to have somebody in team to be stakeholder and responsible for
>> organization and tasks.
>>
>> - What is the goal?
>>
>> The goal of this hackathon to reduce deployment time of TripleO as much as
>> possible.
>>
>> For example part of CI team takes a task to reduce quickstart tasks time.
>> It includes statistics collection, profiling and detection of places to
>> optimize. After this tasks are created, patches are tested and submitted.
>>
>> The prizes will be presented to teams which saved most of time :)
>>
>> What do you think?
>>
>> Thanks
>> --
>> Best regards
>> Sagi Shnaidman
>
>
>
>
> --
> Best regards
> Sagi Shnaidman



-- 
Emilien Macchi

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] [keystone][nova][cinder][glance][neutron][horizon][policy] defining admin-ness

2017-06-08 Thread Lance Bragstad
Ok - based on the responses in the thread here, I've re-proposed the global
roles specification to keystone's backlog [0]. I'll start working on the
implementation and get something in review as soon as possible. I'll plan
to move the specification from backlog to Queens once development opens.

Thanks for all the feedback and patience.


[0] https://review.openstack.org/#/c/464763/

On Tue, Jun 6, 2017 at 4:39 PM, Marc Heckmann 
wrote:

> On Tue, 2017-06-06 at 17:01 -0400, Erik McCormick wrote:
> > On Tue, Jun 6, 2017 at 4:44 PM, Lance Bragstad 
> > wrote:
> > >
> > >
> > > On Tue, Jun 6, 2017 at 3:06 PM, Marc Heckmann  > > t.com>
> > > wrote:
> > > >
> > > > Hi,
> > > >
> > > > On Tue, 2017-06-06 at 10:09 -0500, Lance Bragstad wrote:
> > > >
> > > > Also, with all the people involved with this thread, I'm curious
> > > > what the
> > > > best way is to get consensus. If I've tallied the responses
> > > > properly, we
> > > > have 5 in favor of option #2 and 1 in favor of option #3. This
> > > > week is spec
> > > > freeze for keystone, so I see a slim chance of this getting
> > > > committed to
> > > > Pike [0]. If we do have spare cycles across the team we could
> > > > start working
> > > > on an early version and get eyes on it. If we straighten out
> > > > everyone
> > > > concerns early we could land option #2 early in Queens.
> > > >
> > > >
> > > > I was the only one in favour of option 3 only because I've spent
> > > > a bunch
> > > > of time playing with option #1 in the past. As I mentioned
> > > > previously in the
> > > > thread, if #2 is more in line with where the project is going,
> > > > then I'm all
> > > > for it. At this point, the admin scope issue has been around long
> > > > enough
> > > > that Queens doesn't seem that far off.
> > >
> > >
> > > From an administrative point-of-view, would you consider option #1
> > > or option
> > > #2 to better long term?
>
> #2
>
> > >
> >
> > Count me as another +1 for option 2. It's the right way to go long
> > term, and we've lived with how it is now long enough that I'm OK
> > waiting a release or even 2 more for it with things as is. I think
> > option 3 would just muddy the waters.
> >
> > -Erik
> >
> > > >
> > > >
> > > > -m
> > > >
> > > >
> > > > I guess it comes down to how fast folks want it.
> > > >
> > > > [0] https://review.openstack.org/#/c/464763/
> > > >
> > > > On Tue, Jun 6, 2017 at 10:01 AM, Lance Bragstad  > > > com>
> > > > wrote:
> > > >
> > > > I replied to John, but directly. I'm sending the responses I sent
> > > > to him
> > > > but with the intended audience on the thread. Sorry for not
> > > > catching that
> > > > earlier.
> > > >
> > > >
> > > > On Fri, May 26, 2017 at 2:44 AM, John Garbutt  > > > om>
> > > > wrote:
> > > >
> > > > +1 on not forcing Operators to transition to something new twice,
> > > > even if
> > > > we did go for option 3.
> > > >
> > > >
> > > > The more I think about this, the more it worries me from a
> > > > developer
> > > > perspective. If we ended up going with option 3, then we'd be
> > > > supporting
> > > > both methods of elevating privileges. That means two paths for
> > > > doing the
> > > > same thing in keystone. It also means oslo.context,
> > > > keystonemiddleware, or
> > > > any other library consuming tokens that needs to understand
> > > > elevated
> > > > privileges needs to understand both approaches.
> > > >
> > > >
> > > >
> > > > Do we have an agreed non-distruptive upgrade path mapped out yet?
> > > > (For any
> > > > of the options) We spoke about fallback rules you pass but with a
> > > > warning to
> > > > give us a smoother transition. I think that's my main objection
> > > > with the
> > > > existing patches, having to tell all admins to get their token
> > > > for a
> > > > different project, and give them roles in that project, all
> > > > before being
> > > > able to upgrade.
> > > >
> > > >
> > > > Thanks for bringing up the upgrade case! You've kinda described
> > > > an upgrade
> > > > for option 1. This is what I was thinking for option 2:
> > > >
> > > > - deployment upgrades to a release that supports global role
> > > > assignments
> > > > - operator creates a set of global roles (i.e. global_admin)
> > > > - operator grants global roles to various people that need it
> > > > (i.e. all
> > > > admins)
> > > > - operator informs admins to create globally scoped tokens
> > > > - operator rolls out necessary policy changes
> > > >
> > > > If I'm thinking about this properly, nothing would change at the
> > > > project-scope level for existing users (who don't need a global
> > > > role
> > > > assignment). I'm hoping someone can help firm ^ that up or
> > > > improve it if
> > > > needed.
> > > >
> > > >
> > > >
> > > > Thanks,
> > > > johnthetubaguy
> > > >
> > > > On Fri, 26 May 2017 at 08:09, Belmiro Moreira
> > > > 

Re: [openstack-dev] [forum] Future of Stackalytics

2017-06-08 Thread Paul Belanger
On Thu, Jun 08, 2017 at 02:11:18PM +, Jeremy Stanley wrote:
> On 2017-06-08 09:22:48 -0400 (-0400), Paul Belanger wrote:
> [...]
> > We also have another issue where we lose access to gerrit and our apache
> > process pins a CPU at 100%; these might also be low-hanging fruit for people
> > wanting to get involved.
> [...]
> 
> Wasn't that fixed with a new lower-bound on paramiko so it now
> closes SSH API sessions correctly?
>
We did submit a patch, but I believe we are still leaking some connections to
gerrit. We likely need to audit the code to ensure we applied the patch to all
connection attempts.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [forum] Future of Stackalytics

2017-06-08 Thread Jeremy Stanley
On 2017-06-07 16:36:45 -0700 (-0700), Ken'ichi Ohmichi wrote:
[...]
> one of the config files is 30K lines due to the amount of user information,
> and that makes maintenance hard now. I am trying to separate the user part
> from the existing file, but I cannot find a way to reach consensus on such
> a change.

There is a foundation member directory API now which provides
affiliation details and history, so if it were my project (it's not
though) I'd switch to querying that and delete all the static
affiliation mapping out of that config instead. Not only would it
significantly reduce the reviewer load for Stackalytics, but it
would also provide a greater incentive for contributors to keep
their affiliation data updated in the foundation member directory.

> In addition, we have two ways for managing bug reports: launchpad and
> storyboard if considering it as infra project.

It's not (at least presently) an Infrastructure team deliverable.
It's only an unofficial project which happens to have granted the
infra-core team approval rights (for reasons I don't recall, if I
ever even knew it was the case before now).

> It would be necessary to transport them from launchpad, I guess.
[...]

If its maintainers want to migrate from LP to SB, we already have an
import script which copies in all the existing bug reports so that's
not really a challenge.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] A proposal for hackathon to reduce deploy time of TripleO

2017-06-08 Thread Sagi Shnaidman
Hi, all

Thanks for your attention and proposals for this hackathon. With full
understanding that optimization of deployment is an on-going effort and should
not be started and finished in these 2 days only, we still want to
focus on these issues in the sprint. Even if we don't immediately solve all
problems, more people will be exposed to this field, additional tasks/bugs
could be opened and scheduled, and maybe additional tests, process
improvements and other insights will be introduced.
If we don't reduce CI job time to 1 hour by Thursday, it doesn't mean we
failed the mission, please remember.
The main goal of this sprint is to find problems and their work scope, and
to find as many solutions for them as possible, using inter-team and
team-member collaboration and knowledge sharing. Ideally this collaboration
and on-going effort will carry forward with such momentum. :)

I suggest we do it on 21 - 22 Jun 2017 (Wednesday - Thursday). All other
details are provided in the etherpad:
https://etherpad.openstack.org/p/tripleo-deploy-time-hack and in the wiki as
well: https://wiki.openstack.org/wiki/VirtualSprints
We have a "deployment-time" tag for bugs:
https://bugs.launchpad.net/tripleo/+bugs?field.tag=deployment-time Please
use it for bugs that affect deployment time or CI job run time. It will be
easier to handle them in the sprint.

Please provide your comments and suggestions.

Thanks



On Tue, May 23, 2017 at 1:47 PM, Sagi Shnaidman  wrote:

> Hi, all
>
> I'd like to propose an idea to make one or two days hackathon in TripleO
> project with main goal - to reduce deployment time of TripleO.
>
> - How could it be arranged?
>
> We can arrange a separate IRC channel and Bluejeans video conference
> session for hackathon in these days to create a "presence" feeling.
>
> - How to participate and contribute?
>
> We'll have a few responsibility fields like tripleo-quickstart,
> containers, storage, HA, baremetal, etc - the exact list should be ready
> before the hackathon so that everybody could assign to one of these
> "teams". It's good to have somebody in team to be stakeholder and
> responsible for organization and tasks.
>
> - What is the goal?
>
> The goal of this hackathon is to reduce deployment time of TripleO as much as
> possible.
>
> For example part of CI team takes a task to reduce quickstart tasks time.
> It includes statistics collection, profiling and detection of places to
> optimize. After this tasks are created, patches are tested and submitted.
>
> The prizes will be presented to teams which saved most of time :)
>
> What do you think?
>
> Thanks
> --
> Best regards
> Sagi Shnaidman
>



-- 
Best regards
Sagi Shnaidman
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [forum] Future of Stackalytics

2017-06-08 Thread Jeremy Stanley
On 2017-06-08 09:22:48 -0400 (-0400), Paul Belanger wrote:
[...]
> We also have another issue where we lose access to gerrit and our apache
> process pins a CPU at 100%; these might also be low-hanging fruit for people
> wanting to get involved.
[...]

Wasn't that fixed with a new lower-bound on paramiko so it now
closes SSH API sessions correctly?
-- 
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova] Goodbye^W See you later

2017-06-08 Thread Sylvain Bauza


Le 08/06/2017 14:45, Jim Rollenhagen a écrit :
> Hey friends,
> 
> I've been mostly missing for the past six weeks while looking for a new
> job, so maybe you've forgotten me already, maybe not. I'm happy to tell
> you I've found one that I think is a great opportunity for me. But, I'm
> sad to tell you that it's totally outside of the OpenStack community.
> 
> The last 3.5 years have been amazing. I'm extremely grateful that I've
> been able to work in this community - I've learned so much and met so
> many awesome people. I'm going to miss the insane(ly awesome) level of
> collaboration, the summits, the PTGs, and even some of the bikeshedding.
> We've built amazing things together, and I'm sure y'all will continue to
> do so without me.
> 
> I'll still be lurking in #openstack-dev and #openstack-ironic for a
> while, if people need me to drop a -2 or dictate old knowledge or
> whatever, feel free to ping me. Or if you just want to chat. :)
> 
> <3 jroll
> 
> P.S. obviously my core permissions should be dropped now :P
> 
> 

I'm both sad and happy for you. Mixed feelings, but I do think you are
definitely a person with very good soft and hard skills.
Best of luck in your next position.

-Sylvain

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova] Goodbye^W See you later

2017-06-08 Thread Edward Leafe
On Jun 8, 2017, at 7:45 AM, Jim Rollenhagen  wrote:

> I've been mostly missing for the past six weeks while looking for a new job, 
> so maybe you've forgotten me already, maybe not. I'm happy to tell you I've 
> found one that I think is a great opportunity for me. But, I'm sad to tell 
> you that it's totally outside of the OpenStack community.

Glad you found something new and interesting. I’m sure you’ll continue to do 
great things there!

-- Ed Leafe





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [nova][cinder] Is there interest in an admin-api to refresh volume connection info?

2017-06-08 Thread Matt Riedemann
Nova stores the output of the Cinder os-initialize_connection info API 
in the Nova block_device_mappings table, and uses that later for making 
volume connections.


This data can get out of whack or need to be refreshed, like if your 
ceph server IP changes, or you need to recycle some secret uuid for your 
ceph cluster.


I think the only ways to do this on the nova side today are via volume 
detach/re-attach, reboot, migrations, etc - all of which, except live 
migration, are disruptive to the running guest.


I've kicked around the idea of adding some sort of admin API interface 
for refreshing the BDM.connection_info on-demand if needed by an 
operator. Does anyone see value in this? Are operators doing stuff like 
this already, but maybe via direct DB updates?


We could have something in the compute API which calls down to the 
compute for an instance and has it refresh the connection_info from 
Cinder and updates the BDM table in the nova DB. It could be an admin 
action API, or part of the os-server-external-events API, like what we 
have for the 'network-changed' event sent from Neutron which nova uses 
to refresh the network info cache.
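
To make that concrete, here is a rough, purely hypothetical sketch of what the
compute-side handler could look like - the method name and plumbing are only
illustrative of the existing attach flow, not a real patch:

    from nova import objects
    from oslo_serialization import jsonutils

    # Hypothetical handler in the compute manager, triggered by an admin
    # action or an external event such as 'volume-connection-changed'.
    def refresh_volume_connection_info(self, context, instance, volume_id):
        bdm = objects.BlockDeviceMapping.get_by_volume_and_instance(
            context, volume_id, instance.uuid)
        # Ask Cinder for fresh connection info using this host's connector,
        # the same way the initial attach does, then persist it in the BDM.
        connector = self.driver.get_volume_connector(instance)
        new_info = self.volume_api.initialize_connection(
            context, volume_id, connector)
        bdm.connection_info = jsonutils.dumps(new_info)
        bdm.save()

Whether we would also need to re-plumb anything on the host, or just update
the DB record, is probably the main open question.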


Other ideas or feedback here?

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [forum] Future of Stackalytics

2017-06-08 Thread Paul Belanger
On Wed, May 17, 2017 at 06:55:57PM +, Jeremy Stanley wrote:
> On 2017-05-17 16:16:30 +0200 (+0200), Thierry Carrez wrote:
> [...]
> > we need help with completing the migration to infra. If interested
> > you can reach out to fungi (Infra team PTL) nor mrmartin (who
> > currently helps with the transition work).
> [...]
> 
> The main blocker for us right now is addressed by an Infra spec
> (Stackalytics is an unofficial project and it's unclear to us where
> design discussions for it happen):
> 
> https://review.openstack.org/434951
> 
> In particular, getting the current Stackalytics developers on-board
> with things like this is where we've been failing to make progress
> mainly (I think) because we don't have a clear venue for discussions
> and they're stretched pretty thin with other work. If we can get
> some additional core reviewers for that project (and maybe even talk
> about turning it into an official team or joining them up as a
> deliverable for an existing team) that might help.
> -- 
> Jeremy Stanley

Agree with Jeremy.

We have been running a shadow instance of stackalytics [1] for 2 years now, so
we could make the flip to our community infrastructure today if we wanted to.
The persistent cache would be helpful to avoid potential re-imports of
all the data.

We also have another issue where we lose access to gerrit and our apache
process pins a CPU at 100%; these might also be low-hanging fruit for people
wanting to get involved.

[1] http://stackalytics.openstack.org

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova] Goodbye^W See you later

2017-06-08 Thread Pavlo Shchelokovskyy
Jim,

very sad to see you go :( Undeniably you played a major part in ironic and
its community becoming what they are today - an awesome project made by
awesome people.

Hopefully we'll have a chance to work together again someday :)

All the best in your new endeavors,
Pavlo.

On Thu, Jun 8, 2017 at 4:00 PM, milanisko k  wrote:

> Best of luck, jroll!
>
> <3
> milan
>
> čt 8. 6. 2017 v 14:46 odesílatel Jim Rollenhagen 
> napsal:
>
>> Hey friends,
>>
>> I've been mostly missing for the past six weeks while looking for a new
>> job, so maybe you've forgotten me already, maybe not. I'm happy to tell you
>> I've found one that I think is a great opportunity for me. But, I'm sad to
>> tell you that it's totally outside of the OpenStack community.
>>
>> The last 3.5 years have been amazing. I'm extremely grateful that I've
>> been able to work in this community - I've learned so much and met so many
>> awesome people. I'm going to miss the insane(ly awesome) level of
>> collaboration, the summits, the PTGs, and even some of the bikeshedding.
>> We've built amazing things together, and I'm sure y'all will continue to do
>> so without me.
>>
>> I'll still be lurking in #openstack-dev and #openstack-ironic for a
>> while, if people need me to drop a -2 or dictate old knowledge or whatever,
>> feel free to ping me. Or if you just want to chat. :)
>>
>> <3 jroll
>>
>> P.S. obviously my core permissions should be dropped now :P
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
>> unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova] Goodbye^W See you later

2017-06-08 Thread milanisko k
Best of luck, jroll!

<3
milan

čt 8. 6. 2017 v 14:46 odesílatel Jim Rollenhagen 
napsal:

> Hey friends,
>
> I've been mostly missing for the past six weeks while looking for a new
> job, so maybe you've forgotten me already, maybe not. I'm happy to tell you
> I've found one that I think is a great opportunity for me. But, I'm sad to
> tell you that it's totally outside of the OpenStack community.
>
> The last 3.5 years have been amazing. I'm extremely grateful that I've
> been able to work in this community - I've learned so much and met so many
> awesome people. I'm going to miss the insane(ly awesome) level of
> collaboration, the summits, the PTGs, and even some of the bikeshedding.
> We've built amazing things together, and I'm sure y'all will continue to do
> so without me.
>
> I'll still be lurking in #openstack-dev and #openstack-ironic for a while,
> if people need me to drop a -2 or dictate old knowledge or whatever, feel
> free to ping me. Or if you just want to chat. :)
>
> <3 jroll
>
> P.S. obviously my core permissions should be dropped now :P
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [ironic][nova] Goodbye^W See you later

2017-06-08 Thread Jim Rollenhagen
Hey friends,

I've been mostly missing for the past six weeks while looking for a new
job, so maybe you've forgotten me already, maybe not. I'm happy to tell you
I've found one that I think is a great opportunity for me. But, I'm sad to
tell you that it's totally outside of the OpenStack community.

The last 3.5 years have been amazing. I'm extremely grateful that I've been
able to work in this community - I've learned so much and met so many
awesome people. I'm going to miss the insane(ly awesome) level of
collaboration, the summits, the PTGs, and even some of the bikeshedding.
We've built amazing things together, and I'm sure y'all will continue to do
so without me.

I'll still be lurking in #openstack-dev and #openstack-ironic for a while,
if people need me to drop a -2 or dictate old knowledge or whatever, feel
free to ping me. Or if you just want to chat. :)

<3 jroll

P.S. obviously my core permissions should be dropped now :P
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [EXTERNAL] Re: [Tripleo] deploy software on Openstack controller on the Overcloud

2017-06-08 Thread Abhishek Kane
Updated https://github.com/abhishek-kane/puppet-veritas-hyperscale with attempts
to reuse the puppet modules for keystone, rabbitmq and nova_flavor. Some
operations are still done via scripts.

Regards.

On 6/7/17, 8:08 PM, "Abhishek Kane"  wrote:

Hi,

On the cinder node we need to modify cinder.conf; we don’t change any other
config. We want to keep the config changes in the heat templates, package
installation in puppet, and trigger the rest of the operations via Horizon
(as it’s done today). We are also trying to get rid of the nova.conf
changes. Once the approach for cinder is sorted, I will get back on this.

If this is correct approach for cinder, I will raise review requests for 
the following projects:
puppet-tripleo: http://paste.openstack.org/show/611697/
puppet-cinder: http://paste.openstack.org/show/611698/
tripleo-heat-templates: http://paste.openstack.org/show/611700/

Also, I am not sure which TripleO repos need to be patched for the 
controller components.

We have decomposed the controller bin installer into idempotent 
modules/scripts. Now, the installer is not a black box operation:
https://github.com/abhishek-kane/puppet-veritas-hyperscale
The inline replies below are w.r.t. this project. The product installer bin
currently works in an atomic fashion. One issue we see with puppet is
error handling and rollback operations.

Thanks,
Abhishek

On 6/1/17, 8:41 PM, "Emilien Macchi"  wrote:

On Thu, Jun 1, 2017 at 3:47 PM, Abhishek Kane 
 wrote:
> Hi Emilien,
>
> The bin does following things on controller:
> 1. Install core HyperScale packages.

Should be done by Puppet, with Package resource.
Ak> It’s done.

> 2. Start HyperScale API server

Should be done by Puppet, with Service resource.
AK> It’s done.

> 3. Install UI packages. This will add new files to and modify some 
existing files of Horison.

Should be done by Puppet, with Package resource and also some changes
in puppet-horizon maybe if you need to change Horizon config.
Ak> We have got rid of the horizon dependency right now. Our GUI components 
get installed via separate package.

> 4. Create HyperScale user in mysql db. Create database and dump 
config. Add permissions of nova and cinder DB to HyperScale user.

We have puppet-openstacklib which already manages DBs, you could
easily re-use it. Please look at puppet-nova for example to see how
things works in nova::db::mysql, etc.
AK> TBD

> 5. Add ceilometer pollsters for additional stats and modify 
ceilometer files.

puppet-ceilometer I guess. What do you mean by "files"? Config files?
Ak> We are trying to get rid of this dependency as well. TBD.

> 6. Change OpenStack configuration:
> a. Create rabbitmq exchanges

puppet-* modules already does it.
AK> It’s done via script. Do we need to patch any module?
 
> b. Create keystone user

puppet-keystone already does it.
AK> It’s done via script. Do we need to patch keystone module?

> c. Define new flavors

puppet-nova can manage flavors.
AK> It’s done via script. Do we need to patch nova module?

> d. Create HyperScale volume type and set default volume type to 
HyperScale in cinder.conf.

we already support multi backends in tripleo, HyperScale would just be
a new addition. Re-use the bits please: puppet-cinder and
puppet-tripleo will need to be patched.
AK> It’s done via script. Do we need to patch cinder module?
  
> e. Restart openstack’s services

Already done by openstack/puppet-* modules.
AK> We are trying to get rid of all OpenStack config file changes that we 
used to do. TBD.

> 7. Configure HyperScale services

Should be done by your module, (you can either write a _config
provider if it's ini standard otherwise just do a template that you
ship in the module, like puppet-horizon).
AK> It’s done.

> Once the controller is configured, we use HyperScale’s CLI to 
configure data and compute nodes-
>
> On data node (cinder):
> 1. Install HyperScale data node packages.

Should be done by Puppet, with Package resource.
 
> 2. Change cinder.conf to add backend and change rpc_backend.

puppet-cinder

> 3. Give the raw data disks and meta disks to HyperScale storage layer 
for usage.

what does it means? Do you run a CLI for that?

> 4. Configure HyperScale services.

  

Re: [openstack-dev] [vitrage-dashboard] Alarm Header Blueprint

2017-06-08 Thread Afek, Ifat (Nokia - IL/Kfar Sava)
Looks great ☺
Since you wrote a long and impressive description, it might be worthwhile to 
push it to gerrit after all. As I said, we usually don’t push vitrage-dashboard 
specs to gerrit, so it’s really up to you.

Best Regards,
Ifat.


From: "Waines, Greg" 
Date: Thursday, 8 June 2017 at 14:58
To: "OpenStack Development Mailing List (not for usage questions)" 

Cc: "Heller, Alon (Nokia - IL/Kfar Sava)" , "Afek, Ifat 
(Nokia - IL/Kfar Sava)" 
Subject: [openstack-dev] [vitrage-dashboard] Alarm Header Blueprint

I have registered a new blueprint in Vitrage-dashboard which leverages the 
proposed extensible headers of Horizon.

https://blueprints.launchpad.net/vitrage-dashboard/+spec/alarm-header

let me know your thoughts,
Greg


p.s proposed extensible header blueprint in horizon is here:
   https://blueprints.launchpad.net/horizon/+spec/extensible-header



From: "Afek, Ifat (Nokia - IL/Kfar Sava)" 
Date: Wednesday, June 7, 2017 at 4:52 AM
To: Greg Waines 
Cc: "Heller, Alon (Nokia - IL/Kfar Sava)" 
Subject: Re: vitrage-dashboard blueprints / specs

Hi Greg,

Adding Alon, a vitrage-dashboard core contributor.

In general, your plan seems great ☺ Indeed, we don’t have many blueprints in
vitrage-dashboard launchpad… the vitrage-dashboard team usually just implements
the features and does code reviews without too many specs. You are welcome to
write a blueprint-only description, and maybe send us (or add to the
blueprint) a UI mock.

Let us know if you need any help with that.

Best Regards,
Ifat.


From: "Waines, Greg" 
Date: Wednesday, 7 June 2017 at 1:33
To: "Afek, Ifat (Nokia - IL/Kfar Sava)" 
Subject: vitrage-dashboard blueprints / specs

Ifat,
Vitrage-dashboard seems to have very short 1-2 sentence blueprints and no spec 
files.

The Vitrage Alarm Count in Horizon Header work has turned into 3x blueprints now
- alarm-counts-api  in  Vitrage
- extensible-headers  in  Horizon <-- I am in process of 
submitting this to Horizon
- alarm-header  in  Vitrage-Dashboard

What do you suggest for the blueprint / spec for Vitrage-Dashboard ?
I could submit a blueprint using the blueprint-only template used by Horizon.

let me know what you think,
Greg.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [tripleo][ironic] Hardware provisioning testing for Ocata

2017-06-08 Thread Justin Kilpatrick
Morning everyone,

I've been working on a performance testing tool for TripleO hardware
provisioning operations off and on for about a year now and I've been
using it to try and collect more detailed data about how TripleO
performs in scale and production use cases. Perhaps more importantly,
YODA (Yet Openstack Deployment Tool, Another) automates the task
enough that days of deployment testing become a set-it-and-forget-it
operation.

You can find my testing tool here [0] and the test report [1] has
links to raw data and visualization. Just scroll down, click the
captcha and click "go to kibana". I still need to port that machine
from my own solution over to search guard.

If you have too much email to consider clicking links I'll copy the
results summary here.

TripleO inspection workflows have seen massive improvements from
Newton with a failure rate for 50 nodes with the default workflow
falling from 100% to <15%. Using patches slated for Pike that spurious
failure rate reaches zero.

Overcloud deployments show a significant improvement of deployment
speed in HA and stack update tests.

Ironic deployments in the overcloud allow the use of Ironic for bare
metal scale-out alongside more traditional VM compute. Considering a
single conductor starts to struggle around 300 nodes, it will be
difficult to push a multi-conductor setup to its limits.

Finally, Ironic node cleaning shows a similar failure rate to
inspection and will require similar attention in TripleO workflows to
become painless.

[0] https://review.openstack.org/#/c/384530/
[1] 
https://docs.google.com/document/d/194ww0Pi2J-dRG3-X75mphzwUZVPC2S1Gsy1V0K0PqBo/

Thanks for your time!

- Justin

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][ceilometer][gnocchi] Query on adding new meters to Gnocchi

2017-06-08 Thread Mehdi Abaakouk

On Thu, Jun 08, 2017 at 08:30:32AM +, Deepthi V V wrote:

Thanks Mehdi for the information. I will soon upload a spec for adding the 
meters.


We don't use spec, just open a bug, or directly send patches.


Thanks,
Deepthi

-Original Message-
From: Mehdi Abaakouk [mailto:sil...@sileht.net]
Sent: Thursday, June 08, 2017 1:44 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [telemetry][ceilometer][gnocchi] Query on adding 
new meters to Gnocchi

Hi,

On Thu, Jun 08, 2017 at 05:35:43AM +, Deepthi V V wrote:

Hi,

I am trying to add new meters/resource types in gnocchi. I came across 2 files:
Gnocchi_resources.yaml and ceilometer_update script which will make Gnocchi api 
calls for resource_type addition.
I have a few queries. Could you please clarify them.


 1.  Is it sufficient to add the resource types only in gnocchi_resources.yaml 
file.


No, you also need to create the resource type with the Gnocchi API.
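
For example, something along these lines against the Gnocchi REST API - the
endpoint, token and attribute below are only placeholders for your own
environment:

    import requests

    # Create a custom resource type through the Gnocchi v1 API.
    resp = requests.post(
        'http://gnocchi.example.com:8041/v1/resource_type',
        headers={'X-Auth-Token': '<keystone token>',
                 'Content-Type': 'application/json'},
        json={'name': 'my_switch',
              'attributes': {'port_count': {'type': 'number',
                                            'required': False}}})
    resp.raise_for_status()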


 2.  OR is the ceilometer_update script also required to be modified. Is this 
script responsible for defining attributes in metadata.


This script is only for Ceilometer supported resource types. We do not support 
upgrade if this script is changed or if you add/remove attributes to Ceilometer 
resource types.


 3.  If I have to perform function done by step 2, as an alternative to 
updating the script, is it correct to bring up the system in following order
*   Change gnocchi_resources.yaml for new resource types.
*   Start ceilometer and gnocchi processes.
*   Execute Gnocchi REST apis to create new resource types.


I see two solutions depending on your use case:

* if your new resource types aim to support a not-yet-handled OpenStack
 resource, you should consider contributing upstream to update
 ceilometer-upgrade and gnocchi_resources.yaml

* if not, then option 3 is the way to go.

Regards,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [neutron] plumbing in request-id passing back to nova

2017-06-08 Thread Sean Dague
We've now got Nova -> Glance, Neutron, Cinder. Cinder -> Nova, Glance is
up for review. I started looking at the Neutron code for this, and it's
wildly different, so I need some help on what the right way forward is.

It appears that the novaclient construction happens only once per run
inside this Notifier construction -
https://github.com/openstack/neutron/blob/8d9fcb2d3037004cd1ad5136c449d80cdc5a5865/neutron/notifiers/nova.py#L47-L77
(where there is no context available).

Also, it appears that the way these API calls are emitted is through
implicit binds in the DBPlugin
(https://github.com/openstack/neutron/blob/8d9fcb2d3037004cd1ad5136c449d80cdc5a5865/neutron/db/db_base_plugin_v2.py#L152-L166)
so they are happening potentially well outside of any active context.

So... the questions, in order:

1) is the construction path so disconnected from request processing that we've
got to give up on that pattern? (my guess is yes)

2) when these events are emitted, is there some way to have access to
the context explicitly, or do we have to do the magic reach-back into the
tls for it?

3) is there any chance in the case of Nova -> Neutron -> Nova that we're
going to be able to keep track of the global_request_id coming from Nova
originally, so that the admin events coming back to Nova are tagged with
the original Nova initiating request?
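
(For clarity on what I mean by the tls reach-back in (2), it would be roughly
the sketch below using oslo.context; whether global_request_id is present
depends on the oslo.context version, hence the getattr.)

    from oslo_context import context

    def _current_request_ids():
        # Reach back into the thread-local store for the active context;
        # outside of request processing this simply returns None.
        ctxt = context.get_current()
        if ctxt is None:
            return None, None
        return ctxt.request_id, getattr(ctxt, 'global_request_id', None)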

Any advice here, or help in understanding, is very welcome. Thanks in
advance.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [vitrage-dashboard] Alarm Header Blueprint

2017-06-08 Thread Waines, Greg
I have registered a new blueprint in Vitrage-dashboard which leverages the 
proposed extensible headers of Horizon.

https://blueprints.launchpad.net/vitrage-dashboard/+spec/alarm-header

let me know your thoughts,
Greg


p.s proposed extensible header blueprint in horizon is here:
   https://blueprints.launchpad.net/horizon/+spec/extensible-header



From: "Afek, Ifat (Nokia - IL/Kfar Sava)" 
Date: Wednesday, June 7, 2017 at 4:52 AM
To: Greg Waines 
Cc: "Heller, Alon (Nokia - IL/Kfar Sava)" 
Subject: Re: vitrage-dashboard blueprints / specs

Hi Greg,

Adding Alon, a vitrage-dashboard core contributor.

In general, your plan seems great ☺ Indeed, we don’t have many blueprints in
vitrage-dashboard launchpad… the vitrage-dashboard team usually just implements
the features and does code reviews without too many specs. You are welcome to
write a blueprint-only description, and maybe send us (or add to the
blueprint) a UI mock.

Let us know if you need any help with that.

Best Regards,
Ifat.


From: "Waines, Greg" 
Date: Wednesday, 7 June 2017 at 1:33
To: "Afek, Ifat (Nokia - IL/Kfar Sava)" 
Subject: vitrage-dashboard blueprints / specs

Ifat,
Vitrage-dashboard seems to have very short 1-2 sentence blueprints and no spec 
files.

The Vitrage Alarm Count in Horizon Header work has turned into 3x blueprints now
- alarm-counts-api  in  Vitrage
- extensible-headers  in  Horizon <-- I am in process of 
submitting this to Horizon
- alarm-header  in  Vitrage-Dashboard

What do you suggest for the blueprint / spec for Vitrage-Dashboard ?
I could submit a blueprint using the blueprint-only template used by Horizon.

let me know what you think,
Greg.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [horizon] Extensible Header Blueprint

2017-06-08 Thread Waines, Greg
I have registered a new blueprint to address the extensible header functionality
discussed below.

https://blueprints.launchpad.net/horizon/+spec/extensible-header

interested in your feedback,
Greg.


From: David Lyle 
Reply-To: "openstack-dev@lists.openstack.org" 

Date: Thursday, June 1, 2017 at 12:50 PM
To: "openstack-dev@lists.openstack.org" 
Subject: Re: [openstack-dev] [horizon] Blueprint process question

There have been a couple of projects that would like some space in the
page header. I think the work in Horizon is to provide an extensible
space in the page header for plugins to post content. The UI plugin
for Vitrage, in this case, would then be responsible for populating
that content if desired. This specific blueprint should really be
targeted at the Vitrage UI plugin and a separate blueprint should be
added to Horizon to create the extension point in the page header.

David

On Wed, May 31, 2017 at 11:06 AM, Waines, Greg wrote:
Hey Rob,



Just thought I’d check in on whether Horizon team has had a chance to review
the following blueprint:

https://blueprints.launchpad.net/horizon/+spec/vitrage-alarm-counts-in-topnavbar



The blueprint in Vitrage which the above Horizon blueprint depends on has
been approved by Vitrage team.

i.e.   https://blueprints.launchpad.net/vitrage/+spec/alarm-counts-api



let me know if you’d like to setup a meeting to discuss,

Greg.



From: Rob Cresswell
Reply-To: "openstack-dev@lists.openstack.org"
Date: Thursday, May 18, 2017 at 11:40 AM
To: "openstack-dev@lists.openstack.org"
Subject: Re: [openstack-dev] [horizon] Blueprint process question



There isn't a specific time for blueprint review at the moment. It's usually
whenever I get time, or someone asks via email or IRC. During the weekly
meetings we always have time for open discussion of bugs/blueprints/patches
etc.



Rob



On 18 May 2017 at 16:31, Waines, Greg wrote:

A blueprint question for horizon team.



I registered a new blueprint the other day.

https://blueprints.launchpad.net/horizon/+spec/vitrage-alarm-counts-in-topnavbar



Do I need to do anything else to get this reviewed?  I don’t think so, but
wanted to double check.

How frequently do horizon blueprints get reviewed?  once a week?



Greg.





p.s. ... the above blueprint does depend on a Vitrage blueprint which I do
have in review.




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [ci] Adding idempotency job on overcloud deployment.

2017-06-08 Thread Sofer Athlan-Guyot
Hi,

Alex Schultz  writes:

> On Wed, Jun 7, 2017 at 5:20 AM, Sofer Athlan-Guyot  
> wrote:
>> Hi,
>>
>> Emilien Macchi  writes:
>>
>>> On Wed, Jun 7, 2017 at 12:45 PM, Sofer Athlan-Guyot  
>>> wrote:
 Hi,

 I don't think we have such a job in place.  Basically that would check
 that re-running the "openstack deploy ..." command won't do anything.

I've had a look at openstack-infra/tripleo-ci. Should I test it with
ovb/quickstart or tripleo.sh? Both ways are fine by me, but I may be
lacking context about which one is more relevant.

 We had such an error by the past[1], but I'm not sure this has been
 captured by an associated job.

 WDYT ?
>>>
>>> It would be interesting to measure how much time does it take to run
>>> it again.
>>
>> Could you point out how such an experiment could be done ?
>>
>>> If it's short, we could add it to all our scenarios + ovb
>>> jobs.  If it's long, maybe we need an additional job, but it would
>>> take more resources, so maybe we could run it in periodic pipeline
>>> (note that periodic jobs are not optimal since we could break
>>> something quite easily).
>>
>> Just adding as context that the issue was already raised[1].  Beside
>> time constraint, it was pointed out that we would also need to parse the
>> log to find out if anything was restarted.  But it could be a second
>> step.  For parsing, this code was pointed out[2].
>>
>
> There's a few things that would need to be enabled in order to reuse
> some of this work.  We'll need to add the ability to generate a report
> on the puppet run[0]. And then we'll need to be able to capture it[1]
> somewhere that we could then use that parsing code on.  From there,
> just rerunning the installation would be a simple start to the
> idempotency check.  In fuel, we had hacked in a special flag[2] that
> we used in testing to actually rerun the task immediately to find when
> a specific task was not idempotent in addition to also rerunning the
> entire deployment. For tripleo a similar concept would be to rerun the
> steps twice but that's usually not where the issues crop us for us. So
> rerunning the entire installation deployment would be better as we
> tend to have issues with configuration items between steps
> conflicting.

Maybe we could go with something equivalent to:

  ts="$(date '+%F %T')"
  ... re-run deploy command ...
  
  sudo journalctl --since="${ts}" | egrep 'Stopping|Starting' | grep -v 'user.*slice' > restarted.log
  wc -l restarted.log

This should be 0 on every overcloud node.

This is simpler to implement and should catch any unwanted service
restart.
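
If we also want to catch resources that puppet changed without restarting any
service, a rough complement could be to look at puppet's own run summary on
each node. A sketch only - the summary file path differs between puppet 3 and 4:

    import sys
    import yaml

    # On an idempotent re-run, puppet should report zero changed resources.
    SUMMARY = '/var/lib/puppet/state/last_run_summary.yaml'  # puppet 3 layout

    with open(SUMMARY) as f:
        summary = yaml.safe_load(f)

    changed = summary.get('resources', {}).get('changed', 0)
    print('changed resources: %d' % changed)
    sys.exit(0 if changed == 0 else 1)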

WDYT ?

>
> Thanks,
> -Alex
>
> [0] https://review.openstack.org/#/c/273740/4/mcagents/puppetd.rb@204
> [1] https://review.openstack.org/#/c/273740/4/mcagents/puppetd.rb@102
> [2] https://review.openstack.org/#/c/273737/
>
>> [1] http://lists.openstack.org/pipermail/openstack-dev/2017-March/114836.html
>> [2] 
>> https://review.openstack.org/#/c/279271/9/fuelweb_test/helpers/astute_log_parser.py@212
>>
>>>
 [1] https://bugs.launchpad.net/tripleo/+bug/1664650
 --
 Sofer Athlan-Guyot

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>>
>>>
>>> --
>>> Emilien Macchi
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> --
>> Sofer Athlan-Guyot
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Thanks,
-- 
Sofer Athlan-Guyot

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Swift3 Plugin Development

2017-06-08 Thread Venkata R Edara

Hello,

We have a storage product called Gluster, which is a file storage system, and
we are looking to support S3 APIs for it.


The openstack-swift project has the swift3 plugin to support S3 APIs, but
according to https://github.com/openstack/swift3/blob/master/README.md


we see that S3 ACLs are still under development and not production-ready
so far.


we are looking for S3 plugin with ACLS so that we can integrate gluster 
with that.


Are there any plans from the dev community to develop full-featured S3 ACLs
in the swift3 plugin? If yes, please let us know the release version and
time-frame.


Thanks in Advance

-Venkata R Edara





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-08 Thread Davanum Srinivas
Josh,

Of the initial targets, we have covered the use of tooz and oslo.cache
with etcd already. One thing that remains is Jay's os-lively PoC [1].

So +1 to more knowledgeable folks chiming in with stuff that can be
done and other scenarios. I'll let nature take its course here :)

Thanks,
Dims

[1] https://github.com/jaypipes/os-lively/

On Thu, Jun 8, 2017 at 12:47 AM, Joshua Harlow  wrote:
> So just out of curiosity, but do people really even know what etcd is good
> for? I am thinking that there should be some guidance from folks in the
> community as to where etcd should be used and where it shouldn't (otherwise
> we just all end up in a mess).
>
> Perhaps a good idea to actually give examples of how it should be used, how
> it shouldn't be used, what it offers, what it doesn't... Or at least provide
> links for people to read up on this.
>
> Thoughts?
>
>
> Davanum Srinivas wrote:
>>
>> One clarification: Since https://pypi.python.org/pypi/etcd3gw just
>> uses the HTTP API (/v3alpha) it will work under both eventlet and
>> non-eventlet environments.
>>
>> Thanks,
>> Dims
>>
>>
>> On Wed, Jun 7, 2017 at 6:47 AM, Davanum Srinivas
>> wrote:
>>>
>>> Team,
>>>
>>> Here's the update to the base services resolution from the TC:
>>> https://governance.openstack.org/tc/reference/base-services.html
>>>
>>> First request is to Distros, Packagers, Deployers, anyone who
>>> installs/configures OpenStack:
>>> Please make sure you have latest etcd 3.x available in your
>>> environment for Services to use, Fedora already does, we need help in
>>> making sure all distros and architectures are covered.
>>>
>>> Any project who want to use etcd v3 API via grpc, please use:
>>> https://pypi.python.org/pypi/etcd3 (works only for non-eventlet services)
>>>
>>> Those that depend on eventlet, please use the etcd3 v3alpha HTTP API
>>> using:
>>> https://pypi.python.org/pypi/etcd3gw
>>>
>>> If you use tooz, there are 2 driver choices for you:
>>> https://github.com/openstack/tooz/blob/master/setup.cfg#L29
>>> https://github.com/openstack/tooz/blob/master/setup.cfg#L30
>>>
>>> If you use oslo.cache, there is a driver for you:
>>> https://github.com/openstack/oslo.cache/blob/master/setup.cfg#L33
>>>
>>> Devstack installs etcd3 by default and points cinder to it:
>>> http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/etcd3
>>> http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/cinder#n356
>>>
>>> Review in progress for keystone to use etcd3 for caching:
>>> https://review.openstack.org/#/c/469621/
>>>
>>> Doug is working on proposal(s) for oslo.config to store some
>>> configuration in etcd3:
>>> https://review.openstack.org/#/c/454897/
>>>
>>> So, feel free to turn on / test with etcd3 and report issues.
>>>
>>> Thanks,
>>> Dims
>>>
>>> --
>>> Davanum Srinivas :: https://twitter.com/dims
>>
>>
>>
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Proposed changes to the team meeting time

2017-06-08 Thread Chandan kumar
On Thu, Jun 8, 2017 at 4:04 PM, Andrea Frittoli
 wrote:
> I proposed a change to irc-meetings [1] to move the meeting to 8:00 UTC.
> The first 8:00 UTC meeting will be next week June 15th.
>
> [1] https://review.openstack.org/472194
>

It will be helpful.

Thanks,

Chandan Kumar

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [QA] Proposed changes to the team meeting time

2017-06-08 Thread Andrea Frittoli
I proposed a change to irc-meetings [1] to move the meeting to 8:00 UTC.
The first 8:00 UTC meeting will be next week June 15th.

[1] https://review.openstack.org/472194

On Mon, May 29, 2017 at 6:01 AM Masayuki Igawa  wrote:

> Thanks Andrea,
>
> +1
> But my only one concern is this change is not good for you, right? Of
> course, you don't need to attend both meetings, though :)
>

It should not be a problem for me to join in most cases.

andrea


>
> --
>   Masayuki Igawa
>
>
>
> On Fri, May 26, 2017, at 10:41 AM, zhu.fang...@zte.com.cn wrote:
>
>
> +1, thanks!
>
>
> zhufl
>
>
>
>
> Original Mail
> *Sender: * <andrea.fritt...@gmail.com>;
> *To: * <openstack-dev@lists.openstack.org>;
> *Date: *2017/05/25 21:19
> *Subject: **[openstack-dev] [QA] Proposed changes to the team meeting
> time*
>
>
> Hello team,
>
> our current QA team meeting schedule alternates between 9:00 UTC and 17:00
> UTC.
> The 9:00 meetings is a bit towards the end of the day for out contributors
> in APAC, so I'm proposing to move the meeting to 8:00 UTC.
>
> Please respond with +1 / -1 and/or comments, I will leave the poll open
> for about 10 days to make sure everyone interested gets a chance to comment.
>
> Thank you
>
> andrea
>
>
>
>
> *__*
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [neutron] pep8 breakage

2017-06-08 Thread Gary Kotton
Hi,
That patch will now need to be tweaked so that we can enable the requirements
to be bumped. The gate is back to normal after the requirements change was
reverted.
Gary

On 6/7/17, 9:40 PM, "Ihar Hrachyshka"  wrote:

UPD: in case you wonder if that's fixed, we are waiting for 
https://review.openstack.org/#/c/471763/ to land.

Ihar


On 06/07/2017 05:23 AM, Akihiro Motoki wrote:
> Six 1.10.0 is not the root cause. The root cause is the version bump
> of pylint (and astroid).
> Regarding pylint and astroid, I think the issue will go once
> https://review.openstack.org/#/c/469491/ is merged.
> However, even after the global requirement is merged, neutron pylint will 
fail
> because pylint 1.7.1 has a bit different syntax check compared to pylint 
1.4.3.
>
> I wonder whether the pylint version bump should have been announced, because
> it potentially breaks individual project gates.
>
> Is it better to revert pylint version bump in global-requirements, or
> just to ignore some pylint rules temporarily in neutron?
>
> Akihiro
>
>
> 2017-06-07 20:43 GMT+09:00 Gary Kotton :
>> Hi,
>>
>> Please see bug https://bugs.launchpad.net/neutron/+bug/1696403. Seems 
like
>> six 1.10.0 has broken us.
>>
>> I have posted a patch in the requirements project. Not 100% sure that 
this
>> is the right way to go. At least that will enable us to address this in
>> neutron.
>>
>> Thanks
>>
>> Gary
>>
>>
>> 
__
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] [nova] [kuryr] issue with host to cell mapping on ocata

2017-06-08 Thread Faseela K
Hi,

   I am working on testing the kuryr-kubernetes plugin with OpenDaylight, using
devstack-ocata, with a 3-node setup.
   I am hitting the below error when I execute "nova show ${vm_name} | grep
OS-EXT-STS:vm_state" once the VM is booted.

   Keyword 'Verify VM Is ACTIVE' failed after retrying for 15 seconds. The last 
error was: 'ERROR (ClientException): Unexpected API Error. Please report this 
at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. 
 (HTTP 500) (Request-ID: 
req-cead4e05-4f64-4e9c-8bf8-582de2d9d633) Command Returns 1

  I am seeing host cell mapping related errors in the control node n-cond logs 
[0] :

2017-06-08 08:52:01.970 18963 ERROR nova.conductor.manager 
[req-6df07c12-3f8a-4da6-9ecd-f6b61bbc4af6 admin admin] No host-to-cell mapping 
found for selected host sandbox-57023-15-devstack-ocata-2. Setup is incomplete.

Also on my console, I see the below error while using cell mapping script as 
given in [3]


'nova'), ('version_table', 'migrate_version'), ('required_dbs', '[]')]))]) 
__init__ /usr/lib/python2.7/site-packages/migrate/versioning/repository.py:83

ERROR: could not access cell mapping database - has api db been created?

  I do have placement sections configured in nova.conf [1], and placement-api 
and client plugins enabled as needed in the respective local.confs.
  Is there any configuration that I have missed out?

Thanks,
Faseela

[0] 
https://logs.opendaylight.org/sandbox/jenkins091/coe-csit-1node-openstack-ocata-kubernetes-carbon/15/archives/control_1/

[1] 
https://logs.opendaylight.org/sandbox/jenkins091/coe-csit-1node-openstack-ocata-kubernetes-carbon/15/archives/control_1/nova.conf.gz
  
https://logs.opendaylight.org/sandbox/jenkins091/coe-csit-1node-openstack-ocata-kubernetes-carbon/15/archives/compute_1/nova.conf.gz
  
https://logs.opendaylight.org/sandbox/jenkins091/coe-csit-1node-openstack-ocata-kubernetes-carbon/15/archives/compute_2/nova.conf.gz


[2] 
https://logs.opendaylight.org/sandbox/jenkins091/coe-csit-1node-openstack-ocata-kubernetes-carbon/15/archives/control_1/local.conf.gz
  
https://logs.opendaylight.org/sandbox/jenkins091/coe-csit-1node-openstack-ocata-kubernetes-carbon/15/archives/compute_2/local.conf.gz
 
https://logs.opendaylight.org/sandbox/jenkins091/coe-csit-1node-openstack-ocata-kubernetes-carbon/15/archives/compute_1/local.conf.gz

[3] 
https://ask.openstack.org/en/question/102256/how-to-configure-placement-service-for-compute-node-on-ocata/


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [publiccloud-wg] Summary meeting 7th June

2017-06-08 Thread Tobias Rydberg

Hi folks,

Here comes a summary from yesterday's meeting of the Public Cloud
Working Group.


The missing features document now has status and priority columns.
Definitions for each column will be added during the week as well, but they
are pretty self-explanatory.


We really encourage you all to add new entries to the list, and also to
add your company name to the "Interested Parties" column if a feature is
important to you. This will make prioritization easier and more
accurate.


Our goal here is to keep this document as up to date as possible, so
feel free to add specs, comments or whatever else helps with that.


We will continue for now with these summary emails, and post them to the dev,
operators and user-committee lists after each meeting. Happy to get feedback
on whether or not you find this useful.


Sean Handley was proposed as a third co-chair for the group, and was accepted
with 100% of the votes in favor =). We welcome Sean; it will be a
great asset for the group to have him even more involved!


The rest of the meeting was spent discussing goals for the current
cycle. A cycle, in the case of this working group, means a summit cycle,
since right now not many representatives in the group attend the
PTGs.


Instead of writing all suggested goals here, I encourage you to visit
the etherpad for this
(https://etherpad.openstack.org/p/SYDNEY_GOALS_publiccloud-wg). Feel
free to add new goals or just add a +1 to existing ones. Please add your name
at the bottom as well.


Some good discussions took place in #openstack-publiccloud - please
continue to log in there and start discussions between the meetings as well!


Next meeting: June 21st 1400 UTC #openstack-meeting-3


Best regards,
PublicCloudWG via tobberydberg

--
Tobias Rydberg
tob...@citynetwork.se



smime.p7s
Description: S/MIME Cryptographic Signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-08 Thread Thierry Carrez
Joshua Harlow wrote:
> So just out of curiosity, but do people really even know what etcd is
> good for? I am thinking that there should be some guidance from folks in
> the community as to where etcd should be used and where it shouldn't
> (otherwise we just all end up in a mess).
> 
> Perhaps a good idea to actually give examples of how it should be used,
> how it shouldn't be used, what it offers, what it doesn't... Or at least
> provide links for people to read up on this.
> 
> Thoughts?

I think that's a great idea. More generally, we should document base
services, the benefits of each technology, what data should be in a
database, what data should be in a message queue, and what data should
be in etcd. It feels like the Project Team Guide could be a place where
such information could live...

-- 
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Nova][Scheduler]

2017-06-08 Thread Narendra Pal Singh
Hello,

Do the Ocata bits support adding a custom resource monitor, say for network
bandwidth?
The Nova scheduler should then consider the new metric data in its cost
calculation for each filtered host. Something like the configuration sketched
below is what I have in mind.
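
(A sketch of the existing metrics plumbing in nova.conf, assuming the in-tree
compute monitor framework and the metrics weigher; whether a custom bandwidth
monitor and a metric name such as network.bandwidth can be plugged in the same
way is exactly my question:)

    # nova.conf on each compute node
    [DEFAULT]
    # monitors that publish metric data; a custom monitor would be listed here
    compute_monitors = cpu.virt_driver

    # nova.conf on the scheduler node
    [metrics]
    # weigh filtered hosts by a published metric, e.g. the hypothetical
    # network.bandwidth metric mentioned above
    weight_setting = network.bandwidth=1.0
    required = false
    weight_of_unavailable = -10000.0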

-- 
Regards,
NPS.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [telemetry][ceilometer][gnocchi] Query on adding new meters to Gnocchi

2017-06-08 Thread Deepthi V V
Thanks Mehdi for the information. I will soon upload a spec for adding the 
meters.

Thanks,
Deepthi

-Original Message-
From: Mehdi Abaakouk [mailto:sil...@sileht.net] 
Sent: Thursday, June 08, 2017 1:44 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: Re: [openstack-dev] [telemetry][ceilometer][gnocchi] Query on adding 
new meters to Gnocchi

Hi,

On Thu, Jun 08, 2017 at 05:35:43AM +, Deepthi V V wrote:
>Hi,
>
>I am trying to add new meters/resource types in Gnocchi. I came across two
>files: gnocchi_resources.yaml and the ceilometer_update script, which makes
>Gnocchi API calls to add resource types.
>I have a few queries. Could you please clarify them?
>
>
>  1.  Is it sufficient to add the resource types only in the
> gnocchi_resources.yaml file?

No, you also need to create the resource type with the Gnocchi API.

>  2.  Or is the ceilometer_update script also required to be modified? Is this
> script responsible for defining attributes in metadata?

This script is only for Ceilometer-supported resource types. We do not support
upgrades if this script is changed or if you add/remove attributes on
Ceilometer resource types.

>  3.  If I have to perform the function done by step 2, as an alternative to
> updating the script, is it correct to bring up the system in the following order?
> *   Change gnocchi_resources.yaml for the new resource types.
> *   Start the ceilometer and gnocchi processes.
> *   Execute Gnocchi REST APIs to create the new resource types.

I see two solutions, depending on your use case:

* if your new resource types aim to support an OpenStack resource that is not
  yet handled, you should consider contributing upstream to update
  ceilometer-upgrade and gnocchi_resources.yaml

* if not, then option 3 is the right way to go (roughly as sketched below).
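
For the "create new resource types" step, a minimal sketch of the REST call
(the resource-type name and attributes here are only examples; adjust the
endpoint and authentication to your deployment):

    curl -X POST http://localhost:8041/v1/resource_type \
      -H "Content-Type: application/json" \
      -H "X-Auth-Token: $TOKEN" \
      -d '{"name": "my_resource",
           "attributes": {"switch_id": {"type": "string", "required": true}}}'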

Regards,

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

