Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes

2018-11-30 Thread Fox, Kevin M
Still confused: why
[base] -> [service] -> [+ puppet]
and not
[base] -> [puppet]
and
[base] -> [service]
?
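For illustration, the two layouts as rough Dockerfile sketches (image and
package names are made up):

    # Layout A: puppet stacked on top of each service image
    #   base -> service -> service + puppet
    FROM tripleo-base AS nova-api
    RUN dnf install -y openstack-nova-api

    FROM nova-api AS nova-api-puppet
    RUN dnf install -y puppet puppet-tripleo

    # Layout B: puppet as a sibling image, runtime images stay puppet-free
    #   base -> puppet-config   (one shared config image)
    #   base -> service         (per-service runtime images)
    FROM tripleo-base AS puppet-config
    RUN dnf install -y puppet puppet-tripleo

    FROM tripleo-base AS nova-api-slim
    RUN dnf install -y openstack-nova-api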

Thanks,
Kevin

From: Bogdan Dobrelya [bdobr...@redhat.com]
Sent: Friday, November 30, 2018 5:31 AM
To: Dan Prince; openstack-dev@lists.openstack.org; 
openstack-disc...@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers 
for security and size of images (maintenance) sakes

On 11/30/18 1:52 PM, Dan Prince wrote:
> On Fri, 2018-11-30 at 10:31 +0100, Bogdan Dobrelya wrote:
>> On 11/29/18 6:42 PM, Jiří Stránský wrote:
>>> On 28. 11. 18 18:29, Bogdan Dobrelya wrote:
 On 11/28/18 6:02 PM, Jiří Stránský wrote:
> 
>
>> Reiterating again on previous points:
>>
>> -I'd be fine removing systemd. But let's do it properly and
>> not via 'rpm
>> -ev --nodeps'.
>> -Puppet and Ruby *are* required for configuration. We can
>> certainly put
>> them in a separate container outside of the runtime service
>> containers
>> but doing so would actually cost you much more
>> space/bandwidth for each
>> service container. As both of these have to get downloaded to
>> each node
>> anyway in order to generate config files with our current
>> mechanisms
>> I'm not sure this buys you anything.
>
> +1. I was actually under the impression that we concluded
> yesterday on
> IRC that this is the only thing that makes sense to seriously
> consider.
> But even then it's not a win-win -- we'd gain some security by
> leaner
> production images, but pay for it with space+bandwidth by
> duplicating
> image content (IOW we can help achieve one of the goals we had
> in mind
> by worsening the situation w/r/t the other goal we had in
> mind.)
>
> Personally i'm not sold yet but it's something that i'd
> consider if we
> got measurements of how much more space/bandwidth usage this
> would
> consume, and if we got some further details/examples about how
> serious
> are the security concerns if we leave config mgmt tools in
> runtime
> images.
>
> IIRC the other options (that were brought forward so far) were
> already
> dismissed in yesterday's IRC discussion and on the reviews.
> Bin/lib bind
> mounting being too hacky and fragile, and nsenter not really
> solving the
> problem (because it allows us to switch to having different
> bins/libs
> available, but it does not allow merging the availability of
> bins/libs
> from two containers into a single context).
>
>> We are going in circles here I think
>
> +1. I think too much of the discussion focuses on "why it's bad
> to have
> config tools in runtime images", but IMO we all sorta agree
> that it
> would be better not to have them there, if it came at no cost.
>
> I think to move forward, it would be interesting to know: if we
> do this
> (i'll borrow Dan's drawing):
>
>> base container| --> |service container| --> |service
>> container w/
> Puppet installed|
>
> How much more space and bandwidth would this consume per node
> (e.g.
> separately per controller, per compute). This could help with
> decision
> making.

 As I've already evaluated in the related bug, that is:

 puppet-* modules and manifests ~ 16MB
 puppet with dependencies ~61MB
 dependencies of the seemingly largest dependency, systemd
 ~190MB

 that would be an extra layer size for each of the container
 images to be
 downloaded/fetched into registries.
>>>
>>> Thanks, i tried to do the math of the reduction vs. inflation in
>>> sizes
>>> as follows. I think the crucial point here is the layering. If we
>>> do
>>> this image layering:
>>>
 base| --> |+ service| --> |+ Puppet|
>>>
>>> we'd drop ~267 MB from base image, but we'd be installing that to
>>> the
>>> topmost level, per-component, right?
>>
>> Given we detached systemd from puppet, cronie et al, that would be
>> 267-190 MB (~77 MB), so the math below would look much better
>
> Would it be worth writing a spec that summarizes what action items are
> being taken to optimize our base image with regard to systemd?

Perhaps it would be. But honestly, I see nothing big enough to require a
full-blown spec. It is just changing RPM deps and layers for container
images. I'm tracking the systemd changes here [0],[1],[2], btw (if accepted,
they should be working as of Fedora 28 (or 29), I hope).

[0] https://review.rdoproject.org/r/#/q/topic:base-container-reduction
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1654659
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1654672


>
> It seems like the general consensus is that cleaning up some of the RPM
> dependencies so that we don't install systemd is the biggest win.
>
> What confuses me is why are there still 

Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes

2018-11-29 Thread Fox, Kevin M
Oh, rereading the conversation again, the concern is having shared deps move up
layers? So it's more systemd-related than Ruby?

The conversation about --nodeps makes it sound like it's not actually used, just
an artifact of how the RPMs are built... What about creating a dummy package
that Provides: systemd? That avoids using --nodeps.
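A minimal spec for such a dummy package could look roughly like this (name and
version are arbitrary, and the exact Provides: lines would have to match
whatever the real Requires: on systemd look like, so treat it as a sketch):

    Name:           systemd-stub
    Version:        1
    Release:        1%{?dist}
    Summary:        Empty package satisfying RPM dependencies on systemd in containers
    License:        ASL 2.0
    BuildArch:      noarch
    Provides:       systemd

    %description
    Satisfies dependencies on systemd (e.g. from cronie) inside container
    images without installing the real systemd and without resorting to
    'rpm -ev --nodeps'. Versioned or file-based requires would need extra
    Provides: lines.

    %files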

Thanks,
Kevin

From: Fox, Kevin M [kevin@pnnl.gov]
Sent: Thursday, November 29, 2018 11:20 AM
To: Former OpenStack Development Mailing List, use openstack-discuss now
Subject: Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers 
for security and size of images (maintenance) sakes

If the base layers are shared, you won't pay extra for the separate puppet
container unless you have another container also installing ruby in an upper
layer. With OpenStack, that's unlikely.

The apparent size of a container is not equal to its actual size.

Thanks,
Kevin

From: Jiří Stránský [ji...@redhat.com]
Sent: Thursday, November 29, 2018 9:42 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers 
for security and size of images (maintenance) sakes

On 28. 11. 18 18:29, Bogdan Dobrelya wrote:
> On 11/28/18 6:02 PM, Jiří Stránský wrote:
>> 
>>
>>>
>>> Reiterating again on previous points:
>>>
>>> -I'd be fine removing systemd. But let's do it properly and not via 'rpm
>>> -ev --nodeps'.
>>> -Puppet and Ruby *are* required for configuration. We can certainly put
>>> them in a separate container outside of the runtime service containers
>>> but doing so would actually cost you much more space/bandwidth for each
>>> service container. As both of these have to get downloaded to each node
>>> anyway in order to generate config files with our current mechanisms
>>> I'm not sure this buys you anything.
>>
>> +1. I was actually under the impression that we concluded yesterday on
>> IRC that this is the only thing that makes sense to seriously consider.
>> But even then it's not a win-win -- we'd gain some security by leaner
>> production images, but pay for it with space+bandwidth by duplicating
>> image content (IOW we can help achieve one of the goals we had in mind
>> by worsening the situation w/r/t the other goal we had in mind.)
>>
>> Personally i'm not sold yet but it's something that i'd consider if we
>> got measurements of how much more space/bandwidth usage this would
>> consume, and if we got some further details/examples about how serious
>> are the security concerns if we leave config mgmt tools in runtime images.
>>
>> IIRC the other options (that were brought forward so far) were already
>> dismissed in yesterday's IRC discussion and on the reviews. Bin/lib bind
>> mounting being too hacky and fragile, and nsenter not really solving the
>> problem (because it allows us to switch to having different bins/libs
>> available, but it does not allow merging the availability of bins/libs
>> from two containers into a single context).
>>
>>>
>>> We are going in circles here I think
>>
>> +1. I think too much of the discussion focuses on "why it's bad to have
>> config tools in runtime images", but IMO we all sorta agree that it
>> would be better not to have them there, if it came at no cost.
>>
>> I think to move forward, it would be interesting to know: if we do this
>> (i'll borrow Dan's drawing):
>>
>> |base container| --> |service container| --> |service container w/
>> Puppet installed|
>>
>> How much more space and bandwidth would this consume per node (e.g.
>> separately per controller, per compute). This could help with decision
>> making.
>
> As I've already evaluated in the related bug, that is:
>
> puppet-* modules and manifests ~ 16MB
> puppet with dependencies ~61MB
> dependencies of the seemingly largest dependency, systemd ~190MB
>
> that would be an extra layer size for each of the container images to be
> downloaded/fetched into registries.

Thanks, i tried to do the math of the reduction vs. inflation in sizes
as follows. I think the crucial point here is the layering. If we do
this image layering:

|base| --> |+ service| --> |+ Puppet|

we'd drop ~267 MB from base image, but we'd be installing that to the
topmost level, per-component, right?

In my basic deployment, undercloud seems to have 17 "components" (49
containers), overcloud controller 15 components (48 containers), and
overcloud compute 4 components (7 containers). Accounting for overlaps,
the total number of "components" used seems to be 19. (By "components"

Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes

2018-11-29 Thread Fox, Kevin M
If the base layers are shared, you won't pay extra for the separate puppet
container unless you have another container also installing ruby in an upper
layer. With OpenStack, that's unlikely.

The apparent size of a container is not equal to its actual size.
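Back-of-the-envelope illustration (all numbers invented, units MB):

    # With a shared base layer, the registry/disk only stores "base" once,
    # so a separate puppet image costs roughly its own top layer, not a
    # full extra copy of everything.
    base, service, puppet_layer = 500, 300, 80
    services = 19

    apparent = services * (base + service) + (base + puppet_layer)
    actual = base + services * service + puppet_layer  # base counted once

    print(f"apparent total: {apparent} MB")  # what per-image sizes suggest
    print(f"actual storage: {actual} MB")    # what is really stored/transferred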

Thanks,
Kevin

From: Jiří Stránský [ji...@redhat.com]
Sent: Thursday, November 29, 2018 9:42 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers 
for security and size of images (maintenance) sakes

On 28. 11. 18 18:29, Bogdan Dobrelya wrote:
> On 11/28/18 6:02 PM, Jiří Stránský wrote:
>> 
>>
>>>
>>> Reiterating again on previous points:
>>>
>>> -I'd be fine removing systemd. But let's do it properly and not via 'rpm
>>> -ev --nodeps'.
>>> -Puppet and Ruby *are* required for configuration. We can certainly put
>>> them in a separate container outside of the runtime service containers
>>> but doing so would actually cost you much more space/bandwidth for each
>>> service container. As both of these have to get downloaded to each node
>>> anyway in order to generate config files with our current mechanisms
>>> I'm not sure this buys you anything.
>>
>> +1. I was actually under the impression that we concluded yesterday on
>> IRC that this is the only thing that makes sense to seriously consider.
>> But even then it's not a win-win -- we'd gain some security by leaner
>> production images, but pay for it with space+bandwidth by duplicating
>> image content (IOW we can help achieve one of the goals we had in mind
>> by worsening the situation w/r/t the other goal we had in mind.)
>>
>> Personally i'm not sold yet but it's something that i'd consider if we
>> got measurements of how much more space/bandwidth usage this would
>> consume, and if we got some further details/examples about how serious
>> are the security concerns if we leave config mgmt tools in runtime images.
>>
>> IIRC the other options (that were brought forward so far) were already
>> dismissed in yesterday's IRC discussion and on the reviews. Bin/lib bind
>> mounting being too hacky and fragile, and nsenter not really solving the
>> problem (because it allows us to switch to having different bins/libs
>> available, but it does not allow merging the availability of bins/libs
>> from two containers into a single context).
>>
>>>
>>> We are going in circles here I think
>>
>> +1. I think too much of the discussion focuses on "why it's bad to have
>> config tools in runtime images", but IMO we all sorta agree that it
>> would be better not to have them there, if it came at no cost.
>>
>> I think to move forward, it would be interesting to know: if we do this
>> (i'll borrow Dan's drawing):
>>
>> |base container| --> |service container| --> |service container w/
>> Puppet installed|
>>
>> How much more space and bandwidth would this consume per node (e.g.
>> separately per controller, per compute). This could help with decision
>> making.
>
> As I've already evaluated in the related bug, that is:
>
> puppet-* modules and manifests ~ 16MB
> puppet with dependencies ~61MB
> dependencies of the seemingly largest dependency, systemd ~190MB
>
> that would be an extra layer size for each of the container images to be
> downloaded/fetched into registries.

Thanks, i tried to do the math of the reduction vs. inflation in sizes
as follows. I think the crucial point here is the layering. If we do
this image layering:

|base| --> |+ service| --> |+ Puppet|

we'd drop ~267 MB from base image, but we'd be installing that to the
topmost level, per-component, right?

In my basic deployment, undercloud seems to have 17 "components" (49
containers), overcloud controller 15 components (48 containers), and
overcloud compute 4 components (7 containers). Accounting for overlaps,
the total number of "components" used seems to be 19. (By "components"
here i mean whatever uses a different ConfigImage than other services. I
just eyeballed it but i think i'm not too far off the correct number.)

So we'd subtract 267 MB from the base image and add that to 19 leaf images
used in this deployment. That means a difference of +4.8 GB over the current
image sizes. My /var/lib/registry dir on undercloud with all the images
currently has 5.1 GB. We'd almost double that to 9.9 GB.

Going from 5.1 to 9.9 GB seems like a lot of extra traffic for the CDNs
(both external and e.g. internal within OpenStack Infra CI clouds).

And for internal traffic between local registry and overcloud nodes, it
gives +3.7 GB per controller and +800 MB per compute. That may not be so
critical but still feels like a considerable downside.
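(As a quick sketch of that arithmetic, using the ~267 MB figure and the
component counts above: the moved content leaves the one shared base layer
and reappears in every per-component leaf layer.)

    moved_mb = 267  # puppet-* + puppet deps + systemd deps, per the estimate above

    def net_increase_gb(components):
        # added to each leaf image, saved once from the shared base layer
        return (components - 1) * moved_mb / 1000.0

    print(f"registry:       +{net_increase_gb(19):.1f} GB")  # ~4.8 GB on top of 5.1 GB
    print(f"per controller: +{net_increase_gb(15):.1f} GB")  # ~3.7 GB
    print(f"per compute:    +{net_increase_gb(4):.1f} GB")   # ~0.8 GB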

Another gut feeling is that this way of image layering would take longer
to build and to run the modify-image Ansible role which we use in
CI, so that could endanger how our CI jobs fit into the time limit. We
could also probably measure this but i'm not sure if it's worth spending
the time.

All in all i'd 

Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes

2018-11-28 Thread Fox, Kevin M
Ok, so you have the workflow in place, but it sounds like the containers are
not laid out to make the best use of that workflow. Puppet is in the base
layer. That means whenever puppet gets updated, all the other containers must
be rebuilt too, along with other such update-coupling issues.

I'm with you, though, that binaries should not be copied from one container to
another.

Thanks,
Kevin

From: Dan Prince [dpri...@redhat.com]
Sent: Wednesday, November 28, 2018 5:31 AM
To: Former OpenStack Development Mailing List, use openstack-discuss now; 
openstack-disc...@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers 
for security and size of images (maintenance) sakes

On Wed, 2018-11-28 at 00:31 +, Fox, Kevin M wrote:
> The pod concept allows you to have one tool per container do one
> thing and do it well.
>
> You can have a container for generating config, and another container
> for consuming it.
>
> In a Kubernetes pod, if you still wanted to do puppet,
> you could have a pod that:
> 1. had an init container that ran puppet and dumped the resulting
> config to an emptyDir volume.
> 2. had your main container pull its config from the emptyDir volume.

We have basically implemented the same workflow in TripleO today. First
we execute Puppet in an "init container" (really just an ephemeral
container that generates the config files and then goes away). Then we
bind mount those configs into the service container.

One improvement we could make (which we aren't doing yet) is to use a
data container/volume to store the config files instead of using the
host. Sharing *data* within a 'pod' (set of containers, etc.) is
certainly a valid use of container volumes.
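The pattern is roughly the following (a sketch only: image, volume and path
names are made up, and this is not the actual docker-puppet.py invocation):

    # 1. ephemeral "init" container renders the config files and exits
    podman run --rm \
        -v nova_config:/var/lib/config-data \
        tripleo-nova-base \
        puppet apply --modulepath=/etc/puppet/modules /etc/puppet/manifests/nova.pp

    # 2. long-running service container only mounts the generated data, read-only
    podman run -d --name nova_api \
        -v nova_config:/var/lib/config-data:ro \
        tripleo-nova-api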

None of this is what we are really talking about in this thread though.
Most of the suggestions and patches are about making our base
container(s) smaller in size. And the means by which the patches do
that is to share binaries/applications across containers with custom
mounts/volumes. I don't think it is a good idea at all as it violates
encapsulation of the containers in general, regardless of whether we
use pods or not.

Dan


>
> Then each container would have no dependency on each other.
>
> In a full-blown Kubernetes cluster you might have puppet generate a
> configmap though and ship it to your main container directly. That's
> another matter though. I think the pod example above is still
> usable without k8s?
>
> Thanks,
> Kevin
> 
> From: Dan Prince [dpri...@redhat.com]
> Sent: Tuesday, November 27, 2018 10:10 AM
> To: OpenStack Development Mailing List (not for usage questions);
> openstack-disc...@lists.openstack.org
> Subject: Re: [openstack-dev] [TripleO][Edge] Reduce base layer of
> containers for security and size of images (maintenance) sakes
>
> On Tue, 2018-11-27 at 16:24 +0100, Bogdan Dobrelya wrote:
> > Changing the topic to follow the subject.
> >
> > [tl;dr] it's time to rearchitect container images to stop
> > including
> > config-time-only (puppet et al) bits, which are not needed at runtime
> > and
> > pose security issues, like CVEs, to maintain daily.
>
> I think your assertion that we need to rearchitect the config images
> to
> contain the puppet bits is incorrect here.
>
> After reviewing the patches you linked to below it appears that you
> are
> proposing we use --volumes-from to bind mount application binaries
> from
> one container into another. I don't believe this is a good pattern
> for
> containers. On baremetal if we followed the same pattern it would be
> like using an /nfs share to obtain access to binaries across the
> network to optimize local storage. Now... some people do this (like
> maybe high performance computing would launch an MPI job like this)
> but
> I don't think we should consider it best practice for our containers
> in
> TripleO.
>
> Each container should contain its own binaries and libraries as
> much
> as possible. And while I do think we should be using --volumes-from
> more often in TripleO it would be for sharing *data* between
> containers, not binaries.
>
>
> > Background:
> > 1) For the Distributed Compute Node edge case, there are potentially
> > tens
> > of thousands of single-compute-node remote edge sites connected
> > over
> > WAN to a single control plane, which has high latency, like
> > 100ms or so, and limited bandwidth. Reducing the base layer size
> > becomes
> > a decent goal there. See the security background below.
>
> The reason we put Puppet into the base layer was in fact to prevent
> it
> from being downloaded multiple times. If we were to re-architect the
> image

Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes

2018-11-27 Thread Fox, Kevin M
The pod concept allows you to have one tool per container do one thing and do 
it well.

You can have a container for generating config, and another container for 
consuming it.

In a Kubernetes pod, if you still wanted to do puppet,
you could have a pod that:
1. had an init container that ran puppet and dumped the resulting config to an 
emptyDir volume.
2. had your main container pull its config from the emptyDir volume.

Then each container would have no dependency on each other.
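A minimal pod sketch of that idea (image names, paths and the manifest are
placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: nova-api
    spec:
      volumes:
        - name: config
          emptyDir: {}
      initContainers:
        - name: generate-config
          image: example/nova-config-puppet   # config-time image with puppet + manifests
          command: ["puppet", "apply", "/etc/puppet/manifests/nova.pp"]
          volumeMounts:
            - name: config
              mountPath: /etc/nova            # manifest writes the rendered config here
      containers:
        - name: nova-api
          image: example/nova-api             # runtime image without puppet/ruby
          volumeMounts:
            - name: config
              mountPath: /etc/nova
              readOnly: true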

In a full-blown Kubernetes cluster you might have puppet generate a configmap
though and ship it to your main container directly. That's another matter
though. I think the pod example above is still usable without k8s?

Thanks,
Kevin

From: Dan Prince [dpri...@redhat.com]
Sent: Tuesday, November 27, 2018 10:10 AM
To: OpenStack Development Mailing List (not for usage questions); 
openstack-disc...@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO][Edge] Reduce base layer of containers 
for security and size of images (maintenance) sakes

On Tue, 2018-11-27 at 16:24 +0100, Bogdan Dobrelya wrote:
> Changing the topic to follow the subject.
>
> [tl;dr] it's time to rearchitect container images to stop including
> config-time-only (puppet et al) bits, which are not needed at runtime
> and
> pose security issues, like CVEs, to maintain daily.

I think your assertion that we need to rearchitect the config images to
contain the puppet bits is incorrect here.

After reviewing the patches you linked to below it appears that you are
proposing we use --volumes-from to bind mount application binaries from
one container into another. I don't believe this is a good pattern for
containers. On baremetal if we followed the same pattern it would be
like using an /nfs share to obtain access to binaries across the
network to optimize local storage. Now... some people do this (like
maybe high performance computing would launch an MPI job like this) but
I don't think we should consider it best practice for our containers in
TripleO.

Each container should contain its own binaries and libraries as much
as possible. And while I do think we should be using --volumes-from
more often in TripleO it would be for sharing *data* between
containers, not binaries.


>
> Background:
> 1) For the Distributed Compute Node edge case, there are potentially
> tens
> of thousands of single-compute-node remote edge sites connected
> over
> WAN to a single control plane, which has high latency, like
> 100ms or so, and limited bandwidth. Reducing the base layer size
> becomes
> a decent goal there. See the security background below.

The reason we put Puppet into the base layer was in fact to prevent it
from being downloaded multiple times. If we were to re-architect the
image layers such that the child layers all contained their own copies
of Puppet for example there would actually be a net increase in
bandwidth and disk usage. So I would argue we are already addressing
the goal of optimizing network and disk space.

Moving it out of the base layer so that you can patch it more often
without disrupting other services is a valid concern. But addressing
this concern while also preserving our definition of a container (see
above, a container should contain all of its binaries) is going to cost
you something, namely disk and network space because Puppet would need
to be duplicated in each child container.

As Puppet is used to configure a majority of the services in TripleO,
having it in the base container makes the most sense. And yes, if there are
security patches for Puppet/Ruby those might result in a bunch of
containers getting pushed. But let Docker layers take care of this I
think... Don't try to solve things by constructing your own custom
mounts and volumes to work around the issue.


> 2) For a generic security (Day 2, maintenance) case, when
> puppet/ruby/systemd/name-it gets a CVE fixed, the base layer has to
> be
> updated and all layers on top have to be rebuilt, and all of those
> layers
> have to be re-fetched by cloud hosts and all containers have to be
> restarted...
> And all of that because of some fixes that have nothing to do with OpenStack.
> By
> the remote edge sites as well; remember the "tens of thousands", high
> latency and limited bandwidth?..
> 3) TripleO CI updates (including puppet*) packages in containers, not
> in
> a common base layer of those. So each CI job has to update puppet*
> and
> its dependencies - ruby/systemd as well. Reducing the number of packages
> to
> update for each container makes sense for CI as well.
>
> Implementation related:
>
> WIP patches [0],[1] for early review use a config "pod" approach and
> do
> not require maintaining two sets of config vs runtime images.
> Future
> work: a) cronie requires systemd, we'd want to fix that also off the
> base layer. b) rework to podman pods for docker-puppet.py instead of
> --volumes-from a side car container (can't be backported 

Re: [Openstack-operators] Openstack zun on centos???

2018-11-14 Thread Fox, Kevin M
kolla installs it via containers.

Thanks,
Kevin

From: Ignazio Cassano [ignaziocass...@gmail.com]
Sent: Wednesday, November 14, 2018 10:48 AM
To: Eduardo Gonzalez
Cc: OpenStack Operators
Subject: Re: [Openstack-operators] Openstack zun on centos???

Hi Edoardo,
does it mean openstack kolla installs zun using pip ?
I did not find any zun rpm package
Regards
Ignazio

On Wed, 14 Nov 2018 at 18:38, Eduardo Gonzalez
<dabar...@gmail.com> wrote:
Hi Cassano, you can use zun on CentOS deployed by kolla-ansible.

https://docs.openstack.org/kolla-ansible/latest/reference/zun-guide.html

Regards

On Wed, 14 Nov 2018 at 17:11, Ignazio Cassano
<ignaziocass...@gmail.com> wrote:
Hi All,
I'd like to know if openstack zun will be released for CentOS.
Reading the documentation at docs.openstack.org, only the
Ubuntu installation is reported.
Many thanks
Ignazio


Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR

2018-10-25 Thread Fox, Kevin M
Can you use a provider network to expose galera to the vm?

Alternately, you could put a db on the vm side. You don't strictly need to
use the same db for every component. If crossing the streams is hard, maybe
avoiding crossing at all is easier?

Thanks,
Kevin

From: Florian Engelmann [florian.engelm...@everyware.ch]
Sent: Thursday, October 25, 2018 8:37 AM
To: Fox, Kevin M; openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without 
DVR

you mean deploy octavia into an openstack project? But I will then need
to connect the octavia services with my galera DBs... so same problem.

Am 10/25/18 um 5:31 PM schrieb Fox, Kevin M:
> Would it make sense to move the control plane for this piece into the
> cluster? (vm in a management tenant?)
>
> Thanks,
> Kevin
> 
> From: Florian Engelmann [florian.engelm...@everyware.ch]
> Sent: Thursday, October 25, 2018 7:39 AM
> To: openstack-operators@lists.openstack.org
> Subject: Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without 
> DVR
>
It looks like devstack implemented some o-hm0 interface to connect the
physical control host to a VxLAN.
In our case there is no VxLAN at the control nodes, nor OVS.

Is it an option to deploy those Octavia services needing this connection
to the compute or network nodes and use o-hm0?
>
> Am 10/25/18 um 10:22 AM schrieb Florian Engelmann:
>> Or could I create lb-mgmt-net as VxLAN and connect the control nodes to
>> this VxLAN? How to do something like that?
>>
>> Am 10/25/18 um 10:03 AM schrieb Florian Engelmann:
>>> Hmm - so right now I can't see any routed option because:
>>>
>>> The gateway connected to the VLAN provider networks (bond1 on the
>>> network nodes) is not able to route any traffic to my control nodes in
>>> the spine-leaf layer3 backend network.
>>>
>>> And right now there is no br-ex at all nor any "stretched" L2 domain
>>> connecting all compute nodes.
>>>
>>>
>>> So the only solution I can think of right now is to create an overlay
>>> VxLAN in the spine-leaf backend network, connect all compute and
>>> control nodes to this overlay L2 network, create an OVS bridge
>>> connected to that network on the compute nodes and allow the Amphorae
>>> to get an IP in this network as well.
>>> Not to forget about DHCP... so the network nodes will need this bridge
>>> as well.
>>>
>>> Am 10/24/18 um 10:01 PM schrieb Erik McCormick:
>>>>
>>>>
>>>> On Wed, Oct 24, 2018, 3:33 PM Engelmann Florian
>>>> <florian.engelm...@everyware.ch> wrote:
>>>>
>>>>  On the network nodes we've got a dedicated interface to deploy VLANs
>>>>  (like the provider network for internet access). What about creating
>>>>  another VLAN on the network nodes, give that bridge a IP which is
>>>>  part of the subnet of lb-mgmt-net and start the octavia worker,
>>>>  healthmanager and controller on the network nodes binding to that
>>>> IP?
>>>>
>>>> The problem with that is you can't put an IP on the vlan interface
>>>> and also use it as an OVS bridge, so the Octavia processes would have
>>>> nothing to bind to.
>>>>
>>>>
>>>> 
>>>>  *From:* Erik McCormick <emccorm...@cirrusseven.com>
>>>>  *Sent:* Wednesday, October 24, 2018 6:18 PM
>>>>  *To:* Engelmann Florian
>>>>  *Cc:* openstack-operators
>>>>  *Subject:* Re: [Openstack-operators] [octavia][rocky] Octavia and
>>>>  VxLAN without DVR
>>>>
>>>>
>>>>  On Wed, Oct 24, 2018, 12:02 PM Florian Engelmann
>>>>  <florian.engelm...@everyware.ch> wrote:
>>>>
>>>>  Am 10/24/18 um 2:08 PM schrieb Erik McCormick:
>>>>   >
>>>>   >
>>>>   > On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann
>>>>   > <florian.engelm...@everyware.ch>
>>>>   > wrote:
>>>>   >
>>>>   > Oho

Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without DVR

2018-10-25 Thread Fox, Kevin M
Would it make sense to move the control plane for this piece into the cluster?
(vm in a management tenant?)

Thanks,
Kevin

From: Florian Engelmann [florian.engelm...@everyware.ch]
Sent: Thursday, October 25, 2018 7:39 AM
To: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] [octavia][rocky] Octavia and VxLAN without 
DVR

It looks like devstack implemented some o-hm0 interface to connect the
physical control host to a VxLAN.
In our case there is no VxLAN at the control nodes, nor OVS.

Is it an option to deploy those Octavia services needing this connection
to the compute or network nodes and use o-hm0?

Am 10/25/18 um 10:22 AM schrieb Florian Engelmann:
> Or could I create lb-mgmt-net as VxLAN and connect the control nodes to
> this VxLAN? How to do something like that?
>
> Am 10/25/18 um 10:03 AM schrieb Florian Engelmann:
>> Hmm - so right now I can't see any routed option because:
>>
>> The gateway connected to the VLAN provider networks (bond1 on the
>> network nodes) is not able to route any traffic to my control nodes in
>> the spine-leaf layer3 backend network.
>>
>> And right now there is no br-ex at all nor any "stretched" L2 domain
>> connecting all compute nodes.
>>
>>
>> So the only solution I can think of right now is to create an overlay
>> VxLAN in the spine-leaf backend network, connect all compute and
>> control nodes to this overlay L2 network, create an OVS bridge
>> connected to that network on the compute nodes and allow the Amphorae
>> to get an IP in this network as well.
>> Not to forget about DHCP... so the network nodes will need this bridge
>> as well.
>>
>> Am 10/24/18 um 10:01 PM schrieb Erik McCormick:
>>>
>>>
>>> On Wed, Oct 24, 2018, 3:33 PM Engelmann Florian
>>> <florian.engelm...@everyware.ch> wrote:
>>>
>>> On the network nodes we've got a dedicated interface to deploy VLANs
>>> (like the provider network for internet access). What about creating
>>> another VLAN on the network nodes, give that bridge a IP which is
>>> part of the subnet of lb-mgmt-net and start the octavia worker,
>>> healthmanager and controller on the network nodes binding to that
>>> IP?
>>>
>>> The problem with that is you can't put an IP on the vlan interface
>>> and also use it as an OVS bridge, so the Octavia processes would have
>>> nothing to bind to.
>>>
>>>
>>> 
>>> *From:* Erik McCormick <emccorm...@cirrusseven.com>
>>> *Sent:* Wednesday, October 24, 2018 6:18 PM
>>> *To:* Engelmann Florian
>>> *Cc:* openstack-operators
>>> *Subject:* Re: [Openstack-operators] [octavia][rocky] Octavia and
>>> VxLAN without DVR
>>>
>>>
>>> On Wed, Oct 24, 2018, 12:02 PM Florian Engelmann
>>> <florian.engelm...@everyware.ch> wrote:
>>>
>>> Am 10/24/18 um 2:08 PM schrieb Erik McCormick:
>>>  >
>>>  >
>>>  > On Wed, Oct 24, 2018, 3:14 AM Florian Engelmann
>>>  > <florian.engelm...@everyware.ch>
>>>  > wrote:
>>>  >
>>>  > Ohoh - thank you for your empathy :)
>>>  > And those great details about how to setup this mgmt
>>> network.
>>>  > I will try to do so this afternoon but solving that
>>> routing "puzzle"
>>>  > (virtual network to control nodes) I will need our
>>> network guys to help
>>>  > me out...
>>>  >
>>>  > But I will need to tell all Amphorae a static route to
>>> the gateway that
>>>  > is routing to the control nodes?
>>>  >
>>>  >
>>>  > Just set the default gateway when you create the neutron
>>> subnet. No need
>>>  > for excess static routes. The route on the other connection
>>> won't
>>>  > interfere with it as it lives in a namespace.
>>>
>>>
>>> My compute nodes have no br-ex and there is no L2 domain spread
>>> over all
>>> compute nodes. As far as I understood lb-mgmt-net is a provider
>>> network
>>> and has to be flat or VLAN and will need a "physical" gateway
>>> (as there
>>> is no virtual router).
>>> So the question - is it possible to get octavia up and running
>>> without a
>>> br-ex (L2 domain spread over all compute nodes) on the compute
>>> nodes?
>>>
>>>
>>> Sorry, I only meant it was *like* br-ex on your network nodes. You
>>> don't need that on your computes.
>>>
>>> The router here would be whatever does routing in your physical
>>> network. Setting the gateway in the neutron subnet simply adds that
>>> to the DHCP information sent to the amphorae.
>>>
>>> This does bring up another thing I 

Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints

2018-10-19 Thread Fox, Kevin M
Adding a stateless service-discovery provider on top of the existing etcd
key-value store would be pretty easy with something like coredns, I think,
without adding another stateful storage dependency.
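e.g. a Corefile along these lines (zone, key prefix and etcd endpoints are
placeholders; syntax per the coredns etcd plugin):

    openstack.internal {
        etcd {
            path /skydns
            endpoint http://etcd-1:2379 http://etcd-2:2379
        }
        cache 30
        log
    }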

I don't really have a horse in the game other than that I'm an operator and
we're feeling overwhelmed by all the stateful stuff to maintain.

If consul is entirely optional, it's probably fine to add the feature. But I
worry operators may avoid it.

Thanks,
Kevin

From: Florian Engelmann [florian.engelm...@everyware.ch]
Sent: Friday, October 19, 2018 1:17 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, 
fabio and FQDN endpoints

> No, I mean, Consul would be an extra dependency in a big list of dependencies 
> OpenStack already has. OpenStack has so many it is causing operators to 
> reconsider adoption. I'm asking, if existing dependencies can be made to 
> solve the problem without adding more?
>
> Stateful dependencies are much harder to deal with than stateless ones, as
> they take much more operator care/attention. Consul is stateful, as is etcd,
> and etcd is already a dependency.
>
> Can etcd be used instead so as not to put more load on the operators?

While etcd is a strong KV store, it lacks many features consul has. Using
consul for DNS-based service discovery is very easy to implement without
making it a dependency.
So we will start with an "external" consul and see how to handle the
service registration without modifying the kolla containers or any
kolla-ansible code.
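e.g. a per-service registration dropped into the consul agent's config
directory, roughly like this (names, addresses and the health-check URL are
invented):

    {
      "service": {
        "name": "keystone-internal",
        "address": "10.0.0.11",
        "port": 5000,
        "tags": ["openstack", "api"],
        "check": {
          "http": "http://10.0.0.11:5000/v3",
          "interval": "10s"
        }
      }
    }

Consul's DNS interface would then answer for keystone-internal.service.consul
without touching the kolla images themselves.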

All the best,
Flo


>
> Thanks,
> Kevin
> 
> From: Florian Engelmann [florian.engelm...@everyware.ch]
> Sent: Wednesday, October 10, 2018 12:18 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, 
> fabio and FQDN endpoints
>
> by "another storage system" you mean the KV store of consul? That's just
> something consul brings with it...
>
> consul is very strong in doing health checks
>
> Am 10/9/18 um 6:09 PM schrieb Fox, Kevin M:
>> etcd is an already approved openstack dependency. Could that be used instead 
>> of consul so as to not add yet another storage system? coredns with the 
>> https://coredns.io/plugins/etcd/ plugin would maybe do what you need?
>>
>> Thanks,
>> Kevin
>> 
>> From: Florian Engelmann [florian.engelm...@everyware.ch]
>> Sent: Monday, October 08, 2018 3:14 AM
>> To: openstack-dev@lists.openstack.org
>> Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, 
>> fabio and FQDN endpoints
>>
>> Hi,
>>
>> I would like to start a discussion about some changes and additions I
>> would like to see in in kolla and kolla-ansible.
>>
>> 1. Keepalived is a problem in layer3 spine leaf networks as any floating
>> IP can only exist in one leaf (and VRRP is a problem in layer3). I would
>> like to use consul and registrar to get rid of the "internal" floating
>> IP and use consuls DNS service discovery to connect all services with
>> each other.
>>
>> 2. Using "ports" for external API (endpoint) access is a major headache
>> if a firewall is involved. I would like to configure the HAProxy (or
>> fabio) for the external access to use "Host:" like, eg. "Host:
>> keystone.somedomain.tld", "Host: nova.somedomain.tld", ... with HTTPS.
>> Any customer would just need HTTPS access and not have to open all those
>> ports in his firewall. For some enterprise customers it is not possible
>> to request FW changes like that.
>>
>> 3. HAProxy is not capable of handling a "read/write" split with Galera. I
>> would like to introduce ProxySQL to be able to scale Galera.
>>
>> 4. HAProxy is fine but fabio integrates well with consul, statsd and
>> could be connected to a vault cluster to manage secure certificate access.
>>
>> 5. I would like to add vault as Barbican backend.
>>
>> 6. I would like to add an option to enable tokenless authentication for
>> all services with each other to get rid of all the openstack service
>> passwords (security issue).
>>
>> What do you think about it?
>>
>> All the best,
>> Florian
>>

Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints

2018-10-17 Thread Fox, Kevin M
No, I mean, Consul would be an extra dependency in a big list of dependencies
OpenStack already has. OpenStack has so many that it is causing operators to
reconsider adoption. I'm asking if existing dependencies can be made to solve
the problem without adding more.

Stateful dependencies are much harder to deal with than stateless ones, as they
take much more operator care/attention. Consul is stateful, as is etcd, and etcd
is already a dependency.

Can etcd be used instead so as not to put more load on the operators?

Thanks,
Kevin

From: Florian Engelmann [florian.engelm...@everyware.ch]
Sent: Wednesday, October 10, 2018 12:18 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, 
fabio and FQDN endpoints

by "another storage system" you mean the KV store of consul? That's just
someting consul brings with it...

consul is very strong in doing health checks

Am 10/9/18 um 6:09 PM schrieb Fox, Kevin M:
> etcd is an already approved openstack dependency. Could that be used instead 
> of consul so as to not add yet another storage system? coredns with the 
> https://coredns.io/plugins/etcd/ plugin would maybe do what you need?
>
> Thanks,
> Kevin
> 
> From: Florian Engelmann [florian.engelm...@everyware.ch]
> Sent: Monday, October 08, 2018 3:14 AM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, 
> fabio and FQDN endpoints
>
> Hi,
>
> I would like to start a discussion about some changes and additions I
> would like to see in in kolla and kolla-ansible.
>
> 1. Keepalived is a problem in layer3 spine leaf networks as any floating
> IP can only exist in one leaf (and VRRP is a problem in layer3). I would
> like to use consul and registrar to get rid of the "internal" floating
> IP and use consuls DNS service discovery to connect all services with
> each other.
>
> 2. Using "ports" for external API (endpoint) access is a major headache
> if a firewall is involved. I would like to configure the HAProxy (or
> fabio) for the external access to use "Host:" like, eg. "Host:
> keystone.somedomain.tld", "Host: nova.somedomain.tld", ... with HTTPS.
> Any customer would just need HTTPS access and not have to open all those
> ports in his firewall. For some enterprise customers it is not possible
> to request FW changes like that.
>
> 3. HAProxy is not capable of handling a "read/write" split with Galera. I
> would like to introduce ProxySQL to be able to scale Galera.
>
> 4. HAProxy is fine but fabio integrates well with consul, statsd and
> could be connected to a vault cluster to manage secure certificate access.
>
> 5. I would like to add vault as Barbican backend.
>
> 6. I would like to add an option to enable tokenless authentication for
> all services with each other to get rid of all the openstack service
> passwords (security issue).
>
> What do you think about it?
>
> All the best,
> Florian
>
>

--

EveryWare AG
Florian Engelmann
Systems Engineer
Zurlindenstrasse 52a
CH-8003 Zürich

tel: +41 44 466 60 00
fax: +41 44 466 60 10
mail: mailto:florian.engelm...@everyware.ch
web: http://www.everyware.ch



Re: [openstack-dev] [kolla][tc] add service discovery, proxysql, vault, fabio and FQDN endpoints

2018-10-11 Thread Fox, Kevin M
My understanding is it is still safe-ish to use when you deal with it right. It
causes a transaction abort if the race condition ever hits, and you can keep
retrying until your commit makes it. So, there are two issues here:
1. It's a rarer kind of abort, so unless you are testing for it and retrying, it
can cause operations to fail in a way the user might notice, needlessly. This is
bad. It should be tested for in the gate.
2. In highly contended systems, it can be a performance issue. This is less bad
than #1. For certain code paths, it may never be a problem.

Thanks,
Kevin

From: Zane Bitter [zbit...@redhat.com]
Sent: Thursday, October 11, 2018 10:08 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [kolla][tc] add service discovery, proxysql, 
vault, fabio and FQDN endpoints

On 10/10/18 1:35 PM, Jay Pipes wrote:
> +tc topic
>
> On 10/10/2018 11:49 AM, Fox, Kevin M wrote:
>> Sorry, couldn't quite think of the name. I meant openstack
>> project tags.
>
> I think having a tag that indicates the project is no longer using
> SELECT FOR UPDATE (and thus is safe to use multi-writer Galera) is an
> excellent idea, Kevin. ++

I would support such a tag, especially if it came with detailed
instructions on how to audit your code to make sure you are not doing
this with sqlalchemy. (Bonus points for a flake8 plugin that can be
enabled in the gate.)
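(A crude first pass at such an audit, short of a real flake8 plugin, could be
a grep-style script like this sketch:)

    # Flag raw "SELECT ... FOR UPDATE" strings and SQLAlchemy's
    # with_for_update() calls for manual review.
    import pathlib
    import re
    import sys

    PATTERNS = [
        re.compile(r"SELECT\s.+\sFOR\s+UPDATE", re.IGNORECASE | re.DOTALL),
        re.compile(r"\bwith_for_update\s*\("),
    ]

    for path in pathlib.Path(sys.argv[1]).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for pattern in PATTERNS:
            for match in pattern.finditer(text):
                line = text.count("\n", 0, match.start()) + 1
                print(f"{path}:{line}: possible SELECT FOR UPDATE usage")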

(One question for clarification: is this actually _required_ to use
multi-writer Galera? My previous recollection was that it was possible,
but inefficient, to use SELECT FOR UPDATE safely as long as you wrote a
lot of boilerplate to restart the transaction if it failed.)

> -jay
>
>> 
>> From: Jay Pipes [jaypi...@gmail.com]
>> Sent: Tuesday, October 09, 2018 12:22 PM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [kolla] add service discovery, proxysql,
>> vault, fabio and FQDN endpoints
>>
>> On 10/09/2018 03:10 PM, Fox, Kevin M wrote:
>>> Oh, this does raise an interesting question... Should such
>>> information be reported by the projects up to users through labels?
>>> Something like "percona_multimaster=safe". It's really difficult for
>>> folks to know which projects can and cannot be used that way currently.
>>
>> Are you referring to k8s labels/selectors? or are you referring to
>> project tags (you know, part of that whole Big Tent thing...)?
>>
>> -jay
>>


Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints

2018-10-10 Thread Fox, Kevin M
Sorry, couldn't quite think of the name. I meant openstack project tags.

Thanks,
Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Tuesday, October 09, 2018 12:22 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, 
fabio and FQDN endpoints

On 10/09/2018 03:10 PM, Fox, Kevin M wrote:
> Oh, this does raise an interesting question... Should such information be 
> reported by the projects up to users through labels? Something like
> "percona_multimaster=safe". It's really difficult for folks to know which
> projects can and cannot be used that way currently.

Are you referring to k8s labels/selectors? or are you referring to
project tags (you know, part of that whole Big Tent thing...)?

-jay



Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints

2018-10-09 Thread Fox, Kevin M
Oh, this does raise an interesting question... Should such information be
reported by the projects up to users through labels? Something like
"percona_multimaster=safe". It's really difficult for folks to know which
projects can and cannot be used that way currently.

Is this a TC question?

Thanks,
Kevin

From: melanie witt [melwi...@gmail.com]
Sent: Tuesday, October 09, 2018 10:35 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, 
fabio and FQDN endpoints

On Tue, 9 Oct 2018 07:23:03 -0400, Jay Pipes wrote:
> That explains where the source of the problem comes from (it's the use
> of SELECT FOR UPDATE, which has been removed from Nova's quota-handling
> code in the Rocky release).

Small correction, the SELECT FOR UPDATE was removed from Nova's
quota-handling code in the Pike release.

-melanie






Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints

2018-10-09 Thread Fox, Kevin M
etcd is an already approved openstack dependency. Could that be used instead of 
consul so as to not add yet another storage system? coredns with the 
https://coredns.io/plugins/etcd/ plugin would maybe do what you need?

Thanks,
Kevin

From: Florian Engelmann [florian.engelm...@everyware.ch]
Sent: Monday, October 08, 2018 3:14 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio 
and FQDN endpoints

Hi,

I would like to start a discussion about some changes and additions I
would like to see in in kolla and kolla-ansible.

1. Keepalived is a problem in layer3 spine leaf networks as any floating
IP can only exist in one leaf (and VRRP is a problem in layer3). I would
like to use consul and registrar to get rid of the "internal" floating
IP and use consuls DNS service discovery to connect all services with
each other.

2. Using "ports" for external API (endpoint) access is a major headache
if a firewall is involved. I would like to configure the HAProxy (or
fabio) for the external access to use "Host:" like, eg. "Host:
keystone.somedomain.tld", "Host: nova.somedomain.tld", ... with HTTPS.
Any customer would just need HTTPS access and not have to open all those
ports in his firewall. For some enterprise customers it is not possible
to request FW changes like that.
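For illustration, a single HTTPS frontend doing that Host-based routing could
look roughly like this in haproxy.cfg (hostnames, addresses and backend names
are invented):

    frontend public_https
        bind *:443 ssl crt /etc/haproxy/certs/
        acl host_keystone hdr(host) -i keystone.somedomain.tld
        acl host_nova     hdr(host) -i nova.somedomain.tld
        use_backend keystone_api if host_keystone
        use_backend nova_api     if host_nova

    backend keystone_api
        server ctl1 10.0.0.11:5000 check
        server ctl2 10.0.0.12:5000 check

    backend nova_api
        server ctl1 10.0.0.11:8774 check
        server ctl2 10.0.0.12:8774 check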

3. HAProxy is not capable of handling a "read/write" split with Galera. I
would like to introduce ProxySQL to be able to scale Galera.

4. HAProxy is fine but fabio integrates well with consul, statsd and
could be connected to a vault cluster to manage secure certificate access.

5. I would like to add vault as Barbican backend.

6. I would like to add an option to enable tokenless authentication for
all services with each other to get rid of all the openstack service
passwords (security issue).

What do you think about it?

All the best,
Florian



Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, fabio and FQDN endpoints

2018-10-09 Thread Fox, Kevin M
There are specific cases where it expects the client to retry, and not all code
handles that case. It's safe when funneling all traffic to one server; it can be
unsafe otherwise.

Thanks,
Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Monday, October 08, 2018 10:48 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [kolla] add service discovery, proxysql, vault, 
fabio and FQDN endpoints

On 10/08/2018 06:14 AM, Florian Engelmann wrote:
> 3. HAProxy is not capable to handle "read/write" split with Galera. I
> would like to introduce ProxySQL to be able to scale Galera.

Why not send all read and all write traffic to a single haproxy endpoint
and just have haproxy spread all traffic across each Galera node?

Galera, after all, is multi-master synchronous replication... so it
shouldn't matter which node in the Galera cluster you send traffic to.
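That single endpoint could be roughly this in haproxy.cfg (addresses
invented):

    listen galera
        bind 10.0.0.10:3306
        balance leastconn
        option tcpka
        server galera1 10.0.0.11:3306 check
        server galera2 10.0.0.12:3306 check
        server galera3 10.0.0.13:3306 check
        # note: many deployments instead mark two nodes as 'backup' so all
        # traffic funnels to a single node at a time, the safer setup
        # discussed earlier in this thread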

-jay



Re: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-27 Thread Fox, Kevin M
It's the commons problem again. Either we encourage folks to contribute a
little bit to the commons (review a few other people's non-compute CLI things;
in doing so, you learn how to better do the CLI in the generic/user-friendly
ways) in order to further their own project goals (get easier access to
contribute to the CLI of the compute stuff), or we do what we've always done:
let each project maintain its own CLI and have no uniformity at all. Why are
the walls in OpenStack so high?

Kevin

From: Matt Riedemann [mriede...@gmail.com]
Sent: Thursday, September 27, 2018 12:35 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting 
goal selection for T series

On 9/27/2018 2:33 PM, Fox, Kevin M wrote:
> If the project plugins were maintained by the OSC project still, maybe there 
> would be incentive for the various other projects to join the OSC project, 
> scaling things up?

Sure, I don't really care about governance. But I also don't really care
about all of the non-compute API things in OSC either.

--

Thanks,

Matt



Re: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-27 Thread Fox, Kevin M
If the project plugins were maintained by the OSC project still, maybe there 
would be incentive for the various other projects to join the OSC project, 
scaling things up?

Thanks,
Kevin

From: Matt Riedemann [mriede...@gmail.com]
Sent: Thursday, September 27, 2018 12:12 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [Openstack-sigs] [goals][tc][ptl][uc] starting 
goal selection for T series

On 9/27/2018 10:13 AM, Dean Troyer wrote:
> On Thu, Sep 27, 2018 at 9:10 AM, Doug Hellmann  wrote:
>> Monty Taylor  writes:
>>> Main difference is making sure these new deconstructed plugin teams
>>> understand the client support lifecycle - which is that we don't drop
>>> support for old versions of services in OSC (or SDK). It's a shift from
>>> the support lifecycle and POV of python-*client, but it's important and
>>> we just need to all be on the same page.
>> That sounds like a reason to keep the governance of the libraries under
>> the client tool project.
> Hmmm... I think that may address a big chunk of my reservations about
> being able to maintain consistency and user experience in a fully
> split-OSC world.
>
> dt

My biggest worry with splitting everything out into plugins with new
core teams, even with python-openstackclient-core as a superset, is that
those core teams will all start approving things that don't fit with the
overall guidelines of how OSC commands should be written. I've had to go
to the "Dean well" several times when reviewing osc-placement commands.

But the python-openstackclient-core team probably isn't going to scale
to fit the need of all of these gaps that need closing from the various
teams, either. So how does that get fixed? I've told Dean and Steve
before that if they want me to review / ack something compute-specific
in OSC that they can call on me, like a liaison. Maybe that's all we
need to start? Because I've definitely disagreed with compute CLI
changes in OSC that have a +2 from the core team because of a lack of
understanding from both the contributor and the reviewers about what the
compute API actually does, or how a microversion behaves. Or maybe we
just do some kind of subteam thing where OSC core doesn't look at a
change until the subteam has +1ed it. We have a similar concept in nova
with virt driver subteams.

--

Thanks,

Matt



Re: [Openstack-operators] [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T series

2018-09-26 Thread Fox, Kevin M
+1 :)

From: Tim Bell [tim.b...@cern.ch]
Sent: Wednesday, September 26, 2018 11:55 AM
To: OpenStack Development Mailing List (not for usage questions); 
openstack-operators; openstack-sigs
Subject: Re: [Openstack-sigs] [openstack-dev] [goals][tc][ptl][uc] starting 
goal selection for T series

Doug,

Thanks for raising this. I'd like to highlight the goal "Finish moving legacy 
python-*client CLIs to python-openstackclient" from the etherpad and propose 
this for a T/U series goal.

To give it some context and the motivation:

At CERN, we have more than 3000 users of the OpenStack cloud. We write
extensive end-user-facing documentation which explains how to use OpenStack
along with CERN-specific features (such as workflows for requesting
projects/quotas/etc.).

One regular problem we come across is that the end user experience is
inconsistent. In some cases, we find projects which are not covered by the
unified OpenStack client (e.g. Manila). In other cases, there are subsets of
the functionality which require the native project client.

I would strongly support a goal which targets

- All new projects should have the end user facing functionality fully exposed 
via the unified client
- Existing projects should aim to close the gap within 'N' cycles (N to be 
defined)
- Many administrator actions would also benefit from integration (reader roles 
are end users too so list and show need to be covered too)
- Users should be able to use a single openrc for all interactions with the 
cloud (e.g. not switch between password for some CLIs and Kerberos for OSC)

The end user perception of a solution will be greatly enhanced by a single 
command line tool with consistent syntax and authentication framework.
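
As a purely illustrative sketch of that inconsistency (the specific commands
are examples of the pattern, not an exhaustive gap list):

# Covered by the unified client:
openstack server list
openstack volume list
# Not covered at the time -- users fall back to the native client, often with
# a different authentication setup:
manila list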

It may be a multi-release goal but it would really benefit the cloud consumers 
and I feel that goals should include this audience also.

Tim

-Original Message-
From: Doug Hellmann 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 

Date: Wednesday, 26 September 2018 at 18:00
To: openstack-dev , openstack-operators 
, openstack-sigs 

Subject: [openstack-dev] [goals][tc][ptl][uc] starting goal selection for T 
series

It's time to start thinking about community-wide goals for the T series.

We use community-wide goals to achieve visible common changes, push for
basic levels of consistency and user experience, and efficiently improve
certain areas where technical debt payments have become too high -
across all OpenStack projects. Community input is important to ensure
that the TC makes good decisions about the goals. We need to consider
the timing, cycle length, priority, and feasibility of the suggested
goals.

If you are interested in proposing a goal, please make sure that before
the summit it is described in the tracking etherpad [1] and that you
have started a mailing list thread on the openstack-dev list about the
proposal so that everyone in the forum session [2] has an opportunity to
consider the details.  The forum session is only one step in the
selection process. See [3] for more details.

Doug

[1] https://etherpad.openstack.org/p/community-goals
[2] https://www.openstack.org/summit/berlin-2018/vote-for-speakers#/22814
[3] https://governance.openstack.org/tc/goals/index.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
openstack-sigs mailing list
openstack-s...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [Openstack-sigs] [openstack-dev] Open letter/request to TC candidates (and existing elected officials)

2018-09-13 Thread Fox, Kevin M
How about stating it this way:
It's the TC's responsibility to get it done, either by delegating the activity
or by doing it themselves. Either way, it needs to get done. It's a ball that
has been dropped too often in OpenStack's history. If no one is ultimately
responsible, balls will keep getting dropped.

Thanks,
Kevin

From: Matt Riedemann [mriede...@gmail.com]
Sent: Wednesday, September 12, 2018 4:00 PM
To: Dan Smith; Thierry Carrez
Cc: OpenStack Development Mailing List (not for usage questions); 
openstack-s...@lists.openstack.org; openstack-operators@lists.openstack.org
Subject: Re: [Openstack-sigs] [openstack-dev] Open letter/request to TC 
candidates (and existing elected officials)

On 9/12/2018 3:30 PM, Dan Smith wrote:
>> I'm just a bit worried to limit that role to the elected TC members. If
>> we say "it's the role of the TC to do cross-project PM in OpenStack"
>> then we artificially limit the number of people who would sign up to do
>> that kind of work. You mention Ildiko and Lance: they did that line of
>> work without being elected.
> Why would saying that we_expect_  the TC members to do that work limit
> such activities only to those that are on the TC? I would expect the TC
> to take on the less-fun or often-neglected efforts that we all know are
> needed but don't have an obvious champion or sponsor.
>
> I think we expect some amount of widely-focused technical or project
> leadership from TC members, and certainly that expectation doesn't
> prevent others from leading efforts (even in the areas of proposing TC
> resolutions, etc) right?

Absolutely. I'm not saying the cross-project project management should
be restricted to or solely the responsibility of the TC. It's obvious
there are people outside of the TC that have already been doing this -
and no it's not always elected PTLs either. What I want is elected TC
members to prioritize driving technical deliverables to completion based
on ranked input from operators/users/SIGs over philosophical debates and
politics/bureaucracy and help to complete the technical tasks if possible.

--

Thanks,

Matt

___
openstack-sigs mailing list
openstack-s...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [ironic] proposing metalsmith for inclusion into ironic governance

2018-08-28 Thread Fox, Kevin M
Might be a good option to plug into the Kubernetes Cluster API
(https://github.com/kubernetes-sigs/cluster-api) too.

Thanks,
Kevin

From: Mark Goddard [m...@stackhpc.com]
Sent: Tuesday, August 28, 2018 10:55 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [ironic] proposing metalsmith for inclusion into 
ironic governance

+1. I like it. Could also be a good fit for Kayobe's undercloud equivalent at 
some point.

On Tue, 28 Aug 2018 at 18:51, Jim Rollenhagen <j...@jimrollenhagen.com> wrote:
On Mon, Aug 27, 2018 at 12:09 PM, Dmitry Tantsur <dtant...@redhat.com> wrote:
Hi all,

I would like propose the metalsmith library [1][2] for inclusion into the bare 
metal project governance.

What it is and is not
-

Metalsmith is a library and CLI tool for using Ironic+Neutron to provision
bare metal nodes. It can be seen as a lightweight replacement for Nova when
Nova is too much. The primary use case is a single-tenant standalone installer.

Metalsmith is not a new service; it does not maintain any state, except for the
state maintained by Ironic and Neutron. Metalsmith is not and will not be a
replacement for Nova in any proper cloud scenario.

Metalsmith does have some overlap with Bifrost, with one important difference:
its primary feature is a mini-scheduler that allows picking a suitable bare
metal node for deployment.

I have a partial convergence plan as well! First, as part of this effort I'm 
working on missing features in openstacksdk, which is used in the OpenStack 
ansible modules, which are used in Bifrost. Second, I hope we can use it as a 
helper for making Bifrost do scheduling decisions.

Background
--

Metalsmith was born with the goal of replacing Nova in the TripleO undercloud.
Indeed, the undercloud uses only a small subset of Nova features, while having
features that conflict with Nova's design (for example, bypassing the scheduler
[3]).

We wanted to avoid putting a lot of provisioning logic into existing TripleO
components. So I wrote a library that does not carry any TripleO-specific
assumptions, but does allow addressing its needs.

Why under Ironic


I believe the goal of Metalsmith is fully aligned with what the Ironic team is 
doing around standalone deployment. I think Metalsmith can provide a nice entry 
point into standalone deployments for people who (for any reason) will not use
Bifrost. With this change I hope to get more exposure for it.

The library itself is small, documented [2], follows OpenStack practices and 
does not have particular operating requirements. There is nothing in it that is 
not familiar to the Ironic team members.

I agree with all of this, after reading the code/docs. +1 from me.

// jim


Please let me know if you have any questions or concerns.

Dmitry


[1] https://github.com/openstack/metalsmith
[2] https://metalsmith.readthedocs.io/en/latest/
[3] http://tripleo.org/install/advanced_deployment/node_placement.html

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls

2018-08-27 Thread Fox, Kevin M
I think in this context, kubelet without all of Kubernetes still has value: it
provides the abstraction layer that podman/paunch is being suggested to
handle.

It does not need the things you mention (networking, sidecars, scale-up/down,
etc.). You can use as little as you want.

For example, make a pod YAML per container with hostNetwork: true; it will then
run just as it would directly on the host. You can do just one container, no
sidecars necessary. Without the apiserver, it can't do scale-up/down even if
you wanted it to.

It provides declarative, YAML-based management of containers, similar to
paunch, so you can skip needing that component.

It also already provides CRI-O and Docker support via CRI.

It does provide a little bit of orchestration, in that you drive things with
declarative YAML. You drop a YAML file into /etc/kubernetes/manifests and it
will create the container; you delete the file and it removes the container; if
you change it, it will update the container. And if something goes wrong with
the container, it will try to get it back to the requested state automatically.
It will also recover the containers on reboot without help.
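
To make that concrete, here is a minimal sketch of the kind of static pod
manifest being described (the image name is a made-up placeholder, not anything
TripleO actually ships):

cat <<'EOF' > /etc/kubernetes/manifests/keystone-api.yaml
apiVersion: v1
kind: Pod
metadata:
  name: keystone-api
spec:
  hostNetwork: true            # run on the host network, like a host service
  containers:
  - name: keystone-api
    image: registry.example.com/keystone-api:latest   # placeholder image
EOF

A kubelet started with --pod-manifest-path=/etc/kubernetes/manifests picks the
file up, creates the container, reconciles it on change or failure, and
restores it after a reboot.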

Thanks,
Kevin


From: Sergii Golovatiuk [sgolo...@redhat.com]
Sent: Monday, August 27, 2018 3:46 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for nice   
API calls

Hi,

On Mon, Aug 27, 2018 at 12:16 PM, Rabi Mishra  wrote:
> On Mon, Aug 27, 2018 at 3:25 PM, Sergii Golovatiuk 
> wrote:
>>
>> Hi,
>>
>> On Mon, Aug 27, 2018 at 5:32 AM, Rabi Mishra  wrote:
>> > On Mon, Aug 27, 2018 at 7:31 AM, Steve Baker  wrote:
>> Steve mentioned kubectl (kubernetes CLI which communicates with
>
>
> Not sure what he meant. May be I miss something, but not heard of 'kubectl
> standalone', though he might have meant standalone k8s cluster on every node
> as you think.
>
>>
>> kube-api) not kubelet which is only one component of kubernetes. All
>> kubernetes components may be compiled as one binary (hyperkube) which
>> can be used to minimize footprint. Generated ansible for kubelet is
>> not enough as kubelet doesn't have any orchestration logic.
>
>
> What orchestration logic do we've with TripleO atm? AFAIK we've provide
> roles data for service placement across nodes, right?
> I see standalone kubelet as a first step for scheduling openstack services
> with in k8s cluster in the future (may be).

It's a half measure. I don't see any advantages in that move. We should
either adopt Kubernetes as a whole or not use its components at all, as the
maintenance cost will be expensive. Using kubelet requires resolving networking
communication, scale-up/down, sidecars, and inter-service dependencies.

>
>> >>
>> >> This was a while ago now so this could be worth revisiting in the
>> >> future.
>> >> We'll be making gradual changes, the first of which is using podman to
>> >> manage single containers. However podman has native support for the pod
>> >> format, so I'm hoping we can switch to that once this transition is
>> >> complete. Then evaluating kubectl becomes much easier.
>> >>
>> >>> Question. Rather then writing a middle layer to abstract both
>> >>> container
>> >>> engines, couldn't you just use CRI? CRI is CRI-O's native language,
>> >>> and
>> >>> there is support already for Docker as well.
>> >>
>> >>
>> >> We're not writing a middle layer, we're leveraging one which is already
>> >> there.
>> >>
>> >> CRI-O is a socket interface and podman is a CLI interface that both sit
>> >> on
>> >> top of the exact same Go libraries. At this point, switching to podman
>> >> needs
>> >> a much lower development effort because we're replacing docker CLI
>> >> calls.
>> >>
>> > I see good value in evaluating kubelet standalone and leveraging it's
>> > inbuilt grpc interfaces with cri-o (rather than using podman) as a long
>> > term
>> > strategy, unless we just want to provide an alternative to docker
>> > container
>> > runtime with cri-o.
>>
>> I see no value using kubelet without kubernetes IMHO.
>>
>>
>> >
>> >>>
>> >>>
>> >>> Thanks,
>> >>> Kevin
>> >>> 
>> >>> From: Jay Pipes [jaypi...@gmail.com]
>> >>> Sent: Thursday, August 23, 2018 8:36 AM
>> >>> To: openstack-dev@lists.openstack.org
>> >>> Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for
>> >>> nice
>> >>> API calls
>> >>>
>> >>> Dan, thanks for the details and answers. Appreciated.
>> >>>
>> >>> Best,
>> >>> -jay
>> >>>
>> >>> On 08/23/2018 10:50 AM, Dan Prince wrote:
>> 
>>  On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes  wrote:
>> >
>> > On 08/15/2018 04:01 PM, Emilien Macchi wrote:
>> >>
>> >> On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi > >> > wrote:
>> >>
>> >>   More seriously here: there is an ongoing effort to converge
>> >> the
>> >>   tools around containerization 

Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls

2018-08-23 Thread Fox, Kevin M
Or use kubelet in standalone mode. It can be configured for either CRI-O or
Docker. You can drive the static manifests from Heat/Ansible per host as normal
and it would be a step in the greater direction of getting to Kubernetes
without needing the whole thing at once, if that is the goal.

Thanks,
Kevin

From: Fox, Kevin M [kevin@pnnl.gov]
Sent: Thursday, August 23, 2018 9:22 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for nice API 
calls

Question: rather than writing a middle layer to abstract both container
engines, couldn't you just use CRI? CRI is CRI-O's native language, and there
is support already for Docker as well.

Thanks,
Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Thursday, August 23, 2018 8:36 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for nice API 
calls

Dan, thanks for the details and answers. Appreciated.

Best,
-jay

On 08/23/2018 10:50 AM, Dan Prince wrote:
> On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes  wrote:
>>
>> On 08/15/2018 04:01 PM, Emilien Macchi wrote:
>>> On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi >> <mailto:emil...@redhat.com>> wrote:
>>>
>>>  More seriously here: there is an ongoing effort to converge the
>>>  tools around containerization within Red Hat, and we, TripleO are
>>>  interested to continue the containerization of our services (which
>>>  was initially done with Docker & Docker-Distribution).
>>>  We're looking at how these containers could be managed by k8s one
>>>  day but way before that we plan to swap out Docker and join CRI-O
>>>  efforts, which seem to be using Podman + Buildah (among other things).
>>>
>>> I guess my wording wasn't the best but Alex explained way better here:
>>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52
>>>
>>> If I may have a chance to rephrase, I guess our current intention is to
>>> continue our containerization and investigate how we can improve our
>>> tooling to better orchestrate the containers.
>>> We have a nice interface (openstack/paunch) that allows us to run
>>> multiple container backends, and we're currently looking outside of
>>> Docker to see how we could solve our current challenges with the new tools.
>>> We're looking at CRI-O because it happens to be a project with a great
>>> community, focusing on some problems that we, TripleO have been facing
>>> since we containerized our services.
>>>
>>> We're doing all of this in the open, so feel free to ask any question.
>>
>> I appreciate your response, Emilien, thank you. Alex' responses to
>> Jeremy on the #openstack-tc channel were informative, thank you Alex.
>>
>> For now, it *seems* to me that all of the chosen tooling is very Red Hat
>> centric. Which makes sense to me, considering Triple-O is a Red Hat product.
>
> Perhaps a slight clarification here is needed. "Director" is a Red Hat
> product. TripleO is an upstream project that is now largely driven by
> Red Hat and is today marked as single vendor. We welcome others to
> contribute to the project upstream just like anybody else.
>
> And for those who don't know the history the TripleO project was once
> multi-vendor as well. So a lot of the abstractions we have in place
> could easily be extended to support distro specific implementation
> details. (Kind of what I view podman as in the scope of this thread).
>
>>
>> I don't know how much of the current reinvention of container runtimes
>> and various tooling around containers is the result of politics. I don't
>> know how much is the result of certain companies wanting to "own" the
>> container stack from top to bottom. Or how much is a result of technical
>> disagreements that simply cannot (or will not) be resolved among
>> contributors in the container development ecosystem.
>>
>> Or is it some combination of the above? I don't know.
>>
>> What I *do* know is that the current "NIH du jour" mentality currently
>> playing itself out in the container ecosystem -- reminding me very much
>> of the Javascript ecosystem -- makes it difficult for any potential
>> *consumers* of container libraries, runtimes or applications to be
>> confident that any choice they make towards one of the other will be the
>> *right* choice or even a *possible* choice next year -- or next week.

Re: [openstack-dev] [TripleO] podman: varlink interface for nice API calls

2018-08-23 Thread Fox, Kevin M
Question: rather than writing a middle layer to abstract both container
engines, couldn't you just use CRI? CRI is CRI-O's native language, and there
is support already for Docker as well.
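
To make the suggestion concrete, a rough sketch of driving both runtimes
through the same CRI interface with crictl (socket paths vary by setup, and the
Docker endpoint assumes kubelet's dockershim is running):

crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps
crictl --runtime-endpoint unix:///var/run/dockershim.sock ps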

Thanks,
Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Thursday, August 23, 2018 8:36 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO] podman: varlink interface for nice API 
calls

Dan, thanks for the details and answers. Appreciated.

Best,
-jay

On 08/23/2018 10:50 AM, Dan Prince wrote:
> On Wed, Aug 15, 2018 at 5:49 PM Jay Pipes  wrote:
>>
>> On 08/15/2018 04:01 PM, Emilien Macchi wrote:
>>> On Wed, Aug 15, 2018 at 5:31 PM Emilien Macchi >> > wrote:
>>>
>>>  More seriously here: there is an ongoing effort to converge the
>>>  tools around containerization within Red Hat, and we, TripleO are
>>>  interested to continue the containerization of our services (which
>>>  was initially done with Docker & Docker-Distribution).
>>>  We're looking at how these containers could be managed by k8s one
>>>  day but way before that we plan to swap out Docker and join CRI-O
>>>  efforts, which seem to be using Podman + Buildah (among other things).
>>>
>>> I guess my wording wasn't the best but Alex explained way better here:
>>> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-08-15.log.html#t2018-08-15T17:56:52
>>>
>>> If I may have a chance to rephrase, I guess our current intention is to
>>> continue our containerization and investigate how we can improve our
>>> tooling to better orchestrate the containers.
>>> We have a nice interface (openstack/paunch) that allows us to run
>>> multiple container backends, and we're currently looking outside of
>>> Docker to see how we could solve our current challenges with the new tools.
>>> We're looking at CRI-O because it happens to be a project with a great
>>> community, focusing on some problems that we, TripleO have been facing
>>> since we containerized our services.
>>>
>>> We're doing all of this in the open, so feel free to ask any question.
>>
>> I appreciate your response, Emilien, thank you. Alex' responses to
>> Jeremy on the #openstack-tc channel were informative, thank you Alex.
>>
>> For now, it *seems* to me that all of the chosen tooling is very Red Hat
>> centric. Which makes sense to me, considering Triple-O is a Red Hat product.
>
> Perhaps a slight clarification here is needed. "Director" is a Red Hat
> product. TripleO is an upstream project that is now largely driven by
> Red Hat and is today marked as single vendor. We welcome others to
> contribute to the project upstream just like anybody else.
>
> And for those who don't know the history the TripleO project was once
> multi-vendor as well. So a lot of the abstractions we have in place
> could easily be extended to support distro specific implementation
> details. (Kind of what I view podman as in the scope of this thread).
>
>>
>> I don't know how much of the current reinvention of container runtimes
>> and various tooling around containers is the result of politics. I don't
>> know how much is the result of certain companies wanting to "own" the
>> container stack from top to bottom. Or how much is a result of technical
>> disagreements that simply cannot (or will not) be resolved among
>> contributors in the container development ecosystem.
>>
>> Or is it some combination of the above? I don't know.
>>
>> What I *do* know is that the current "NIH du jour" mentality currently
>> playing itself out in the container ecosystem -- reminding me very much
>> of the Javascript ecosystem -- makes it difficult for any potential
>> *consumers* of container libraries, runtimes or applications to be
>> confident that any choice they make towards one of the other will be the
>> *right* choice or even a *possible* choice next year -- or next week.
>> Perhaps this is why things like openstack/paunch exist -- to give you
>> options if something doesn't pan out.
>
> This is exactly why paunch exists.
>
> Re, the podman thing I look at it as an implementation detail. The
> good news is that given it is almost a parity replacement for what we
> already use we'll still contribute to the OpenStack community in
> similar ways. Ultimately whether you run 'docker run' or 'podman run'
> you end up with the same thing as far as the existing TripleO
> architecture goes.
>
> Dan
>
>>
>> You have a tough job. I wish you all the luck in the world in making
>> these decisions and hope politics and internal corporate management
>> decisions play as little a role in them as possible.
>>
>> Best,
>> -jay
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 

Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-21 Thread Fox, Kevin M
There have been plenty of cross-project goals set forth by the TC and
implemented by the various projects, such as WSGI or Python 3. Those have been
worked on by each of the projects, ahead of some project-specific goals, by
devs interested in bettering OpenStack. Why is it so hard to believe that, if
the TC put out a request for a grander user/ops-supporting feature, the
community would step up? PTLs are supposed to be neutral on vendor-specific
issues and work for the betterment of the project.

I don't buy the complexity argument either. Other non-OpenStack projects are
implementing similar functionality with far less complexity. I attribute a lot
of that to differences in governance. Through governance we've made hard things
much harder. They can't be fixed until the governance issues are fixed first, I
think.

Thanks,
Kevin

From: Jeremy Stanley [fu...@yuggoth.org]
Sent: Tuesday, August 21, 2018 4:10 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [nova] [placement] placement below or beside 
compute after extraction?

On 2018-08-21 22:42:45 + (+), Fox, Kevin M wrote:
[...]
> Yes, I realize shared storage was Cinders priority and Nova's now
> way behind in implementing it. so it is kind of a priority to get
> it done. Just using it as an example though in the bigger context.
>
> Having operators approach individual projects stating their needs,
> and then having the individual projects fight it out for
> priorities isn't a good plan. The priorities should be prioritized
> at a higher level then projects so the operators/users needs can
> be seen in a global light, not just filtered though each projects
> views of things.
>
> Yes, some folks volunteer to work on the things that they want to
> work on. Thats great. But some folks volunteer to work on
> priorities to help users/operators in general. Getting clear,
> unbiased priorities for them is really important.
[...]

I remain unconvinced that if someone (the TC, the OSF board, the now
defunct product management nee hidden influencers working group)
declared for example that secrets management was a higher priority
than shared storage, that any significant number of people who could
work on the latter would work on the former instead.

The TC has enough trouble getting developers in different projects
to cooperate on more than a couple of prioritized cross-project
goals per cycle. The OSF board has repeatedly shown its members are
rarely in the positions within member companies that they have any
influence over what upstream features/projects those companies work
on. The product management working group thought they had that
influence over the companies in which they worked, but were
similarly unable to find developers who could make progress toward
their identified goals.

OpenStack is an extremely complex (arguably too complex) combination
of software, for which there are necessarily people with very strong
understanding of very specific pieces and at best a loose
understanding of the whole. In contrast, there are few people who
have both the background and interest (much less leeway from their
employers in the case of paid contributors) to work on any random
feature of any random project and be quickly effective at it. If
you're familiar with, say, Linux kernel development, you see very
much the same sort of specialization with developers who are
familiar with specific kernel subsystems and vanishingly few who can
efficiently (that is to say without investing lots of time to come
up to speed) implement novel features in multiple unrelated
subsystems.

We'll all continue to work on get better at this, but hard things
are... well... hard.
--
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-21 Thread Fox, Kevin M
The things you are pushing back against are the very same things that other
folks are trying to do at a higher level.

You want control so you can prioritize the things you feel your users are most 
interested in. Folks in other projects have decided the same. Really, where 
should the priorities come from?

You are concerned another project's priorities will trump your own. Legitimate.
But have you considered that maybe other priorities, not just Nova's, are
actually more important in the grand scheme of OpenStack? What entity in
OpenStack decides which operators'/users' needs get what priority? Nova
currently thinks it knows what's best. Does it really?

I've wanted shared storage for a long, long time. But I have also wanted proper
secret management, and between the two, I'd much rather have good secret
management. Where is that vote in things? How do I even express that? And to
whom?

Yes, I realize shared storage was Cinder's priority and Nova is now way behind
in implementing it, so it is kind of a priority to get it done. I'm just using
it as an example in the bigger context.

Having operators approach individual projects stating their needs, and then
having the individual projects fight it out for priorities, isn't a good plan.
The priorities should be set at a higher level than projects so the
operators'/users' needs can be seen in a global light, not just filtered
through each project's view of things.

Yes, some folks volunteer to work on the things that they want to work on.
That's great. But some folks volunteer to work on priorities to help
users/operators in general. Getting clear, unbiased priorities for them is
really important.

I'm not trying to pick on you here. I truly believe you are trying to do the
right thing for your users/operators, and for that, I thank you. But I'm a
user/operator too and have had a lot of issues ignored because this kind of
governance issue prevents traction on my own user/operator needs. And I'm sure
there are others besides me too. It's not due to malice. But the governance
structure we have in place is letting important things slip through the cracks
because it has set up walls that make it hard to see the bigger picture. This
email thread has been exposing some of those walls, and that's why we've been
talking about them: to try and fix it.

Thanks,
Kevin


From: melanie witt [melwi...@gmail.com]
Sent: Tuesday, August 21, 2018 3:05 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] [nova] [placement] placement below or beside 
compute after extraction?

On Tue, 21 Aug 2018 16:41:11 -0400, Doug Hellmann wrote:
> Excerpts from melanie witt's message of 2018-08-21 12:53:43 -0700:
>> On Tue, 21 Aug 2018 06:50:56 -0500, Matt Riedemann wrote:
>>> At this point, I think we're at:
>>>
>>> 1. Should placement be extracted into it's own git repo in Stein while
>>> nova still has known major issues which will have dependencies on
>>> placement changes, mainly modeling affinity?
>>>
>>> 2. If we extract, does it go under compute governance or a new project
>>> with a new PTL.
>>>
>>> As I've said, I personally believe that unless we have concrete plans
>>> for the big items in #1, we shouldn't hold up the extraction. We said in
>>> Dublin we wouldn't extract to a new git repo in Rocky but we'd work up
>>> to that point so we could do it in Stein, so this shouldn't surprise
>>> anyone. The actual code extraction and re-packaging and all that is
>>> going to be the biggest technical issue with all of this, and will
>>> likely take all of stein to complete it after all the bugs are shaken out.
>>>
>>> For #2, I think for now, in the interim, while we deal with the
>>> technical headache of the code extraction itself, it's best to leave the
>>> new repo under compute governance so the existing team is intact and we
>>> don't conflate the people issue with the technical issue at the same
>>> time. Get the hard technical part done first, and then we can move it
>>> out of compute governance. Once it's in its own git repo, we can change
>>> the core team as needed but I think it should be initialized with
>>> existing nova-core.
>>
>> I'm in support of extracting placement into its own git repo because
>> Chris has done a lot of work to reduce dependencies in placement and
>> moving it into its own repo would help in not having to keep chasing
>> that. As has been said before, I think all of us agree that placement
>> should be separate as an end goal. The question is when to fully
>> separate it from governance.
>>
>> It's true that we don't have concrete plans for affinity modeling and
>> shared storage modeling. But I think we do have concrete plans for vGPU
>> enhancements (being able to have different vGPU types on one compute
>> host and adding support for traits). vGPU support is an important and
>> highly sought after feature for operators and users, as we witnessed at
>> the last Summit 

Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-21 Thread Fox, Kevin M
Heh. And some things don't change...

Having a large project such as OpenStack, made up of large numbers of
volunteers, each with their own desires, means it will be impossible to make
everyone happy all of the time.

For the good of the community, the community needs to decide on a common 
direction, and sometimes individuals need to be asked to go against their own 
desires for the betterment of the entire community. Yes, that risks an 
individual contributor leaving. But if it really is in the best interest of the 
community, others will continue on.

We've ignored that for so long that we've built a huge system on letting
individuals set their own course, without common direction and with their own
desires. The projects don't integrate as well as they should, the whole of
OpenStack gets overly complex and unwieldy to use, or worse, large gaps in
user-needed functionality appear, and users end up leaving.

I'm really sure at this point that you can't have a project as large as
OpenStack without leadership setting a course and sometimes making hard choices
for the betterment of the whole. That doesn't mean a benevolent dictator. But
our self-governed model with elected officials should be a good balance. If
they are too unreasonable, they don't get re-elected. But not leading isn't an
option anymore either.

Thanks,
Kevin

From: Jeremy Stanley [fu...@yuggoth.org]
Sent: Tuesday, August 21, 2018 9:53 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [nova] [placement] placement below or beside 
compute after extraction?

On 2018-08-21 16:38:41 + (+), Fox, Kevin M wrote:
[...]
> You need someone like the TC to be able to step in, in those cases
> to help sort that kind of issue out. In the past, the TC was not
> willing to do so. My gut feeling though is that is finally
> changing.
[...]

To be clear, it's not that TC members are unwilling to step into
these discussions. Rather, it's that most times when a governing
body has to tell volunteers to do something they don't want to do,
it tends to not be particularly helpful in solving the underlying
disagreement.
--
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] [nova] [placement] placement below or beside compute after extraction?

2018-08-21 Thread Fox, Kevin M
So, Nova's worried about having to be in the boat many of us have been in,
where they depend on another project that doesn't recognize their important use
cases? Heh...

OK, so, yeah, that is a legitimate concern. You need someone like the TC to be
able to step in, in those cases, to help sort that kind of issue out. In the
past, the TC was not willing to do so. My gut feeling, though, is that is
finally changing. This is a bigger problem than just Nova, so getting a proper
procedure in place to handle this is really important for OpenStack in general.
Let's solve that rather than one-offing a solution by keeping placement under
Nova's control.

How do I say this nicely... better to talk about it instead of continuing to
ignore the hard issues, I guess. Nova has been very self-centered compared to
other projects in the tent. This thread feels like more of the same. If
OpenStack as a whole is to get healthier, we need to stop having selfish
projects and encourage ones that help each other.

I think splitting placement out from Nova's control has at least two benefits:
1. Nova has complained a lot about having too much code, so they can't take
other projects' requests. This reduces Nova's code base so they can focus on
their core functionality and, more importantly, their users' use cases. This
will make OpenStack as a whole healthier.
2. It reduces Nova's special project status, leveling the playing field a bit.
Nova can help influence the TC towards solving difficult cross-project problems
along with the rest of us, rather than going off and doing things on their own.

Thanks,
Kevin

From: Matt Riedemann [mriede...@gmail.com]
Sent: Monday, August 20, 2018 6:23 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] [nova] [placement] placement below or beside 
compute after extraction?

On 8/20/2018 8:08 PM, Matt Riedemann wrote:
> On 8/20/2018 6:42 PM, Ed Leafe wrote:
>> It was said in the #openstack-tc discussions, but for those on the
>> mailing list, the biggest concern among the Nova core developers is
>> that the consensus among Placement cores will certainly not align with
>> the needs of Nova. I personally think that's ridiculous, and, as one
>> of the very opinionated people involved, a bit insulting. No one wants
>> to see either Nova or Placement to fail.
>
> I believe you're paraphrasing what I said, and I never said I was
> speaking for all nova core developers. I don't think anyone working on
> placement would intentionally block things nova needs or try to see nova
> fail.

Here is an example of the concern. In Sydney we talked about adding
types to the consumers resource in placement so that nova could use
placement for counting quotas [1]. Chris considered it a weird hack but
it's pretty straight-forward from a nova consumption point of view. So
if placement were separately governed with let's say Chris as PTL, would
something like that become a holy war type issue because it's "weird"
and convolutes the desire for a minimalist API? I think Chris' stance on
this particular item has softened over time as more of a "meh" but it's
a worry about extracting with a separate team that is against changes
because they are not ideal for Placement yet are needed for a consumer
of Placement. I understand this is likely selfish on the part of the
nova people that want this (including myself) and maybe close-minded to
alternative solutions to the problem (I'm not sure if it's all been
thought out end-to-end yet, Mel would likely know the latest on this
item). Anyway, I like to have examples when I'm stating something to
gain understanding, so that's what I'm trying to do here - explain, with
an example, what I said in the tc channel discussion today.

[1] Line 55 https://etherpad.openstack.org/p/SYD-forum-nova-placement-update

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [PTL][TC] Stein Cycle Goals

2018-08-13 Thread Fox, Kevin M
Since the upgrade checking has not been written yet, now would be a good time
to unify it, so you upgrade-check your OpenStack upgrade rather than status
check nova, status check neutron, status check glance, status check cinder...
ad nauseam.
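
To make the contrast concrete (only the nova command existed at the time; the
other per-service names and the unified command are hypothetical illustrations
of the two approaches):

# One check command per service:
nova-status upgrade check
cinder-status upgrade check    # hypothetical
glance-status upgrade check    # hypothetical
neutron-status upgrade check   # hypothetical
# ...versus a single unified entry point:
openstack upgrade check        # hypothetical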

Thanks,
Kevin

From: Sean McGinnis [sean.mcgin...@gmx.com]
Sent: Monday, August 13, 2018 8:22 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [PTL][TC] Stein Cycle Goals

We now have two cycle goals accepted for the Stein cycle. I think both are very
beneficial goals to work towards, so personally I am very happy with where we
landed on this.

The two goals, with links to their full descriptions and nitty gritty details,
can be found here:

https://governance.openstack.org/tc/goals/stein/index.html

Goals
=
Here are some high level details on the goals.

Run under Python 3 by default (python3-first)
-
In Pike we had a goal for all projects to support Python 3.5. As a continuation
of that effort, and in preparation for the EOL of Python 2, we now want to look
at all of the ancillary things around projects and make sure that we are using
Python 3 everywhere except those jobs explicitly intended for testing Python 2
support.

This means all docs, linters, and other tools and utility jobs we use should be
run using Python 3.

https://governance.openstack.org/tc/goals/stein/python3-first.html

Thanks to Doug Hellmann, Nguyễn Trí Hải, Ma Lei, and Huang Zhiping for
championing this goal.

Support Pre Upgrade Checks (upgrade-checkers)
-
One of the hot topics we've been discussing for some time at Forum and PTG
events has been making upgrades better. To that end, we want to add tooling for
each service to provide an "upgrade checker" tool that can check for various
known issues so we can either give operators some assurance that they are ready
to upgrade, or to let them know if some step was overlooked that will need to
be done before attempting the upgrade.

This goal follows the Nova `nova-status upgrade check` command precedent to
make it a consistent capability for each service. The checks should look for
things like missing or changed configuration options, incompatible object
states, or other conditions that could lead to failures upgrading that project.

More details can be found in the goal:

https://governance.openstack.org/tc/goals/stein/upgrade-checkers.html

Thanks to Matt Riedemann for championing this goal.

Schedule

We hope to have all projects complete these goals by the week of March 4, 2019:

https://releases.openstack.org/stein/schedule.html

This is the same week as the Stein-3 milestone, as well as Feature Freeze and
client lib freeze.

Future Goals

We welcome any ideas for future cycle goals. Ideally these should be things
that can actually be accomplished within one development cycle and would have a
positive, and hopefully visible, impact for users and operators.

Feel free to pitch any ideas here on the mailing list or drop by the
#openstack-tc channel at any point.

Thanks!

--
Sean (smcginnis)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-19 Thread Fox, Kevin M
The primary issue, I think, is that the Nova folks think there is too much in
Nova already.

So there are probably more features that could be added to make it more in line
with vCenter, and more features to make it more functionally like AWS. And at
this point, neither is probably easy to get in.

Until Nova changes this stance, they are kind of forcing an either/or (or
neither), as Nova's position in the OpenStack community currently drives
decisions in most of the other OpenStack projects.

I'm not laying blame on anyone. They have a hard job to do and not enough
people to do it. That forces less than ideal solutions.

Not really sure how to resolve this.

Deciding "we will support both" is a good first step, but there are other big 
problems like this that need solving before it can be more then words on a page.

Thanks,
Kevin


From: Thierry Carrez [thie...@openstack.org]
Sent: Thursday, July 19, 2018 5:24 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

Zane Bitter wrote:
> [...]
>> And I'm not convinced that's an either/or choice...
>
> I said specifically that it's an either/or/and choice.

I was speaking more about the "we need to pick between two approaches,
let's document them" that the technical vision exercise started as.
Basically I mean I'm missing clear examples of where pursuing AWS would
mean breaking vCenter.

> So it's not a binary choice but it's very much a ternary choice IMHO.
> The middle ground, where each project - or even each individual
> contributor within a project - picks an option independently and
> proceeds on the implicit assumption that everyone else chose the same
> option (although - spoiler alert - they didn't)... that's not a good
> place to be.

Right, so I think I'm leaning for an "and" choice.

Basically OpenStack wants to be an AWS, but ended up being used a lot as
a vCenter (for multiple reasons, including the limited success of
US-based public cloud offerings in 2011-2016). IMHO we should continue
to target an AWS, while doing our best to not break those who use it as
a vCenter. Would explicitly acknowledging that (we still want to do an
AWS, but we need to care about our vCenter users) get us the alignment
you seek ?

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-17 Thread Fox, Kevin M
Inlining with KF> 

From: Thierry Carrez [thie...@openstack.org]
Sent: Tuesday, July 17, 2018 7:44 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

Finally found the time to properly read this...

Zane Bitter wrote:
> [...]
> We chose to add features to Nova to compete with vCenter/oVirt, and not
> to add features the would have enabled OpenStack as a whole to compete
> with more than just the compute provisioning subset of EC2/Azure/GCP.

Could you give an example of an EC2 action that would be beyond the
"compute provisioning subset" that you think we should have built into
Nova ?

KF> How about this one... 
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html
 :/
KF> IMO, its lack really crippled the use case. I've been harping on this one 
for over 4 years now...

> Meanwhile, the other projects in OpenStack were working on building the
> other parts of an AWS/Azure/GCP competitor. And our vague one-sentence
> mission statement allowed us all to maintain the delusion that we were
> all working on the same thing and pulling in the same direction, when in
> truth we haven't been at all.

Do you think that organizing (tying) our APIs along [micro]services,
rather than building a sanely-organized user API on top of a
sanely-organized set of microservices, played a role in that divide ?

KF> Slightly off question, I think. A combination of microservice APIs + no API
team to look at the APIs as a whole allowed use cases to slip by.
KF> Microservice APIs might have been OK with overall shepherds. Though maybe
that is what you were implying with 'sanely'?

> We can decide that we want to be one, or the other, or both. But if we
> don't all decide together then a lot of us are going to continue wasting
> our time working at cross-purposes.

If you are saying that we should choose between being vCenter or AWS, I
would definitely say the latter. But I'm still not sure I see this issue
in such a binary manner.

KF> No, he said one, or the other, or both. But the lack of decision allowed
some teams to prioritize one without realizing its effects on others.

KF> There are multiple vCenter replacements in the open source world. For
example, oVirt. It's already way better at it than Nova.
KF> There is not a replacement for AWS in the open source world. The hope was
OpenStack would be that, but others in the community did not agree with that
vision.
KF> Now that the community has changed drastically, what is the feeling now? We
must decide.
KF> Kubernetes has provided a solid base for doing cloudy things. Which is
great. But the organization does not care to replace other AWS/Azure/etc.
services because there are companies interested in selling k8s on top of
AWS/Azure/etc. and integrating with the other services they already provide.
KF> So, there is an opportunity in the open source community still for someone
to write an open source AWS alternative. VMs are just a very small part of it.

KF> Is that OpenStack, or some other project?

Imagine if (as suggested above) we refactored the compute node and give
it a user API, would that be one, the other, both ? Or just a sane
addition to improve what OpenStack really is today: a set of open
infrastructure components providing different services with each their
API, with slight gaps and overlaps between them ?

Personally, I'm not very interested in discussing what OpenStack could
have been if we started building it today. I'm much more interested in
discussing what to add or change in order to make it usable for more use
cases while continuing to serve the needs of our existing users. And I'm
not convinced that's an either/or choice...

KF> Sometimes it is time to hit the reset button because either:
 a> you know something really important that you didn't know when you built it
 b> the world changed and you can no longer go down the path you were on
 c> the technical debt has grown so large that it is cheaper to start again

KF> OpenStack's current architectural implementation really feels 1.0-ish to
me, and all of those reasons are relevant.
KF> I'm not saying we should just blindly hit the reset button, but I think it
should be discussed/evaluated. Leaving it alone may have too much of a dragging
effect on contribution.

KF> I'm also not saying we leave existing users without a migration path 
either. Maybe an OpenStack 2.0 with migration tools would be an option.

KF> OpenStack's architecture is really hamstringing it at this point. If it
wants to make progress chipping away at AWS, it can't keep trying to build on
top of the very narrow commons OpenStack provides at present and the
boilerplate convention of: 1, start a new project; 2, create a SQL database; 3,
create RabbitMQ queues; 4, create an API service; 5, create a scheduler
service; 6, create agents; 7, create Keystone endpoints; 8, get it wrapped in
32 different deployment tools; 9, etc.

Thanks,

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-05 Thread Fox, Kevin M
Interesting. Thanks for the link. :)

There is a lot of stuff there, so I'm not sure it covers the part I'm talking
about without more review, but if it doesn't, it would be pretty easy to add by
the looks of it.

Kevin

From: Jeremy Stanley [fu...@yuggoth.org]
Sent: Thursday, July 05, 2018 10:47 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

On 2018-07-05 17:30:23 + (+), Fox, Kevin M wrote:
[...]
> Deploying k8s doesn't need a general solution to deploying generic
> base OS's. Just enough OS to deploy K8s and then deploy everything
> on top in containers. Deploying a seed k8s with minikube is pretty
> trivial. I'm not suggesting a solution here to provide generic
> provisioning to every use case in the datacenter. But enough to
> get a k8s based cluster up and self hosted enough where you could
> launch other provisioning/management tools in that same cluster,
> if you need that. It provides a solid base for the datacenter on
> which you can easily add the services you need for dealing with
> everything.
>
> All of the microservices I mentioned can be wrapped up in a single
> helm chart and deployed with a single helm install command.
>
> I don't have permission to release anything at the moment, so I
> can't prove anything right now. So, take my advice with a grain of
> salt. :)
[...]

Anything like http://www.airshipit.org/ ?
--
Jeremy Stanley



Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-05 Thread Fox, Kevin M
I use RDO in production. It's pretty far from Red Hat OpenStack, though it's been 
a while since I tried the TripleO part of RDO. Is it pretty well integrated 
now? Similar to Red Hat OpenStack? Or is it more Fedora-like than CentOS-like?

Thanks,
Kevin

From: Dmitry Tantsur [dtant...@redhat.com]
Sent: Thursday, July 05, 2018 11:17 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26




On Thu, Jul 5, 2018, 19:31 Fox, Kevin M <kevin@pnnl.gov> wrote:
We're pretty far into a tangent...

/me shrugs. I've done it. It can work.

Some things you're right about: deploying k8s is more work than deploying 
Ansible. But what I said depends on context. If your goal is to deploy/manage 
k8s, then having to learn how to use k8s is not a big ask; adding a different 
tool such as Ansible is an extra cognitive dependency. Deploying k8s doesn't 
need a general solution to deploying generic base OS's. Just enough OS to deploy 
k8s, and then deploy everything on top in containers. Deploying a seed k8s with 
minikube is pretty trivial. I'm not suggesting a solution here to provide 
generic provisioning to every use case in the datacenter, but enough to get a 
k8s-based cluster up and self-hosted enough that you could launch other 
provisioning/management tools in that same cluster, if you need that. It 
provides a solid base for the datacenter on which you can easily add the 
services you need for dealing with everything.

All of the microservices I mentioned can be wrapped up in a single helm chart 
and deployed with a single helm install command.

I don't have permission to release anything at the moment, so I can't prove 
anything right now. So, take my advice with a grain of salt. :)

Switching gears: you said why would users use LFS when they can use a distro, 
so why use OpenStack without a distro? I'd say that today, unless you are paying 
a lot, there isn't really an equivalent distro that isn't almost as much effort 
as LFS when you consider day-2 ops. To compare with Red Hat again, we have a 
RHEL (Red Hat OpenStack) and a Rawhide (devstack) but no equivalent of CentOS. 
Though I think TripleO has been making progress on this front...

It's RDO that you're looking for (the equivalent of CentOS). TripleO is an 
installer project, not a distribution.


Anyway, this thread is, I think, two tangents away from the original topic now. 
If folks are interested in continuing this discussion, let's open a new thread.

Thanks,
Kevin


From: Dmitry Tantsur [dtant...@redhat.com]
Sent: Wednesday, July 04, 2018 4:24 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

Tried hard to avoid this thread, but this message is so much wrong..

On 07/03/2018 09:48 PM, Fox, Kevin M wrote:
> I don't dispute trivial, but a self-hosting k8s on bare metal is not 
> incredibly hard. In fact, it is easier than you might think. k8s is a 
> platform for deploying/managing services. Guess what you need to provision 
> bare metal? Just a few microservices. A DHCP service: dhcpd in a DaemonSet 
> works well. Some PXE infrastructure: pixiecore with a simple HTTP backend 
> works pretty well in practice. A service to provide installation 
> instructions: an nginx server handing out kickstart files, for example. And a 
> place to fetch rpms from in case you don't have internet access or want to 
> ensure uniformity: an nginx server with a mirrored yum repo. It's even 
> possible to seed it on minikube and sluff it off to its own cluster.
>
> The main hard part about it is that currently no one is shipping a reference 
> implementation of the above. That may change...
>
> It is certainly much, much easier than deploying enough OpenStack to get a 
> self-hosting ironic working.

Side note: no, it's not. What you describe is similarly hard to installing
standalone ironic from scratch and much harder than using bifrost for
everything. Especially when you try to do it in production. Especially with
unusual operating requirements ("no TFTP servers on my network").

Also, sorry, I cannot resist:
"Guess what you need to orchestrate containers? Just a few things. A container
runtime. Docker works well. Some remote execution tooling. Ansible works pretty
well in practice. It is certainly much, much easier than deploying enough k8s to
get self-hosting container orchestration working."

Such oversimplifications won't bring us anywhere. Sometimes things are hard
because they ARE hard. Where are the people complaining that installing a full
GNU/Linux distribution from upstream tarballs is hard? How many operators here
use LFS as their distro? If we are okay with using a distro for GNU/Linux, why
does using a distro for OpenStack cause so much contention?

>

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-05 Thread Fox, Kevin M
We're pretty far into a tangent...

/me shrugs. I've done it. It can work.

Some things you're right about: deploying k8s is more work than deploying 
Ansible. But what I said depends on context. If your goal is to deploy/manage 
k8s, then having to learn how to use k8s is not a big ask; adding a different 
tool such as Ansible is an extra cognitive dependency. Deploying k8s doesn't 
need a general solution to deploying generic base OS's. Just enough OS to deploy 
k8s, and then deploy everything on top in containers. Deploying a seed k8s with 
minikube is pretty trivial. I'm not suggesting a solution here to provide 
generic provisioning to every use case in the datacenter, but enough to get a 
k8s-based cluster up and self-hosted enough that you could launch other 
provisioning/management tools in that same cluster, if you need that. It 
provides a solid base for the datacenter on which you can easily add the 
services you need for dealing with everything.

All of the microservices I mentioned can be wrapped up in a single helm chart 
and deployed with a single helm install command.
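
As a purely illustrative sketch of what that could look like (the repo URL, 
chart name, and values below are made up, since nothing has been released):

helm repo add metal https://example.org/charts
helm install metal/baremetal-bootstrap --name provisioner --namespace provisioning \
  --set dhcp.interface=eth0 \
  --set pxe.httpBackend=http://nginx.provisioning.svc/v1/boot \
  --set mirror.enabled=true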

I don't have permission to release anything at the moment, so I can't prove 
anything right now. So, take my advice with a grain of salt. :)

Switching gears: you said why would users use LFS when they can use a distro, 
so why use OpenStack without a distro? I'd say that today, unless you are paying 
a lot, there isn't really an equivalent distro that isn't almost as much effort 
as LFS when you consider day-2 ops. To compare with Red Hat again, we have a 
RHEL (Red Hat OpenStack) and a Rawhide (devstack) but no equivalent of CentOS. 
Though I think TripleO has been making progress on this front...

Anyway, this thread is, I think, two tangents away from the original topic now. 
If folks are interested in continuing this discussion, let's open a new thread.

Thanks,
Kevin


From: Dmitry Tantsur [dtant...@redhat.com]
Sent: Wednesday, July 04, 2018 4:24 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

Tried hard to avoid this thread, but this message is so much wrong..

On 07/03/2018 09:48 PM, Fox, Kevin M wrote:
> I don't dispute trivial, but a self-hosting k8s on bare metal is not 
> incredibly hard. In fact, it is easier than you might think. k8s is a 
> platform for deploying/managing services. Guess what you need to provision 
> bare metal? Just a few microservices. A DHCP service: dhcpd in a DaemonSet 
> works well. Some PXE infrastructure: pixiecore with a simple HTTP backend 
> works pretty well in practice. A service to provide installation 
> instructions: an nginx server handing out kickstart files, for example. And a 
> place to fetch rpms from in case you don't have internet access or want to 
> ensure uniformity: an nginx server with a mirrored yum repo. It's even 
> possible to seed it on minikube and sluff it off to its own cluster.
>
> The main hard part about it is that currently no one is shipping a reference 
> implementation of the above. That may change...
>
> It is certainly much, much easier than deploying enough OpenStack to get a 
> self-hosting ironic working.

Side note: no, it's not. What you describe is similarly hard to installing
standalone ironic from scratch and much harder than using bifrost for
everything. Especially when you try to do it in production. Especially with
unusual operating requirements ("no TFTP servers on my network").

Also, sorry, I cannot resist:
"Guess what you need to orchestrate containers? Just a few things. A container
runtime. Docker works well. Some remote execution tooling. Ansible works pretty
well in practice. It is certainly much, much easier than deploying enough k8s to
get self-hosting container orchestration working."

Such oversimplifications won't bring us anywhere. Sometimes things are hard
because they ARE hard. Where are the people complaining that installing a full
GNU/Linux distribution from upstream tarballs is hard? How many operators here
use LFS as their distro? If we are okay with using a distro for GNU/Linux, why
does using a distro for OpenStack cause so much contention?

>
> Thanks,
> Kevin
>
> 
> From: Jay Pipes [jaypi...@gmail.com]
> Sent: Tuesday, July 03, 2018 10:06 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26
>
> On 07/02/2018 03:31 PM, Zane Bitter wrote:
>> On 28/06/18 15:09, Fox, Kevin M wrote:
>>>* made the barrier to testing/development as low as 'curl
>>> http://..minikube; minikube start' (this spurs adoption and
>>> contribution)
>>
>> That's not so different from devstack though.
>>
>>>* not having large silo's in deployment projects allowed better
>>> communication on common tooling.
>>>* Operat

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-03 Thread Fox, Kevin M
Replying inline in outlook. Sorry. :( Prefixing with KF>

-Original Message-
From: Jay Pipes [mailto:jaypi...@gmail.com] 
Sent: Tuesday, July 03, 2018 1:04 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

I'll answer inline, so that it's easier to understand what part of your 
message I'm responding to.

On 07/03/2018 02:37 PM, Fox, Kevin M wrote:
> Yes/no on the vendor distro thing. They do provide a lot of options, but they 
> also provide a fully k8s tested/provided route too. kubeadm. I can take linux 
> distro of choice, curl down kubeadm and get a working kubernetes in literally 
> a couple minutes.

How is this different from devstack?

With both approaches:

* Download and run a single script
* Any sort of networking outside of super basic setup requires manual 
intervention
* Not recommended for "production"
* Require workarounds when running as not-root

Is it that you prefer the single Go binary approach of kubeadm which 
hides much of the details that devstack was designed to output (to help 
teach people what's going on under the hood)?

KF> So... go to https://docs.openstack.org/devstack/latest/ and one of the 
first things you see is a bright red warning box: don't run it on your laptop. 
It also targets git master rather than production releases, so it is more 
targeted at developing OpenStack itself than at developers developing their 
software to run in OpenStack. My common use case was developing stuff to run in 
it, not developing OpenStack itself. Minikube makes this case first class. 
Also, devstack requires a Linux box to deploy it; minikube works on macOS and 
Windows as well. Yeah, not really an easy thing to do, but it does it pretty 
well. I did a presentation on Kubernetes once, put up a slide on minikube, and 
5 slides later one of the physicists in the room said: by the way, I have it 
working on my Mac (personal laptop). I'm not trying to slam devstack. It really 
is a good piece of software, but it still has a way to go to get to that point. 
And lastly, minikube's default bootstrapper these days is kubeadm, so the 
Kubernetes you get to develop against is REALLY close to one you could deploy 
yourself at scale in VMs or on bare metal. The tools/containers it uses are 
byte-identical; they will behave the same. Devstack is very different from most 
production deployments.
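
As a rough sketch of the minikube flow I mean (the download URL is minikube's 
published release location; the bootstrapper flag is optional since kubeadm is 
already the default in recent releases):

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube && sudo mv minikube /usr/local/bin/
minikube start --bootstrapper=kubeadm  # same kubeadm-built control plane you would run at scale
kubectl get nodes                      # develop against it with the exact same kubectl/API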

> No compiling anything or building containers. That is what I mean when
> I say they have a product.

What does devstack compile?

By "compile" are you referring to downloading code from git 
repositories? Or are you referring to the fact that with kubeadm you are 
downloading a Go binary that hides the downloading and installation of 
all the other Kubernetes images for you [1]?

KF> The Go binary orchestrates a bit, but for the most part you get one system 
package installed (or one statically linked binary): kubelet. From there, 
you switch to using prebuilt containers for all the other services. Those 
binaries have been through a build/test/release pipeline and are guaranteed 
to be the same between all the nodes you install them on. It is easy to run a 
deployment on your test cluster and ensure it works the same way on your 
production system. You can do the same with, say, rpms, but then you need to 
build up plumbing to mirror your rpms and plumbing to promote from testing to 
production, etc. Then you have to configure all the nodes not to accidentally 
pull from a remote rpm mirror. Some of the system updates try really hard to 
re-enable that. :/ K8s gives you easy testing/promotion by the way they tag 
things and prebuild stuff for you. So you just tweak your k8s version and off 
you go. You don't have to mirror if you don't want to. Lower barrier to entry 
there.

[1] 
https://github.com/kubernetes/kubernetes/blob/8d73473ce8118422c9e0c2ba8ea669ebbf8cee1c/cmd/kubeadm/app/cmd/init.go#L267
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/images/images.go#L63

> Other vendors provide their own builds, release tooling, or config 
> management integration, which is why that list is so big. But it is up 
> to the Operators to decide the route, and due to k8s having a very clean, 
> easy, low bar for entry it sets the bar for the other products to be 
> even better.

I fail to see how devstack and kubeadm aren't very much in the same vein?

KF> You've switched from comparing devstack and minikube to comparing devstack 
and kubeadm. kubeadm is plumbing to build dev, test, and production systems. 
Devstack is very much only ever intended for the dev phase and, like I said 
before, is a little more focused on the dev of OpenStack itself, not on 
developing code running in it. Minikube is really intended to allow devs to 
develop software to run inside k8s and to behave as much as possible like a 
full k8s cluster.

> The reason people started adopting clouds was because it was very quick to 
> request resou

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-03 Thread Fox, Kevin M
I don't dispute trivial, but a self-hosting k8s on bare metal is not incredibly 
hard. In fact, it is easier than you might think. k8s is a platform for 
deploying/managing services. Guess what you need to provision bare metal? Just 
a few microservices. A DHCP service: dhcpd in a DaemonSet works well. Some PXE 
infrastructure: pixiecore with a simple HTTP backend works pretty well in 
practice. A service to provide installation instructions: an nginx server 
handing out kickstart files, for example. And a place to fetch rpms from in 
case you don't have internet access or want to ensure uniformity: an nginx 
server with a mirrored yum repo. It's even possible to seed it on minikube and 
sluff it off to its own cluster.
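
As a sketch only (the manifests named below are hypothetical, which is exactly 
the "no reference implementation" problem mentioned next):

kubectl create namespace provisioning
kubectl -n provisioning apply -f dhcpd-daemonset.yaml   # dhcpd on host networking, one pod per provisioning node
kubectl -n provisioning apply -f pixiecore.yaml         # PXE service pointed at a simple HTTP backend
kubectl -n provisioning apply -f nginx.yaml             # serves kickstart files and a mirrored yum repo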

The main hard part about it is that currently no one is shipping a reference 
implementation of the above. That may change...

It is certainly much, much easier than deploying enough OpenStack to get a 
self-hosting ironic working.

Thanks,
Kevin 


From: Jay Pipes [jaypi...@gmail.com]
Sent: Tuesday, July 03, 2018 10:06 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

On 07/02/2018 03:31 PM, Zane Bitter wrote:
> On 28/06/18 15:09, Fox, Kevin M wrote:
>>   * made the barrier to testing/development as low as 'curl
>> http://..minikube; minikube start' (this spurs adoption and
>> contribution)
>
> That's not so different from devstack though.
>
>>   * not having large silo's in deployment projects allowed better
>> communication on common tooling.
>>   * Operator focused architecture, not project based architecture.
>> This simplifies the deployment situation greatly.
>>   * try whenever possible to focus on just the commons and push vendor
>> specific needs to plugins so vendors can deal with vendor issues
>> directly and not corrupt the core.
>
> I agree with all of those, but to be fair to OpenStack, you're leaving
> out arguably the most important one:
>
>  * Installation instructions start with "assume a working datacenter"
>
> They have that luxury; we do not. (To be clear, they are 100% right to
> take full advantage of that luxury. Although if there are still folks
> who go around saying that it's a trivial problem and OpenStackers must
> all be idiots for making it look so difficult, they should really stop
> embarrassing themselves.)

This.

There is nothing trivial about the creation of a working datacenter --
never mind a *well-running* datacenter. Comparing Kubernetes to
OpenStack -- particularly OpenStack's lower levels -- is missing this
fundamental point and ends up comparing apples to oranges.

Best,
-jay



Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-03 Thread Fox, Kevin M
Heh. You're not going to like it. :)

The very fastest path I can think of, but super disruptive, is the following 
(there are also less disruptive paths):

First, define what OpenStack will be. If you don't know, you easily run into 
people working at cross purposes. Maybe there are other things that will be 
sister projects; that's fine. But it needs to be a whole product/project, not 
split on interests -- think k8s SIGs, not OpenStack projects. The final result 
is a singular thing though: k8s x.y.z, openstack iaas 2.y.z, or something like 
that.

Have a look at what KubeVirt is doing. I think they have the right approach.

Then, define k8s to be part of the commons. It provides a large amount of 
functionality OpenStack needs in the commons. If it is there, you can reuse it 
and not reinvent it.

Implement a new version of each OpenStack service's API on top of the k8s API 
using CRDs. At the same time, as we have now defined what OpenStack will be, 
ensure the API has all the base use cases covered.

Provide a REST-service-to-CRD adapter to enable backwards compatibility with 
older OpenStack API versions.
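
Purely as an illustration of the CRD direction (the group and kind below are 
invented; no such resources exist in any OpenStack project today):

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: servers.compute.openstack.example
spec:
  group: compute.openstack.example
  version: v1alpha1
  scope: Namespaced
  names:
    plural: servers
    singular: server
    kind: Server
EOF
# a "server" is then just declarative desired state stored in etcd:
kubectl get servers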

This completely removes statefulness from OpenStack services.

Rather than having a dozen databases, you have just an etcd system under the 
hood. It provides locking and events as well, so no oslo.locking backing 
service, no message queue, no SQL databases. This GREATLY simplifies what the 
operators need to do. This removes a lot of code too. Backups are simpler as 
there is only one thing. The operator's life is drastically simpler.

Upgrade tools should be unified: you upgrade your OpenStack deployment, not 
upgrade nova, upgrade glance, upgrade neutron, ..., etc.

Config can be easier as you can ship config with the same mechanism. Currently 
the operator tries to define cluster config and it gets twisted and split up 
per project / per node / per subcomponent.

Service account stuff is handled by Kubernetes service accounts, so no 
RPC-over-AMQP security layer, no shipping credentials around manually in config 
files, no figuring out how to roll credentials, etc. Agent stuff is much 
simpler. Less code.

Provide prebuilt containers for all of your components and some basic tooling 
to deploy them on a k8s. K8s provides a lot of tooling here. We've been building 
it over and over in deployment tools; we can get rid of most of it.

Use HTTP for everything. We have all acknowledged we have been torturing rabbit 
for a while, but it's still a critical piece of infrastructure at the core 
today. We need to stop.

Provide a way to have a k8s secret poked into a vm.

I could go on, but I think there are enough discussion points here already. And 
I wonder if anyone made it this far without their head exploding already. :)

Thanks,
Kevin





From: Jay Pipes [jaypi...@gmail.com]
Sent: Monday, July 02, 2018 2:45 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

On 07/02/2018 03:12 PM, Fox, Kevin M wrote:
> I think a lot of the pushback around not adding more common/required services 
> is the extra load it puts on ops though. hence these:
>>   * Consider abolishing the project walls.
>>   * simplify the architecture for ops
>
> IMO, those need to change to break free from the pushback and make progress 
> on the commons again.

What *specifically* would you do, Kevin?

-jay



Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-03 Thread Fox, Kevin M
Yes/no on the vendor distro thing. They do provide a lot of options, but they 
also provide a fully k8s-tested/provided route too: kubeadm. I can take my Linux 
distro of choice, curl down kubeadm, and get a working Kubernetes in literally a 
couple of minutes. No compiling anything or building containers. That is what I 
mean when I say they have a product. Other vendors provide their own builds, 
release tooling, or config management integration, which is why that list is so 
big. But it is up to the Operators to decide the route, and due to k8s having a 
very clean, easy, low bar for entry it sets the bar for the other products to 
be even better.
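
Roughly (package names and flags vary by distro and k8s version, so treat this 
as a sketch rather than a recipe):

yum install -y kubelet kubeadm kubectl
systemctl enable --now kubelet
kubeadm init --pod-network-cidr=10.244.0.0/16
# then join workers with the command kubeadm init prints, e.g.:
#   kubeadm join <control-plane>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>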

The reason people started adopting clouds was because it was very quick to 
request resources. One of cloud's features (some say drawbacks) vs. VM farms has 
been ephemeralness. You build applications on top of VMs to provide a service 
to your users. Great. Things like containers, though, launch much faster and 
have generally more functionality for plumbing them together than VMs do. So 
these days containers are out-clouding VMs at this use case. So, does Nova 
continue to be the cloudy VM, or does it go for the more production VM use case 
like oVirt and VMware? Without strong orchestration of some kind on top, the 
cloudy use case is also really hard with Nova. So we keep getting into this tug 
of war between people wanting VMs as building blocks of cloud-scale 
applications, and those that want Nova to be an oVirt/VMware replacement. 
Honestly, it's not doing either use case great because it can't decide what to 
focus on.
oVirt is a better VMware alternative today than Nova is, having used it. It 
focuses specifically on the same use cases. Nova is better at being a cloud 
than oVirt and VMware, but lags behind Azure/AWS a lot when it comes to having 
apps self-host on it. (Progress is being made again finally, but it's slow.)

While some people only ever consider running Kubernetes on top of a cloud, some 
of us realize that maintaining both a cloud and a Kubernetes is unnecessary and 
that things can be greatly simplified by running k8s on bare metal. This does 
then make it a competitor to Nova as a platform for running workloads on. As 
k8s gains more multitenancy features, this trend will continue to grow, I 
think. OpenStack needs to be ready for when that becomes a thing.

Heat is a good start for an orchestration system, but it is hamstrung by being 
an optional component, by there still not being a way to download secrets to a 
VM securely from the secret store, by the secret store also being completely 
optional, etc. An app developer can't rely on any of it. :/ Heat is hamstrung 
by the lack of blessing, as so many other OpenStack services are. You can't fix 
it until you fix that fundamental brokenness in OpenStack.

Heat, as an orchestrator of existing APIs, is also hamstrung by there being 
holes in those APIs.

Think of OpenStack like a game console. The moment you make a component 
optional and it takes extra effort to obtain, few software developers target it 
and rarely does anyone buy the add-on, because there isn't software for it. 
Right now, just about everything in OpenStack is an add-on. That's a problem.

Thanks,
Kevin



From: Jay Pipes [jaypi...@gmail.com]
Sent: Monday, July 02, 2018 4:13 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

On 06/27/2018 07:23 PM, Zane Bitter wrote:
> On 27/06/18 07:55, Jay Pipes wrote:
>> Above, I was saying that the scope of the *OpenStack* community is
>> already too broad (IMHO). An example of projects that have made the
>> *OpenStack* community too broad are purpose-built telco applications
>> like Tacker [1] and Service Function Chaining. [2]
>>
>> I've also argued in the past that all distro- or vendor-specific
>> deployment tools (Fuel, Triple-O, etc [3]) should live outside of
>> OpenStack because these projects are more products and the relentless
>> drive of vendor product management (rightfully) pushes the scope of
>> these applications to gobble up more and more feature space that may
>> or may not have anything to do with the core OpenStack mission (and
>> have more to do with those companies' product roadmap).
>
> I'm still sad that we've never managed to come up with a single way to
> install OpenStack. The amount of duplicated effort expended on that
> problem is mind-boggling. At least we tried though. Excluding those
> projects from the community would have just meant giving up from the
> beginning.

You have to have motivation from vendors in order to achieve said single
way of installing OpenStack. I gave up a long time ago on distros and
vendors to get behind such an effort.

Where vendors see $$$, they will attempt to carve out value
differentiation. And value differentiation leads to, well, differences,
naturally.

And, despite what some might misguidedly think, Kubernetes has no single

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-07-02 Thread Fox, Kevin M
I think Keystone is one of the exceptions currently, as it is the 
quintessential common service in all of OpenStack: since the rule was made that 
all things auth belong to Keystone, the other projects don't waver from it. The 
same cannot be said of, say, Barbican. Steps have been made recently to get 
farther down that path, but it is still not there yet. Until it is blessed as a 
common, required component, other silos are still disincentivized to depend on 
it.

I think a lot of the pushback around not adding more common/required services 
is the extra load it puts on ops, though. Hence these:
>  * Consider abolishing the project walls.
>  * simplify the architecture for ops

IMO, those need to change to break free from the pushback and make progress on 
the commons again.

Thanks,
Kevin

From: Lance Bragstad [lbrags...@gmail.com]
Sent: Monday, July 02, 2018 11:41 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

On 06/28/2018 02:09 PM, Fox, Kevin M wrote:
> I'll weigh in a bit with my operator hat on as recent experience it pertains 
> to the current conversation
>
> Kubernetes has largely succeeded in common distribution tools where OpenStack 
> has not been able to.
> kubeadm was created as a way to centralize deployment best practices, config, 
> and upgrade stuff into a common code base that other deployment tools can 
> build on.
>
> I think this has been successful for a few reasons:
>  * kubernetes followed a philosophy of using k8s to deploy/enhance k8s. 
> (Eating its own dogfood)
>  * was willing to make their api robust enough to handle that self 
> enhancement. (secrets are a thing, orchestration is not optional, etc)
>  * they decided to produce a reference product (very important to adoption 
> IMO. You don't have to "build from source" to kick the tires.)
>  * made the barrier to testing/development as low as 'curl 
> http://..minikube; minikube start' (this spurs adoption and contribution)
>  * not having large silos in deployment projects allowed better 
> communication on common tooling.
>  * Operator focused architecture, not project based architecture. This 
> simplifies the deployment situation greatly.
>  * try whenever possible to focus on just the commons and push vendor 
> specific needs to plugins so vendors can deal with vendor issues directly and 
> not corrupt the core.
>
> I've upgraded many OpenStacks since Essex and usually it is multiple weeks of 
> prep, and a 1-2 day outage to perform the deed. about 50% of the upgrades, 
> something breaks only on the production system and needs hot patching on the 
> spot. About 10% of the time, I've had to write the patch personally.
>
> I had to upgrade a k8s cluster yesterday from 1.9.6 to 1.10.5. For 
> comparison, what did I have to do? A couple hours of looking at release notes 
> and trying to dig up examples of where things broke for others. Nothing 
> popped up. Then:
>
> on the controller, I ran:
> yum install -y kubeadm #get the newest kubeadm
> kubeadm upgrade plan #check things out
>
> It told me I had 2 choices. I could:
>  * kubeadm upgrade apply v1.9.8
>  * kubeadm upgrade apply v1.10.5
>
> I ran:
> kubeadm upgrade apply v1.10.5
>
> The control plane was down for under 60 seconds and then the cluster was 
> upgraded. The rest of the services did a rolling upgrade live and took a few 
> more minutes.
>
> I can take my time to upgrade kubelets as mixed kubelet versions works well.
>
> Upgrading kubelet is about as easy.
>
> Done.
>
> There's a lot of things to learn from the governance / architecture of 
> Kubernetes..
>
> Fundamentally, there isn't huge differences in what Kubernetes and OpenStack 
> tries to provide users. Scheduling a VM or a Container via an api with some 
> kind of networking and storage is the same kind of thing in either case.
>
> The how to get the software (openstack or k8s) running is about as polar 
> opposite you can get though.
>
> I think if OpenStack wants to gain back some of the steam it had before, it 
> needs to adjust to the new world it is living in. This means:
>  * Consider abolishing the project walls. They are driving bad architecture 
> (not intentionally but as a side effect of structure)
>  * focus on the commons first.

Nearly all the work we've been doing from an identity perspective over
the last 18 months has enabled or directly improved the commons (or what
I would consider the commons). I agree that it's important, but we're
already focusing on it to the point where we're out of bandwidth.

Is the problem that it doesn't appear that way? Do we have different
ideas of what the "commons" are?

>  * simplify the architecture for ops:
>* make 

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-06-28 Thread Fox, Kevin M
I'll weigh in a bit with my operator hat on, as recent experience pertains to 
the current conversation...

Kubernetes has largely succeeded in common distribution tools where OpenStack 
has not been able to.
kubeadm was created as a way to centralize deployment best practices, config, 
and upgrade stuff into a common code base that other deployment tools can 
build on.

I think this has been successful for a few reasons:
 * kubernetes followed a philosophy of using k8s to deploy/enhance k8s. (Eating 
its own dogfood)
 * was willing to make their api robust enough to handle that self enhancement. 
(secrets are a thing, orchestration is not optional, etc)
 * they decided to produce a reference product (very important to adoption IMO. 
You don't have to "build from source" to kick the tires.)
 * made the barrier to testing/development as low as 'curl 
http://..minikube; minikube start' (this spurs adoption and contribution)
 * not having large silos in deployment projects allowed better communication 
on common tooling.
 * Operator focused architecture, not project based architecture. This 
simplifies the deployment situation greatly.
 * try whenever possible to focus on just the commons and push vendor specific 
needs to plugins so vendors can deal with vendor issues directly and not 
corrupt the core.

I've upgraded many OpenStacks since Essex, and usually it is multiple weeks of 
prep and a 1-2 day outage to perform the deed. In about 50% of the upgrades, 
something breaks only on the production system and needs hot patching on the 
spot. About 10% of the time, I've had to write the patch personally.

I had to upgrade a k8s cluster yesterday from 1.9.6 to 1.10.5. For comparison, 
what did I have to do? A couple hours of looking at release notes and trying to 
dig up examples of where things broke for others. Nothing popped up. Then:

on the controller, I ran:
yum install -y kubeadm #get the newest kubeadm
kubeadm upgrade plan #check things out

It told me I had 2 choices. I could:
 * kubeadm upgrade apply v1.9.8
 * kubeadm upgrade apply v1.10.5

I ran:
kubeadm upgrade apply v1.10.5

The control plane was down for under 60 seconds and then the cluster was 
upgraded. The rest of the services did a rolling upgrade live and took a few 
more minutes.

I can take my time to upgrade kubelets as mixed kubelet versions work well.

Upgrading kubelet is about as easy.
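
(Per node it is roughly the following; the versions and node name are just 
illustrative:)

kubectl drain <node> --ignore-daemonsets
yum install -y kubelet-1.10.5 kubectl-1.10.5
systemctl daemon-reload && systemctl restart kubelet
kubectl uncordon <node>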

Done.

There are a lot of things to learn from the governance/architecture of 
Kubernetes...

Fundamentally, there aren't huge differences in what Kubernetes and OpenStack 
try to provide users. Scheduling a VM or a container via an API with some kind 
of networking and storage is the same kind of thing in either case.

How you get the software (OpenStack or k8s) running is about as polar opposite 
as you can get, though.

I think if OpenStack wants to gain back some of the steam it had before, it 
needs to adjust to the new world it is living in. This means:
 * Consider abolishing the project walls. They are driving bad architecture 
(not intentionally, but as a side effect of structure)
 * focus on the commons first.
 * simplify the architecture for ops:
   * make as much as possible stateless and centralize remaining state.
   * stop moving config options around with every release. Make it promote 
automatically and persist it somewhere.
   * improve serial performance before sharding. k8s can do 5000 nodes on one 
control plane. There is no reason to do nova cells and make ops deal with them 
except for the very largest of clouds.
 * consider a reference product (think the vanilla Linux kernel; distros can 
provide their own variants, that's OK)
 * come up with an architecture team for the whole, not the subsystem. The 
whole thing needs to work well.
 * encourage current OpenStack devs to test/deploy Kubernetes. It has some very 
good ideas that OpenStack could benefit from. If you don't know what they are, 
you can't adopt them.

And I know it's hard to talk about, but consider just adopting k8s as the 
commons and building on top of it. OpenStack's APIs are good. The 
implementations right now are very, very heavy for ops. You could tie in k8s's 
pod scheduler with VM stuff running in containers and get a vastly simpler 
architecture for operators to deal with. Yes, this would be a major disruptive 
change to OpenStack. But long term, I think it would make for a much healthier 
OpenStack.

Thanks,
Kevin

From: Zane Bitter [zbit...@redhat.com]
Sent: Wednesday, June 27, 2018 4:23 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

On 27/06/18 07:55, Jay Pipes wrote:
> WARNING:
>
> Danger, Will Robinson! Strong opinions ahead!

I'd have been disappointed with anything less :)

> On 06/26/2018 10:00 PM, Zane Bitter wrote:
>> On 26/06/18 09:12, Jay Pipes wrote:
>>> Is (one of) the problem(s) with our community that we have too small
>>> of a scope/footprint? No. Not in the slightest.
>>
>> 

Re: [openstack-dev] [tc] [all] TC Report 18-26

2018-06-26 Thread Fox, Kevin M
"What is OpenStack" 

From: Jay Pipes [jaypi...@gmail.com]
Sent: Tuesday, June 26, 2018 6:12 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-26

On 06/26/2018 08:41 AM, Chris Dent wrote:
> Meanwhile, to continue [last week's theme](/tc-report-18-25.html),
> the TC's role as listener, mediator, and influencer lacks
> definition.
>
> Zane wrote up a blog post explaining the various ways in which the
> OpenStack Foundation is
> [expanding](https://www.zerobanana.com/archive/2018/06/14#osf-expansion).

One has to wonder with 4 "focus areas" for the OpenStack Foundation [1]
whether there is any actual expectation that there will be any focus at
all any more.

Are CI/CD and secure containers important? [2] Yes, absolutely.

Is (one of) the problem(s) with our community that we have too small of
a scope/footprint? No. Not in the slightest.

IMHO, what we need is focus. And having 4 different focus areas doesn't
help focus things.

I keep waiting for people to say "no, that isn't part of our scope". But
all I see is people saying "yes, we will expand our scope to these new
sets of things (otherwise *gasp* the Linux Foundation will gobble up all
the hype)".

Just my two cents and sorry for being opinionated,
-jay

[1] https://www.openstack.org/foundation/strategic-focus-areas/

[2] I don't include "edge" in my list of things that are important
considering nobody even knows what "edge" is yet. I fail to see how
people can possibly "focus" on something that isn't defined.



Re: [openstack-dev] [tc] Organizational diversity tag

2018-06-05 Thread Fox, Kevin M
That might not be a good idea. That may just push the problem underground as 
people are afraid to speak up publicly.

Perhaps an anonymous poll kind of thing, so that it can be counted publicly but 
doesn't cause people to fear retaliation?

Thanks,
Kevin

From: Doug Hellmann [d...@doughellmann.com]
Sent: Tuesday, June 05, 2018 7:39 AM
To: openstack-dev
Subject: Re: [openstack-dev] [tc] Organizational diversity tag

Excerpts from Doug Hellmann's message of 2018-06-02 15:08:28 -0400:
> Excerpts from Jeremy Stanley's message of 2018-06-02 18:51:47 +:
> > On 2018-06-02 13:23:24 -0400 (-0400), Doug Hellmann wrote:
> > [...]
> > > It feels like we would be saying that we don't trust 2 core reviewers
> > > from the same company to put the project's goals or priorities over
> > > their employer's.  And that doesn't feel like an assumption I would
> > > want us to encourage through a tag meant to show the health of the
> > > project.
> > [...]
> >
> > That's one way of putting it. On the other hand, if we ostensibly
> > have that sort of guideline (say, two core reviewers shouldn't be
> > the only ones to review a change submitted by someone else from
> > their same organization if the team is large and diverse enough to
> > support such a pattern) then it gives our reviewers a better
> > argument to push back on their management _if_ they're being
> > strongly urged to review/approve certain patches. At least then they
> > can say, "this really isn't going to fly because we have to get a
> > reviewer from another organization to agree it's in the best
> > interests of the project" rather than "fire me if you want but I'm
> > not approving that change, no matter how much your product launch is
> > going to be delayed."
>
> Do we have that problem? I honestly don't know how much pressure other
> folks are feeling. My impression is that we've mostly become good at
> finding the necessary compromises, but my experience doesn't cover all
> of our teams.

To all of the people who have replied to me privately that they have
experienced this problem:

We can't really start to address it until it's out here in the open.
Please post to the list.

Doug



Re: [openstack-dev] [tc][all] A culture change (nitpicking)

2018-05-30 Thread Fox, Kevin M
To play devil's advocate, and as someone who has had to git bisect an ugly 
regression once, I still think it's important not to break trunk. It can be 
much harder to deal with difficult issues like that if trunk frequently breaks.

Thanks,
Kevin

From: Sean McGinnis [sean.mcgin...@gmx.com]
Sent: Wednesday, May 30, 2018 5:01 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc][all] A culture change (nitpicking)

> "master should be always deployable and fully backward compatible and
> so we cant let anything in anytime that could possibly regress anyone"
>
> Should we change that attitude too? Anyone agree? disagree?
>
> Thanks,
> Dims
>
I'll definitely jump at this one.

I've always thought (and shared on the ML several times now) that our
implied
but not explicit support for CD from any random commit was a bad thing.

While I think it's good to support the idea that master is always
deployable, I
do not think it is a good mindset to think that every commit is a
"release" and
therefore should be supported until the end of time. We have a coordinated
release for a reason, and I think design decisions and fixes should be
based on
the assumption that a release is a release and the point at which we
need to be
cognizant and caring about keeping backward compatibility. Doing that for
every single commit is not ideal for the overall health of the product, IMO.



Re: [Openstack-operators] [OpenStack-Operators][OpenStack] Regarding production grade OpenStack deployment

2018-05-18 Thread Fox, Kevin M
I don't think OpenStack itself can meet full zero-downtime requirements. And 
even if it can, I don't think any of the deployment tools try to support that 
use case either.

Thanks,
Kevin

From: Amit Kumar [ebiib...@gmail.com]
Sent: Friday, May 18, 2018 3:46 AM
To: OpenStack Operators; Openstack
Subject: [Openstack-operators] [OpenStack-Operators][OpenStack] Regarding 
production grade OpenStack deployment

Hi All,

We want to deploy our private cloud using OpenStack as highly available (zero 
downtime (ZDT) - in normal course of action and during upgrades as well) 
production grade environment. We came across following tools.


  *   We thought of using Kolla-Kubernetes as deployment tool, but we got 
feedback from Kolla IRC channel that this project is being retired. Moreover, 
we couldn't find latest documents having multi-node deployment steps and, High 
Availability support was also not mentioned at all anywhere in the 
documentation.
  *   Another option to have Kubernetes based deployment is to use 
OpenStack-Helm, but it seems the OSH community has not made OSH 1.0 officially 
available yet.
  *   Last option, is to use Kolla-Ansible, although it is not a Kubernetes 
deployment, but seems to have good community support around it. Also, its 
documentation talks a little about production grade deployment, probably it is 
being used in production grade environments.

If you folks have used any of these tools for deploying OpenStack to fulfill 
these requirements: HA and ZDT, then please provide your inputs specifically 
about HA and ZDT support of the deployment tool, based on your experience. And 
please share if you have any reference links that you have used for achieving 
HA and ZDT for the respective tools.

Lastly, if you think we should think that we have missed another more viable 
and stable options of deployment tools which can serve our requirement: HA and 
ZDT, then please do suggest the same.

Regards,
Amit




Re: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver

2018-05-11 Thread Fox, Kevin M
Who are your users, what do they need, are you meeting those needs, and what 
can you do to better things?

If that can't be answered, how do you know if you are making progress or 
staying relevant?

Lines of code committed is not a metric of real progress.
Number of reviews isn't.
Feature addition metrics aren't necessarily either, if the features are not 
relevant.
Developer community size is not really a metric of progress either. (Not a bad 
thing; it just doesn't guarantee progress if devs are going in different 
directions.)

If you can't answer them, how do you separate things like "devs are leaving 
because the project is mature" from "the overall project is really broken and 
folks are just leaving"?

Part of the disconnect to me has been that these questions have been left up to 
the projects by and large. But, users don't use the projects. Users use 
OpenStack. Or, moving forward, they at least use a Constellation. But 
Constellation is still just a documentation construct. Not really a first class 
entity.

Currently the isolation between the Projects and the thing that the users use, 
the Constellation, allows user needs to easily slip through the cracks. Because 
"Project X: we agree that is a problem, but it's project Y's problem. Project 
Y: we agree that is a problem, but it's project X's problem." No, seriously, 
it's OpenStack's problem. Most of the major issues I've hit in my many years of 
using OpenStack were in that category. And there wasn't a good forum for 
addressing them.

A related effect of the isolation is also that the projects don't work on the 
commons nor look around too much at what others are doing, either within 
OpenStack or outside. They solve problems at the project level and say "look, 
I've solved it", but don't look at what happens when all the projects do that 
independently and push more work to the users. The end result of this lack of 
leadership is more work for the users compared to competitors.

IMO, OpenStack really needs some leadership at a higher level. It seems to be 
lacking some things:
1. A group that performs... lacking a good word, reconnaissance? How is 
OpenStack faring in the world? How is the world changing, and how must 
OpenStack change to continue to be relevant? If you don't know you have a 
problem, you can't correct it.
2. A group that decides some difficult political things, like who the users 
are, maybe at a per-constellation level. This does not mean rejecting use cases 
from "non-users", just helping the projects sort out priorities.
3. A group that decides on a general direction for OpenStack's technical 
solutions, encourages building up the commons, helps break down the project 
communication walls, and picks homes for features when it takes too long for a 
user need to be met. (Users really don't care which OpenStack project does 
which feature. They just know that they are suffering, that things don't get 
addressed in a timely manner, and they will maybe consider looking outside of 
OpenStack for a solution.)

The current governance structure is focused on hoping the individual projects 
will look at the big picture and adjust to it, commit the relevant common code 
to the commons rather than one-offing a solution, and discuss solutions between 
projects to gain consensus. But that's generally not happening. The projects 
have a narrow view of the world and just want to make progress on their code. I 
get that. The other bits are hard. Guidance to the projects on how they are, or 
are not, fitting would help them make better choices and better code.

The focus so much on projects has made us lose sight of why they exist: to 
serve the users. Users don't use projects as OpenStack has defined them, 
though. And we can't even really define what a user is. This is a big problem.

Anyway, more Leadership please! Ready. GO! :)

Thanks,
Kevin


From: Jay Pipes [jaypi...@gmail.com]
Sent: Friday, May 11, 2018 9:31 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] Topics for the Board+TC+UC meeting in 
Vancouver

On 05/11/2018 12:21 PM, Zane Bitter wrote:
> On 11/05/18 11:46, Jay Pipes wrote:
>> On 05/10/2018 08:12 PM, Zane Bitter wrote:
>>> On 10/05/18 16:45, Matt Riedemann wrote:
 On 5/10/2018 3:38 PM, Zane Bitter wrote:
> How can we avoid (or get out of) the local maximum trap and ensure
> that OpenStack will meet the needs of all the users we want to
> serve, not just those whose needs are similar to those of the users
> we already have?

 The phrase "jack of all trades, master of none" comes to mind here.
>>>
>>> Stipulating the constraint that you can't please everybody, how do
>>> you ensure that you're meeting the needs of the users who are most
>>> important to the long-term sustainability of the project, and not
>>> just the ones who were easiest to bootstrap?
>>
>> Who gets to decide who the users are "that are most important to the
>> long-term sustainability of 

Re: [openstack-dev] [api] REST limitations and GraghQL inception?

2018-05-03 Thread Fox, Kevin M
k8s does that, I think, by separating desired state from actual state and 
working to bring the two in line. The same could (maybe even should) be done to 
OpenStack. But you're right, that is not a small amount of work.

Even without using GraphQL, making the API more declarative has advantages 
anyway.
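
A tiny generic k8s example of what declarative buys you (nothing 
OpenStack-specific is implied here):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 3            # desired state; controllers reconcile actual state toward it
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: web
        image: nginx:1.15
EOF
kubectl scale deployment demo --replicas=5   # declare a new desired state; reconciliation does the rest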

Thanks,
Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Thursday, May 03, 2018 10:50 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [api] REST limitations and GraghQL inception?

On 05/03/2018 12:57 PM, Ed Leafe wrote:
> On May 2, 2018, at 2:40 AM, Gilles Dubreuil  wrote:
>>
>>> • We should get a common consensus before all projects start to implement 
>>> it.
>>
>> This is going to be raised during the API SIG weekly meeting later this week.
>> API developers (at least one) from every project are strongly welcomed to 
>> participate.
>> I suppose it makes sense for the API SIG to be the place to discuss it, at 
>> least initially.
>
> It was indeed discussed, and we think that it would be a worthwhile 
> experiment. But it would be a difficult, if not impossible, proposal to have 
> adopted OpenStack-wide without some data to back it up. So what we thought 
> would be a good starting point would be to have a group of individuals 
> interested in GraphQL form an informal team and proceed to wrap one OpenStack 
> API as a proof-of-concept. Monty Taylor suggested Neutron as an excellent 
> candidate, as its API exposes things at an individual table level, requiring 
> the client to join that information to get the answers they need.
>
> Once that is done, we could examine the results, and use them as the basis 
> for proceeding with something more comprehensive. Does that sound like a good 
> approach to (all of) you?

Did anyone bring up the differences between control plane APIs and data
APIs and the applicability of GraphQL to the latter and not the former?

For example, a control plane API to reboot a server instance looks like
this:

POST /servers/{uuid}/action
{
 "reboot" : {
 "type" : "HARD"
 }
}

how does that map to GraphQL? Via GraphQL's "mutations" [0]? That
doesn't really work since the server object isn't being mutated. I mean,
the state of the server will *eventually* be mutated when the reboot
action starts kicking in (the above is an async operation returning a
202 Accepted). But the act of hitting POST /servers/{uuid}/action
doesn't actually mutate the server's state.
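
(For concreteness, a hypothetical mapping might look like the sketch below; the 
endpoint, schema, and field names are invented for illustration and exist in no 
OpenStack project:)

curl -s -X POST https://compute.example.org/graphql \
  -H "Content-Type: application/json" \
  -H "X-Auth-Token: $OS_TOKEN" \
  -d '{"query": "mutation { rebootServer(id: \"UUID\", type: HARD) { requestId accepted } }"}'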

This is just one example of where GraphQL doesn't necessarily map well
to control plane APIs that happen to be built on top of REST/HTTP [1]

Bottom line for me would be what is the perceivable benefit that all of
our users would receive given the (very costly) overhaul of our APIs
that would likely be required.

Best,
-jay

[0] http://graphql.org/learn/queries/#mutations
[1] One could argue (and I have in the past) that POST
/servers/{uuid}/action isn't a RESTful interface at all...



Re: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier?

2018-04-24 Thread Fox, Kevin M
I support 12-factor. But 12-factor only works if you can commit to always 
deploying on top of 12-factor tools. If OpenStack committed to only ever 
deploying API services on k8s, then my answer might be different, but so far it 
has been unable to do that. Barring that, I think simplifying the operator's 
life so you get more users/contributors has priority over pure 12-factor ideals.

It is also about getting Project folks working together to see how their parts 
fit (or not) in the greater constellation. Just writing a document on how you 
could fit things together doesn't show the kinds of suffering that actually 
integrating it into a finished whole could show.

Either way, though, I think a unified db-sync would go a long way to making 
OpenStack easier to maintain as an Operator.

Thanks,
Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Tuesday, April 24, 2018 9:13 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] campaign question: How can we make 
contributing to OpenStack easier?

On 04/24/2018 12:04 PM, Fox, Kevin M wrote:
> Could the major components, nova-api, neutron-server, glance-apiserver, etc 
> be built in a way to have 1 process for all of them, and combine the upgrade 
> steps such that there is also, one db-sync for the entire constellation?

So, basically the exact opposite of the 12-factor app design that
"cloud-native" people espouse?

-jay



Re: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier?

2018-04-24 Thread Fox, Kevin M
Yeah, I agree k8s seems to have hit on a good model where interests are grouped 
separately from the code bases. This has allowed the architecture not to be too 
heavily influenced by the logical groups' interests.

I'll go ahead and propose it again since it's been a little while. In order to 
start breaking down the barriers between Projects and start working more 
towards integration, should the TC come up with an architecture group? Get 
folks from all the major projects involved in it and sharing common 
infrastructure.

One possible pie in the sky goal of that group could be the following:

k8s has many controllers, but it compiles almost all of them into one service, 
the kube-controller-manager. Architecturally they could break them out at any 
point, but so far they have been able to scale just fine without doing so. Having 
them combined has allowed much easier upgrade paths for users though. This has 
spurred adoption and contribution. Adding a new controller isn't a huge lift to 
an operator; they just upgrade to the newest version, which has the new 
controller built in.

Could the major components, nova-api, neutron-server, glance-api, etc., be 
built in a way to have one process for all of them, and combine the upgrade steps 
such that there is also one db-sync for the entire constellation?
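
To make that concrete, here is a rough sketch of the "one db-sync for the entire 
constellation" idea. The wrapper itself is hypothetical; the per-project commands 
it calls are the existing ones:

    # Hypothetical "constellation db sync" wrapper -- not an existing tool.
    import subprocess

    SYNC_COMMANDS = [
        ("nova", ["nova-manage", "db", "sync"]),
        ("neutron", ["neutron-db-manage", "upgrade", "heads"]),
        ("glance", ["glance-manage", "db_sync"]),
    ]

    def constellation_db_sync():
        """Run every project's schema migration as a single upgrade step."""
        for name, cmd in SYNC_COMMANDS:
            print("db-sync: %s" % name)
            subprocess.check_call(cmd)

    if __name__ == "__main__":
        constellation_db_sync()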

The idea would be to take the Constellations idea one step farther: the 
Projects would deliver python libraries (and a binary for standalone 
operation), and Constellations would actually provide a code deliverable, not 
just a reference architecture, combining the libraries together into a single 
usable entity. Operators would most likely consume the Constellations version 
rather than the individual Project versions.

What do you think?

Thanks,
Kevin


From: Thierry Carrez [thie...@openstack.org]
Sent: Tuesday, April 24, 2018 3:24 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] campaign question: How can we make 
contributing to OpenStack easier?

Fox, Kevin M wrote:
> OpenStack has created artificial walls between the various Projects. It shows 
> up, for example, as holes in usability at a user level or extra difficulty 
> for operators juggling around so many projects. Users and, for the most part, 
> Operators don't really care about project organization, or PTLs, or cores, or 
> such. OpenStack has made some progress in this direction with stuff like the 
> unified CLI. But OpenStack is not very unified.

I've been giving this some thought (in the context of a presentation I
was giving on hard lessons learned from 8 years of OpenStack). I think
that organizing development around project teams and components was the
best way to cope with the growth of OpenStack in 2011-2015 and get to a
working set of components. However, it's not the best organization to
improve on the overall "product experience", or for a maintenance phase.

While it can be confusing, I like the two-dimensional approach that
Kubernetes followed (code ownership in one dimension, SIGs in the
other). The introduction of SIGs in OpenStack, beyond providing a way to
build closer feedback loops around specific topics, can help us tackle
this "unified experience" problem you raised. The formation of the
upgrades SIG, or the self-healing SIG is a sign that times change. Maybe
we need to push in that direction even more aggressively and start
thinking about de-emphasizing project teams themselves.

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier?

2018-04-23 Thread Fox, Kevin M
One more I'll add which is touched on a little below: contributors spawn from a 
healthy user base/operator base. If their needs are not met, then they go 
elsewhere and the contributor base shrinks. OpenStack has created artificial 
walls between the various Projects. It shows up, for example, as holes in 
usability at a user level or extra difficulty for operators juggling around so 
many projects. Users and, for the most part, Operators don't really care about 
project organization, or PTLs, or cores, or such. OpenStack has made some 
progress in this direction with stuff like the unified CLI. But OpenStack is not 
very unified. I think OpenStack, as a whole, needs to look at ways to minimize 
how its architecture impacts Users/Operators so they don't continue to migrate 
to platforms that do minimize the stuff operators/users have to deal with. 
One goes to a cloud so you don't have to deal so much with the details.

Thanks,
Kevin

From: Zane Bitter [zbit...@redhat.com]
Sent: Monday, April 23, 2018 1:47 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tc] campaign question: How can we make 
contributing to OpenStack easier?

On 23/04/18 10:06, Doug Hellmann wrote:
> [This is meant to be one of (I hope) several conversation-provoking
> questions directed at prospective TC members to help the community
> understand their positions before considering how to vote in the
> ongoing election.]
>
> Over the last year we have seen some contraction in the number of
> companies and individuals contributing to OpenStack. At the same
> time we have started seeing contributions from other companies and
> individuals. To some degree this contraction and shift in contributor
> base is a natural outcome of changes in OpenStack itself along with
> the rest of the technology industry, but as with any change it
> raises questions about how and whether we can ensure a smooth
> transition to a new steady state.
>
> What aspects of our policies or culture make contributing to OpenStack
> more difficult than contributing to other open source projects?
>
> Which of those would you change, and how?

There are probably two separate groups we need to consider. The first is
operators and users of OpenStack. We want those folks to contribute when
they see a problem or an opportunity to improve, and their feedback is
extremely valuable because they know the product best. We need to
encourage new contributors in this group and retain existing ones by:

* Reducing barriers to contributing, like having to register for
multiple services, sign a CLA, etc. We're mostly aware of the problems in
this area and have been making incremental progress on them over a long
period of time.

* Encouraging people to get involved. Low-hanging-fruit bug lists are
useful. Even something like a link on every docs page indicating where
to edit the source would help encourage people to take that first step.
(Technically we have this when you click the 'bug' link - but it's not
obvious, and you need to sign up for a Launchpad account to use it...
see above.) Once people have done the initial setup work for a first
patch, they're more likely to contribute again. The First Contact SIG is
doing great work in this area.

* The most important one: provide prompt, actionable feedback on
changes. Nothing kills contributor motivation like having your changes
ignored for months. Unfortunately this is also the hardest one to deal
with; the situation is different in every project, and much depends on
the amount of time available from the existing contributors. Adding more
core reviewers helps; finding ways to limit the proportion of the code
base that a core reviewer is responsible for (either by splitting up
repos or giving cores a specific area of responsibility in a repo) would
be one way to train them quicker.

Another way, which I already alluded to in my candidacy message, is to
expand the pool of OpenStack users. One of my goals is to make OpenStack
an attractive cloud platform to write applications against, and not
merely somewhere to get a VM to run your application in. If we can
achieve that we'll increase the market for OpenStack and hence the
number of users and thus potential contributors. But those new users
would be more motivated than anyone to find and fix bugs, and they're
already developers so they'd be disproportionately more likely to
contribute code in addition to documentation or bug reports (which are
also important contributions).


The second group is those who are paid specifically to spend a portion
of their time on upstream contribution, which brings us to...

> Where else should we be looking for contributors?

Companies who are making money from OpenStack! It's their responsibility
to maintain the commons and, collectively speaking at least, their
problem if they don't.

For a start, we need to convince anybody who is maintaining a fork of
OpenStack to do something more useful with 

Re: [openstack-dev] [k8s][octavia][lbaas] Experiences on using the LB APIs with K8s

2018-03-16 Thread Fox, Kevin M
What about the other way around? An Octavia plugin that simply manages k8s 
Ingress objects on a k8s cluster? Depending on how operators are deploying 
openstack, this might be a much easier way to deploy Octavia.

Thanks,
Kevin

From: Lingxian Kong [anlin.k...@gmail.com]
Sent: Friday, March 16, 2018 5:21 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [k8s][octavia][lbaas] Experiences on using the LB 
APIs with K8s

Just FYI, L7 policy/rule support for Neutron LBaaS v2 and Octavia is on its 
way [1]. Because we will have both Octavia and Magnum deployed on our 
OpenStack-based public cloud this year, an ingress controller for OpenStack 
(Octavia) is also on our TODO list; any kind of collaboration is welcome :-)

[1]: https://github.com/gophercloud/gophercloud/pull/833


Cheers,
Lingxian Kong (Larry)

On Fri, Mar 16, 2018 at 5:01 PM, Joe Topjian wrote:
Hi Chris,

I wear a number of hats related to this discussion, so I'll add a few points of 
view :)

It turns out that with
Terraform, it's possible to tear down resources in a way that causes Neutron to
leak administrator-privileged resources that cannot be deleted by a
non-privileged user. In discussions with the Neutron and Octavia teams, it was
strongly recommended that I move away from the Neutron LBaaSv2 API and instead
adopt Octavia. Vexxhost graciously installed Octavia at my request and I was
able to move past this issue.

Terraform hat! I want to slightly nit-pick this one since the words "leak" and 
"admin-priv" can sound scary: Terraform technically wasn't doing anything 
wrong. The problem was that Octavia was creating resources but not setting 
ownership to the tenant. When it came time to delete the resources, Octavia was 
correctly refusing, though it incorrectly created said resources.

From reviewing the discussion, other parties were discovering this issue and 
patching in parallel to your discovery. Both xgerman and Vexxhost jumped in to 
confirm the behavior seen by Terraform. Vexxhost quickly applied the patch. It 
was a really awesome collaboration between yourself, dims, xgerman, and 
Vexxhost.

This highlights the first call to action for our public and private cloud
community: encouraging the rapid migration from older, unsupported APIs to
Octavia.

Operator hat! The clouds my team and I run are more compute-based. Our users 
would be more excited if we increased our GPU pool than enhanced the networking 
services. With that in mind, when I hear it said that "Octavia is 
backwards-compatible with Neutron LBaaS v2", I think "well, cool, that means we 
can keep running Neutron LBaaS v2 for now" and focus our efforts elsewhere.

I totally get why Octavia is advertised this way and it's very much 
appreciated. When I learned about Octavia, my knee-jerk reaction was "oh no, 
not another load balancer" but that was remedied when I learned it's more like 
LBaaSv2++. I'm sure we'll deploy Octavia some day, but it's not our primary 
focus and we can still squeak by with Neutron's LBaaS v2.

If you *really* wanted us to deploy Octavia ASAP, then a migration guide would 
be wonderful. I read over the "Developer / Operator Quick Start Guide" and 
found it very well written! I groaned over having to build an image but I also 
really appreciate the image builder script. If there can't be pre-built images 
available for testing, the second-best option is that script.

This highlights a second call to action for the SDK and provider developers:
recognizing the end of life of the Neutron LBaaSv2 API[4][5] and adding
support for more advanced Octavia features.

Gophercloud hat! We've supported Octavia for a few months now, but purely by 
having the load-balancer client piggyback off of the Neutron LBaaS v2 API. We 
made the decision this morning, coincidentally enough, to have Octavia be a 
first-class service peered with Neutron rather than think of Octavia as a 
Neutron/network child. This will allow Octavia to fully flourish without worry 
of affecting the existing LBaaS v2 API (which we'll still keep around 
separately).

Thanks,
Joe



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-14 Thread Fox, Kevin M
I can think of a few ideas, though some sound painful on paper... Not really 
recommending anything, just thinking out loud...

One idea is the one at the root of chaos monkey: if something is hard, do it 
frequently. If upgrading is hard, we need to be doing it constantly so the pain 
gets largely eliminated. One idea would be to discourage devs from standing up 
a fresh devstack all the time and have them upgrade it instead. If 
it's hard, then it's likely someone will chip in to make it less hard.

Another is devstack in general. The tooling used by devs and that used by ops 
are so different as to isolate the devs from ops' pain. If devs used more 
ops-ish tooling, they would hit the same issues and would be more likely to 
find solutions that work for both parties.

A third one is supporting multiple-version upgrades in the gate. I rarely have 
a problem with a cloud whose database is one version back. I have seen lots of 
issues with databases that contain data dating back to when the cloud was 
instantiated and then upgraded multiple times.

Another option is trying to unify/detangle the upgrade procedure. Upgrading the 
compute kit should be one or two commands if you can live with the defaults, 
not weeks of poring through release notes, finding the correct order from pages 
of text, and testing vigorously on test systems.

How about a tool that dumps the database to somewhere temporary, iterates over 
all the upgrade job components, and checks whether the upgrade completes without 
corrupting your database? That takes a while to do manually. Ideally it could 
even upload stack traces back to a bug tracker for attention.
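
As a thinking-out-loud sketch of that trial-upgrade idea (the database name, the 
config path, and the choice of Nova are illustrative only):

    import subprocess

    SCRATCH_DB = "nova_upgrade_check"  # hypothetical scratch database name

    def trial_db_sync():
        # 1. Dump the production database to somewhere temporary.
        dump = subprocess.run(["mysqldump", "nova"],
                              capture_output=True, check=True).stdout
        # 2. Load the dump into a scratch database.
        subprocess.run(["mysql", "-e",
                        "CREATE DATABASE IF NOT EXISTS " + SCRATCH_DB],
                       check=True)
        subprocess.run(["mysql", SCRATCH_DB], input=dump, check=True)
        # 3. Run the new release's migrations against the copy and see whether
        #    they complete; the config file would point at the scratch DB.
        subprocess.run(["nova-manage", "--config-file",
                        "/etc/nova/upgrade-check.conf", "db", "sync"],
                       check=True)

    if __name__ == "__main__":
        trial_db_sync()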

Thanks,
Kevin

From: Davanum Srinivas [dava...@gmail.com]
Sent: Tuesday, November 14, 2017 4:08 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: openstack-oper.
Subject: Re: [openstack-dev] [Openstack-operators] Upstream LTS Releases

On Wed, Nov 15, 2017 at 10:44 AM, John Dickinson <m...@not.mn> wrote:
>
>
> On 14 Nov 2017, at 15:18, Mathieu Gagné wrote:
>
>> On Tue, Nov 14, 2017 at 6:00 PM, Fox, Kevin M <kevin@pnnl.gov> wrote:
>>> The pressure for #2 comes from the inability to skip upgrades and the fact 
>>> that upgrades are hugely time consuming still.
>>>
>>> If you want to reduce the push for number #2 and help developers get their 
>>> wish of getting features into users hands sooner, the path to upgrade 
>>> really needs to be much less painful.
>>>
>>
>> +1000
>>
>> We are upgrading from Kilo to Mitaka. It took 1 year to plan and
>> execute the upgrade. (and we skipped a version)
>> Scheduling all the relevant internal teams is a monumental task
>> because we don't have dedicated teams for those projects and they have
>> other priorities.
>> Upgrading affects a LOT of our systems, some we don't fully have
>> control over. And it can takes months to get new deployment on those
>> systems. (and after, we have to test compatibility, of course)
>>
>> So I guess you can understand my frustration when I'm told to upgrade
>> more often and that skipping versions is discouraged/unsupported.
>> At the current pace, I'm just falling behind. I *need* to skip
>> versions to keep up.
>>
>> So for our next upgrades, we plan on skipping even more versions if
>> the database migration allows it. (except for Nova which is a huge
>> PITA to be honest due to CellsV1)
>> I just don't see any other ways to keep up otherwise.
>
> ?!?!
>
> What does it take for this to never happen again? No operator should need to 
> plan and execute an upgrade for a whole year to upgrade one year's worth of 
> code development.
>
> We don't need new policies, new teams, more releases, fewer releases, or 
> anything like that. The goal is NOT "let's have an LTS release". The goal 
> should be "How do we make sure Mattieu and everyone else in the world can 
> actually deploy and use the software we are writing?"
>
> Can we drop the entire LTS discussion for now and focus on "make upgrades 
> take less than a year" instead? After we solve that, let's come back around 
> to LTS versions, if needed. I know there's already some work around that. 
> Let's focus there and not be distracted about the best bureaucracy for not 
> deleting two-year-old branches.
>
>
> --John

John,

So... Any concrete ideas on how to achieve that?

Thanks,
Dims

>
>
> /me puts on asbestos pants
>
>>
>> --
>> Mathieu
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman

Re: [Openstack-operators] [openstack-dev] Upstream LTS Releases

2017-11-14 Thread Fox, Kevin M
The pressure for #2 comes from the inability to skip upgrades and the fact that 
upgrades are hugely time consuming still.

If you want to reduce the push for number #2 and help developers get their wish 
of getting features into users hands sooner, the path to upgrade really needs 
to be much less painful.

Thanks,
Kevin

From: Erik McCormick [emccorm...@cirrusseven.com]
Sent: Tuesday, November 14, 2017 9:21 AM
To: Blair Bethwaite
Cc: OpenStack Development Mailing List (not for usage questions); 
openstack-oper.
Subject: Re: [openstack-dev] [Openstack-operators]  Upstream LTS Releases

On Tue, Nov 14, 2017 at 11:30 AM, Blair Bethwaite wrote:
> Hi all - please note this conversation has been split variously across
> -dev and -operators.
>
> One small observation from the discussion so far is that it seems as
> though there are two issues being discussed under the one banner:
> 1) maintain old releases for longer
> 2) do stable releases less frequently
>
> It would be interesting to understand if the people who want longer
> maintenance windows would be helped by #2.
>

I would like to hear from people who do *not* want #2 and why not.
What are the benefits of 6 months vs. 1 year? I have heard objections
in the hallway track, but I have struggled to retain the rationale for
more than 10 seconds. I think this may be more of a religious
discussion that could take a while though.

#1 is something we can act on right now with the eventual goal of
being able to skip releases entirely. We are addressing the
maintenance of the old issue right now. As we get farther down the
road of fast-forward upgrade tooling, then we will be able to please
those wishing for a slower upgrade cadence, and those that want to
stay on the bleeding edge simultaneously.

-Erik

> On 14 November 2017 at 09:25, Doug Hellmann  wrote:
>> Excerpts from Bogdan Dobrelya's message of 2017-11-14 17:08:31 +0100:
>>> >> The concept, in general, is to create a new set of cores from these
>>> >> groups, and use 3rd party CI to validate patches. There are lots of
>>> >> details to be worked out yet, but our amazing UC (User Committee) will
>>> >> begin working out the details.
>>> >
>>> > What is the most worrying is the exact "take over" process. Does it mean 
>>> > that
>>> > the teams will give away the +2 power to a different team? Or will our 
>>> > (small)
>>> > stable teams still be responsible for landing changes? If so, will they 
>>> > have to
>>> > learn how to debug 3rd party CI jobs?
>>> >
>>> > Generally, I'm scared of both overloading the teams and losing the 
>>> > control over
>>> > quality at the same time :) Probably the final proposal will clarify it..
>>>
>>> The quality of backported fixes is expected to be a direct (and only?)
>>> interest of those new teams of new cores, coming from users and
>>> operators and vendors. The more parties to establish their 3rd party
>>
>> We have an unhealthy focus on "3rd party" jobs in this discussion. We
>> should not assume that they are needed or will be present. They may be,
>> but we shouldn't build policy around the assumption that they will. Why
>> would we have third-party jobs on an old branch that we don't have on
>> master, for instance?
>>
>>> checking jobs, the better proposed changes communicated, which directly
>>> affects the quality in the end. I also suppose, contributors from ops
>>> world will likely be only struggling to see things getting fixed, and
>>> not new features adopted by legacy deployments they're used to maintain.
>>> So in theory, this works and as a mainstream developer and maintainer,
>>> you need not fear losing control over LTS code :)
>>>
>>> Another question is how to not block all on each other, and not push
>>> contributors away when things are getting awry, jobs failing and merging
>>> is blocked for a long time, or there is no consensus reached in a code
>>> review. I propose the LTS policy to enforce CI jobs be non-voting, as a
>>> first step on that way, and giving every LTS team member core rights
>>> maybe? Not sure if that works though.
>>
>> I'm not sure what change you're proposing for CI jobs and their voting
>> status. Do you mean we should make the jobs non-voting as soon as the
>> branch passes out of the stable support period?
>>
>> Regarding the review team, anyone on the review team for a branch
>> that goes out of stable support will need to have +2 rights in that
>> branch. Otherwise there's no point in saying that they're maintaining
>> the branch.
>>
>> Doug
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> --
> Cheers,
> ~Blairo
>
> ___
> 

Re: [openstack-dev] Logging format: let's discuss a bit about default format, format configuration and so on

2017-11-03 Thread Fox, Kevin M
+1

From: Juan Antonio Osorio [jaosor...@gmail.com]
Sent: Friday, November 03, 2017 3:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Logging format: let's discuss a bit about default 
format, format configuration and so on



On 3 Nov 2017 19:59, "Doug Hellmann" wrote:
Excerpts from Cédric Jeanneret's message of 2017-11-01 14:54:34 +0100:
> Dear Stackers,
>
> While working on my locally deployed Openstack (Pike using TripleO), I
> was a bit struggling with the logging part. Currently, all logs are
> pushed to per-service files, in the standard format "one line per
> entry", plain flat text.
>
> It's nice, but if one is wanting to push and index those logs in an ELK,
> the current, default format isn't really good.
>
> After some discussions about oslo.log, it appears it provides a nice
> JSONFormatter handler¹ one might want to use for each (python) service
> using oslo central library.
>
> A JSON format is really cool, as it's easy to parse for machines, and it
> can be on a multi-line without any bit issue - this is especially
> important for stack traces, as their output is multi-line without real
> way to have a common delimiter so that we can re-format it and feed it
> to any log parser (logstash, fluentd, …).
>
> After some more talks, oslo.log will not provide a unified interface in
> order to output all received logs as JSON - this makes sens, as that
> would mean "rewrite almost all the python logging management
> interface"², and that's pretty useless, since (all?) services have their
> own "logging.conf" file.
>
> That said… to the main purpose of that mail:
>
> - Default format for logs
> A first question would be "are we all OK with the default output format"
> - I'm pretty sure "humans" are happy with that, as it's really
> convenient to read and grep. But on a "standard" Openstack deploy, I'm
> pretty sure one does not have only one controller, one ceph node and one
> compute. Hence comes the log centralization, and with that, the log
> indexation and treatments.
>
> For that, one might argue "I'm using plain files on my logger, and
> grep-ing -r in them". That's a way to do things, and for that, plain,
> flat logs are great.
>
> But… I'm pretty sure I'm not the only one wanting to use some kind of
> ELK cluster for that kind of purpose. So the right question is: what
> about switching the default log format to JSON? On my part, I don't see
> "cons", only "pros", but my judgment is of course biased, as I'm "alone
> in my corner". But what about you, Community?
>
> - Provide a way to configure the output format/handler
> While poking around in the puppet modules code, I didn't find any way to
> set the output handler for the logs. For example, in puppet-nova³ we can
> set a lot of things, but not the useful handler for the output.
>
> It would be really cool to get, for each puppet module, the capability
> to set the handler so that one can just push some stuff in hiera, and
> Voilà, we have JSON logs.
>
> Doing so would allow people to chose between the default  (current)
> output, and something more "computable".

Using the JSON formatter currently requires setting a logging
configuration file using the standard library configuration format
and fully specifying things like log levels, handlers, and output
destination. Would it make sense to add an option in oslo.log to
give deployers an easier way to enable the JSON formatter?
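
For illustration, such a fully specified logging config file might look roughly 
like this (the file path and the nova example are only assumptions for the sake 
of the sketch):

    [loggers]
    keys = root

    [handlers]
    keys = file

    [formatters]
    keys = json

    [logger_root]
    level = INFO
    handlers = file

    [handler_file]
    class = FileHandler
    args = ('/var/log/nova/nova-api.log',)
    formatter = json

    [formatter_json]
    class = oslo_log.formatters.JSONFormatter

The service would then be pointed at it via oslo.log's log_config_append option 
(e.g. log_config_append = /etc/nova/logging.conf in nova.conf).
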
This would actually be very useful.

Doug

>
> Of course, either proposal will require a nice code change in all puppet
> modules (add a new parameter for the foo::logging class, and use that
> new param in the configuration file, and so on), but at least people
> will be able to actually chose.
>
> So, before opening an issue on each launchpad project (that would be…
> long), I'd rather open the discussion in here and, eventually, come to
> some nice, acceptable and accepted solution that would make the
> Openstack Community happy :).
>
> Any thoughts?
>
> Thank you for your attention, feedback and wonderful support for that
> monster project :).
>
> Cheers,
>
> C.
>
>
> ¹
> https://github.com/openstack/oslo.log/blob/master/oslo_log/formatters.py#L166-L235
> ²
> http://eavesdrop.openstack.org/irclogs/%23openstack-oslo/%23openstack-oslo.2017-11-01.log.html#t2017-11-01T13:23:14
> ³ https://github.com/openstack/puppet-nova/blob/master/manifests/logging.pp
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [Openstack-operators][tc] [keystone][all] v2.0 API removal

2017-10-20 Thread Fox, Kevin M
Ok. Cool. Didn't know that. Sounds like all due diligence was done then (and 
maybe plus some :). Thanks for the background info.

Kevin

From: Morgan Fainberg [morgan.fainb...@gmail.com]
Sent: Friday, October 20, 2017 5:16 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Fwd: [Openstack-operators][tc] [keystone][all] 
v2.0 API removal

Let me clarify a few things regarding the V2.0 removal:

* This has been planned for years at this point. At one time (I am
looking for the documentation, once I find it I'll include it on this
thread) we worked with Nova and the TC to set forth a timeline on the
removal. Part of that agreement was that this was a one-time deal. We
would remove the V2.0 API in favor of the v3 API but would never
remove another API going forward.

   A few reasons for removing the V2.0 API that were discussed and
drove the decision:

   1) The V2.0 API had behavior that was explicitly breaking the security model:

   * A user could authenticate with a scope (not the default
domain), which could lead to oddities in enforcement when using v2.0
APIs and introduced a number of edge cases. This could not be fixed
without breaking the V2.0 API contract, and every single change to V3
and its features required a lot of time to ensure V2.0 was not broken
and had appropriate translations to/from the different data formats.

   * The V2.0 AUTH API included the token (secure) data in the URL
path, which means that all logs from apache (or other web servers and
wsgi apps) had to be considered privileged and could not be exposed
for debugging purposes (and in some environments may not be accessed
without significant access controls). This also could not be fixed
without breaking the V2.0 API contract (see the request sketch below).

   * The V2.0 policy was effectively hard coded to
use "admin" and "member" roles. Retrofitting the APIs to fully support
policy was extremely difficult and could easily break default behaviors
in many environments. This was also deemed to be mostly
unfixable without breaking the V2.0 API contract.


 In short, the maintenance burden of the V2.0 API was significant; it was a
lot of work to maintain, especially since the API could not receive any
active development due to lacking basic features introduced in v3.
There were also a significant number of edge cases where v3 had some
very hack-y support for features (required in OpenStack services) via
auth to support the possibility of v2->v3 translations.
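
To make the token-in-the-URL point above concrete, a token validation request 
looks roughly like this in each API version (header values are placeholders):

    # v2.0: the token being validated is part of the URL path, so it ends up
    # in web server access logs
    GET /v2.0/tokens/{token_id}
    X-Auth-Token: {admin_token}

    # v3: both tokens travel in headers, keeping them out of the URL
    GET /v3/auth/tokens
    X-Auth-Token: {service_token}
    X-Subject-Token: {token_being_validated}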


   2) V2.0 had been bit rotting. Many items had limited testing and
were found to be broken. Adding tests that were both V3 and V2.0 aware
added another layer of difficulty in maintaining the API; much of the
time we had to spin many new patches to ensure that we didn't break
v2.0 contracts with a non-breaking v3 change (or, in fixing a v2 API
call, we would be somewhat forced into breaking the API contract).


   3) The Keystone team is acutely aware that this was a painful
transition and made the choice to drop the API even in that light. The
choice was between "breaking the API contract" a number of times versus
lightening the developer load (we are strapped for resources working
on Keystone, as are many services; the overhead and added load make it
mostly untenable) by doing a single (large) change with the
understanding that V3 APIs cannot be removed; the latter was preferable.


The TC agreed to this removal. The service teams agreed to this
removal. This was telegraphed as much as we could via deprecation and
many, many, many discussions on this topic. There really was no good
solution; we took the solution that was the best for OpenStack in our
opinion, based upon the place where Keystone is.

We can confidently commit to the following:
  * v3 APIs (even the ones we dislike) will not go away
  * barring a massive security hole, we will not break the API
contracts on V3 (we may add data, we will not remove/restructure
data)
  * If we implement microversions, you may see API changes (similar to
how nova works), but as of today, we do not implement microversions

We have worked with defcore/refstack, qa teams, all services (I think
we missed one, it has since been fixed), clients, SDK(s), etc to
ensure that as much support as possible is in place to make utilizing
V3 easy.




On Fri, Oct 20, 2017 at 3:50 PM, Fox, Kevin M <kevin@pnnl.gov> wrote:
> No, I'm not saying it's the TC team's job to bludgeon folks.
>
> I'm suggesting that some folks other than Keystone should look at the impact 
> of the final removal of an API that a lot of external clients may be coded 
> against, since it affects all projects and not just Keystone, and have 
> some say on delaying the final removal if appropriate.
>
> I personally would like to see v2 go away. But I get that the impact could be 
> far wider ranging and affect many other teams than just Keystone due to 
>

Re: [openstack-dev] Fwd: [Openstack-operators][tc] [keystone][all] v2.0 API removal

2017-10-20 Thread Fox, Kevin M
No, I'm not saying it's the TC team's job to bludgeon folks.

I'm suggesting that some folks other than Keystone should look at the impact of 
the final removal of an API that a lot of external clients may be coded against, 
since it affects all projects and not just Keystone, and have some say on 
delaying the final removal if appropriate. 

I personally would like to see v2 go away. But I get that the impact could be 
far wider ranging and affect many other teams than just Keystone due to the 
unique position Keystone holds in the architecture, as others have raised.

Ideally, there should be an OpenStack-wide architecture team of some sort to 
handle this kind of thing, I think. Without such an entity, though, the TC is 
probably currently the best place to discuss it.

Thanks,
Kevin

From: Jeremy Stanley [fu...@yuggoth.org]
Sent: Friday, October 20, 2017 10:53 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] Fwd: [Openstack-operators][tc] [keystone][all] 
v2.0API removal

On 2017-10-20 17:15:59 + (+), Fox, Kevin M wrote:
[...]
> I know the TC's been shying away from these sorts of questions,
> but this one has a pretty big impact. TC?
[...]

The OpenStack Technical Committee isn't really a bludgeon with which
to beat teams when someone in the community finds fault with a
decision; it drafts/revises policy and arbitrates disputes between
teams. What sort of action are you seeking in regard to the Keystone
team finally acting this cycle on removal of their long-deprecated
legacy API, and with what choices of theirs do you disagree?

Do you feel the deprecation was not communicated widely enough? Do
you feel that SDKs haven't been updated with sufficient support for
the v3 API? Are you concerned that lack of v2 API support will
prevent organizations running the upcoming Queens release from
qualifying for interoperability trademarks? Something else entirely?
--
Jeremy Stanley

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Fwd: [Openstack-operators][tc] [keystone][all] v2.0 API removal

2017-10-20 Thread Fox, Kevin M
That is a very interesting question.

It comes from the angle of OpenStack the product more than from the standpoint 
of any one OpenStack project.

I know the TC's been shying away from these sorts of questions, but this one 
has a pretty big impact. TC?

Thanks,
Kevin

From: Yaguang Tang [heut2...@gmail.com]
Sent: Thursday, October 19, 2017 7:59 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] Fwd: [Openstack-operators][tc] [keystone][all] v2.0 
API removal

Should this kind of change be discussed with, and agreed to by, the TC and 
User Committee?

-- Forwarded message --
From: Lance Bragstad
Date: Fri, Oct 20, 2017 at 12:08 AM
Subject: [Openstack-operators] [keystone][all] v2.0 API removal
To: "OpenStack Development Mailing List (not for usage questions)" 
>, 
openstack-operat...@lists.openstack.org



Hey all,

Now that we're finishing up the last few bits of v2.0 removal, I'd like to send 
out a reminder that Queens will not include the v2.0 keystone APIs except the 
ec2-api. Authentication and validation of v2.0 tokens have been removed (in 
addition to the public and admin APIs) after a lengthy deprecation period.

Let us know if you have any questions.

Thanks!

___
OpenStack-operators mailing list
openstack-operat...@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators




--
Tang Yaguang



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO][Kolla] Concerns about containers images in DockerHub

2017-10-19 Thread Fox, Kevin M
For kolla, we were thinking about a couple of optimization that should greatly 
reduce the space.

1. Only upload to the hub based on stable versions; the updates are much less 
frequent.
2. Fingerprint the containers, basing it on the rpm/deb list, pip list, and git 
checksums. If the fingerprint is the same, don't re-upload the container; nothing 
really changed but some trivial files or timestamps on files.
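
A rough sketch of that fingerprinting idea (the use of 'docker run' and the
package/pip commands are illustrative; a real implementation would probably
inspect the image during the build instead):

    import hashlib
    import subprocess

    def image_fingerprint(image):
        """Hash the sorted package and pip lists of a built image; if the hash
        matches what was pushed last time, skip the re-upload."""
        parts = []
        for cmd in (["rpm", "-qa"], ["pip", "freeze"]):
            out = subprocess.run(["docker", "run", "--rm", image] + cmd,
                                 capture_output=True, text=True).stdout
            parts.append("\n".join(sorted(out.splitlines())))
        return hashlib.sha256("\n".join(parts).encode()).hexdigest()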

Also, remember the apparent size of a container is not the same as the actual 
size. Due to layering, the actual size is often significantly smaller than what 
shows up in 'docker images'. For example, this 
http://tarballs.openstack.org/kolla-kubernetes/gate/containers/centos-binary-ceph.tar.bz2
 is only 1.2G and contains all the containers needed for a compute kit 
deployment.

For trunk-based builds, it may still be a good idea to only mirror those to 
tarballs.o.o or an OpenStack-provided docker repo that infra has been discussing?

Thanks,
Kevin

From: Gabriele Cerami [gcer...@redhat.com]
Sent: Thursday, October 19, 2017 8:03 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [TripleO][Kolla] Concerns about containers images  
in DockerHub

Hi,

our CI scripts are now automatically building, testing and pushing
approved openstack/RDO services images to public repositories in
dockerhub using ansible docker_image module.

Promotions have had some hiccups, but we're starting to regularly upload
new images every 4 hours.

When we'll get at full speed, we'll potentially have
- 3-4 different sets of images, one per release of openstack (counting a
  EOL release grace period)
- 90-100 different services images per release
- 4-6 different versions of the same image ( keeping older promoted
  images for a while )

At around 300MB per image a possible grand total is around 650GB of
space used.

We don't know if this is acceptable usage of dockerhub space, and for
this we already sent a similar email to Docker support to ask
specifically whether the user would be penalized in any way (e.g. enforced
quotas, rate limiting, blocking). We're still waiting for a reply.

In any case it's critical to keep the usage around the estimate, and to
achieve this we need a way to automatically delete the older images.
The docker_image module does not provide this functionality, and we think
the only way is issuing direct calls to the dockerhub API

https://docs.docker.com/registry/spec/api/#deleting-an-image

The docker_image module's structure doesn't seem to encourage adding
such functionality directly to it, so we may be forced to use the uri
module.
With new images uploaded potentially every 4 hours, this will become a
problem to be solved within the next two weeks.

We'd appreciate any input for an existing, in progress and/or better
solution for bulk deletion, and issues that may arise with our space
usage in dockerhub

Thanks

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [tc] [stable] [tripleo] [kolla] [ansible] [puppet] Proposing changes in stable policy for installers

2017-10-17 Thread Fox, Kevin M
So, my $0.02.

A supported/recent version of a tool to install an unsupported version of 
software is not a bad thing.

OpenStack has a bad reputation (somewhat deservedly) for being hard to upgrade. 
This has mostly gotten better over time but there are still a large number of 
older, unsupported deployments at this point.

Sometimes, burning down the cloud isn't an option and sometimes upgrading in 
place isn't an option either, and they are stuck on an unsupported version.

Being able to deploy, with a more modern installer, the same version of the cloud 
you're running in production and shift the load to it (a sideways upgrade), but 
then have an upgrade path provided by the tool, would be a great thing.

Thanks,
Kevin

From: Michał Jastrzębski [inc...@gmail.com]
Sent: Monday, October 16, 2017 3:50 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [tc] [stable] [tripleo] [kolla] [ansible] [puppet] 
Proposing changes in stable policy for installers

So my 0.02$

The problem with handling Newton goes beyond deployment tools. Yes, it's
popular to use, but if our dependencies (the OpenStack services
themselves) are unmaintained, so should we be. If we say "we support
Newton" in deployment tools, we make a kind of promise we can't keep. If,
for example, there is a CVE in Nova that affects Newton, there is nothing
we can do about it and our "support" is meaningless.

Not having an LTS kind of model has been an issue for OpenStack operators
forever, but that's not a problem we can solve in deployment tools
(although we are often asked to, because our communities are largely
operators and we're arguably the projects closest to operators).

I, for one, think we should keep current stable policy, not make
exception for deployment tools, and address this issue across the
board. What Emilien is describing is real issue that hurts operators.

On 16 October 2017 at 15:38, Emilien Macchi  wrote:
> On Mon, Oct 16, 2017 at 4:27 AM, Thierry Carrez  wrote:
>> Emilien Macchi wrote:
>>> [...]
>>> ## Proposal
>>>
>>> Proposal 1: create a new policy that fits for projects like installers.
>>> I kicked-off something here: https://review.openstack.org/#/c/511968/
>>> (open for feedback).
>>> Content can be read here:
>>> http://docs-draft.openstack.org/68/511968/1/check/gate-project-team-guide-docs-ubuntu-xenial/1a5b40e//doc/build/html/stable-branches.html#support-phases
>>> Tag created here: https://review.openstack.org/#/c/511969/ (same,
>>> please review).
>>>
>>> The idea is really to not touch the current stable policy and create a
>>> new one, more "relaxed", that suits projects like installers well.
>>>
>>> Proposal 2: change the current policy and be more relaxed for projects
>>> like installers.
>>> I haven't worked on this proposal, although it was something I was
>>> considering doing first, because I realized it could bring confusion
>>> about which projects actually follow the real stable policy and which
>>> ones have exceptions.
>>> That's why I thought having a dedicated tag would help to separate them.
>>>
>>> Proposal 3: no change anywhere; projects like installers can't claim a
>>> stability tag (not my best option in my opinion).
>>>
>>> Anyway, feedback is welcome, I'm now listening. If you work on Kolla,
>>> TripleO, OpenStack-Ansible, PuppetOpenStack (or any project who has
>>> this need), please get involved in the review process.
>>
>> My preference goes to proposal 1, however rather than call it "relaxed"
>> I would make it specific to deployment/lifecycle or cycle-trailing
>> projects.
>>
>> Ideally this policy could get adopted by any such project. The
>> discussion started on the review and it's going well, so let's see where
>> it goes :)
>
> Thierry, when I read your comment on Gerrit I understand you prefer to
> amend the existing policy and just make a note for installers (which
> is I think the option #2 that I proposed). Can you please confirm
> that?
> So far I see option #1 has large consensus here, I'll wait for
> Thierry's answer to continue to work on it.
>
> Thanks for the feedback so far!
> --
> Emilien Macchi
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] TC Candidates: what does an OpenStack user look like?

2017-10-12 Thread Fox, Kevin M
I slightly disagree. I think there are 3 sets of users not 2...
Operators, Tenant Users, and Tenant Application Developers.

Tenant Application Developers develop software that the Tenant Users deploy in 
their tenant.

Most OpenStack developers consider the latter two to always be the same person, 
and that has made it very difficult for Tenant Users who aren't Tenant 
Application Developers to use OpenStack.

Sometimes Tenant Users are pure ops, not devops. Sometimes they are not even 
traditional CS folks but physicists, biologists, etc.

Thanks,
Kevin


From: Fei Long Wang [feil...@catalyst.net.nz]
Sent: Thursday, October 12, 2017 4:16 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all][tc] TC Candidates: what does an OpenStack 
user look like?

That's one of the points I mentioned in my candidacy: whom we're building the 
software for. As a service maintainer and upstream developer of a public cloud 
based on OpenStack, I would say we sometimes mix up the term 'user'. The user in 
the OpenStack world includes operators and tenant users (developers or devops 
using the cloud). We have done a good job of getting feedback from operators 
with the user survey, the operators mailing list, etc. But we don't have a good 
way to hear the voices of those tenant users, including developers and devops. 
And that's very important for the near future of OpenStack.


On 13/10/17 10:34, Emilien Macchi wrote:

Replying on top of Mohammed, since I like his answer and want to add
some comments.

On Thu, Oct 12, 2017 at 12:07 PM, Mohammed Naser wrote:
[...]



Ideally, I think that OpenStack should be targeted to become a core
infrastructure tool that's part of organizations all around the world
which can deliver both OpenStack-native services (think Nova for VMs,
Cinder for block storage) and OpenStack-enabled services (think Magnum
which deployed Kubernetes integrated with OpenStack, Sahara which
deploys Big Data software integrated with Swift).

This essentially makes OpenStack sit at the heart of the operations of
every organization (ideally!).  It also translates well with
OpenStack's goal of providing a unified set of APIs and interfaces
which are always predictable to do the operations that you expect them
to do.  With time, this will make OpenStack much more accessible, as
it becomes very easy to interact with as any individuals move from one
organization to another.


I agree a lot with Mohammed here. I also like to think we build
OpenStack to place it at the heart of all organizations consuming
infrastructure at any scale or any architecture.
It can be some pieces from OpenStack or a whole set of services
working together.
Also, like he said, providing a set of APIs that are well known; I would
add "stable APIs" (see discussions with Glare / Glance) and ensuring
some longevity for our end-users.

Having talked with some users, some folks say "OpenStack becomes
boring and we like it". Pursuing the discussion, they like to have
long life API support and stability in how they operate. I think at a
TC level we need to make sure we can both innovate and maintain this
stability at a certain level.

[...]



--
Cheers & Best regards,
Feilong Wang (王飞龙)
--
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flw...@catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
--
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][nova] Persistent application credentials

2017-10-10 Thread Fox, Kevin M
Big +1 for reevaluating the bigger picture. We have a pile of APIs that, taken 
together, don't always form the most useful whole, due to a lack of big-picture 
analysis.

+1 to thinking through the dev's/devops use case.

Another one to really think over is the single user that != application developer. 
IE, a pure user type person deploying a cloud app in their tenant, written by a dev 
not employed by the user's company. The User shouldn't have to go to the Operator to 
provision service accounts and other things. The App dev should be able to give 
everything needed to let OpenStack launch, say, a heat template that provisions 
the service accounts for the User, not making the user twiddle the api 
themselves. It should be a "here, launch this" kind of thing, and they fill out 
the heat form, and out pops a working app. If they have to go provision a bunch 
of stuff themselves before passing stuff to the form, game over. Likewise, if 
they have to look at yaml, game over. How do app credentials fit into this?

Thanks,
Kevin


From: Zane Bitter [zbit...@redhat.com]
Sent: Monday, October 09, 2017 9:39 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [keystone][nova] Persistent application credentials

On 12/09/17 18:58, Colleen Murphy wrote:
> While it's fresh in our minds, I wanted to write up a short recap of
> where we landed in the Application Credentials discussion in the BM/VM
> room today. For convenience the (as of yet unrevised) spec is here:

Thanks so much for staying on this Colleen, it's tremendously helpful to
have someone from the core team keeping an eye on it :)

> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/backlog/application-credentials.html
>
> Attached are images of the whiteboarded notes.
>
> On the contentious question of the lifecycle of an application
> credential, we re-landed in the same place we found ourselves in when
> the spec originally landed, which is that the credential becomes invalid
> when its creating user is disabled or deleted. The risk involved in
> allowing a credential to continue to be valid after its creating user
> has been disabled is not really surmountable, and we are basically
> giving up on this feature. The benefits we still get from not having to
> embed user passwords in config files, especially for LDAP or federated
> users, is still a vast improvement over the situation today, as is the
> ability to rotate credentials.

OK, there were lots of smart people in the room so I trust that y'all
made the right decision.

I'd just like to step back for a moment though and ask: how exactly do
we expect users to make use of Keystone?

When I think about a typical OpenStack user of the near future, they
look something like this: there's a team with a handful of developers,
with maybe one or two devops engineers. This team is responsible for a
bunch of applications, at various stages in their lifecycles. They work
in a department with several such teams, in an organisation with several
such departments. People regularly join or leave the team - whether
because they join or leave the organisation or just transfer between
different teams. The applications are deployed with Heat and are at
least partly self-managing (e.g. they use auto-scaling, or auto-healing,
or have automated backups, or all of the above), but also require
occasional manual intervention (beyond just a Heat stack-update). The
applications may be deployed to a private OpenStack cloud, a public
OpenStack cloud, or both, with minimal differences in how they work when
moving back and forth.

(Obviously the beauty of Open Source is that we don't think about only
one set of users. But I think if we can serve this set of users as a
baseline then we have built something pretty generically useful.)

So my question is: how do we recommend these users use Keystone? We
definitely _can_ support them. But the most workable way I can think of
would be to create a long-lived application user account for each
project in LDAP/ActiveDirectory/whatever and have that account manage
the application. Then things will work basically the same way in the
public cloud, where you also get a user per project. Hopefully some
auditability is maintained by having Jenkins/Zuul/Solum/whatever do the
pushing of changes to Heat, although realistically many users will not
be that sophisticated. Once we have application credentials, the folks
doing manual intervention would be able to do so in the same way on
public clouds as on private clouds, without being given the account
credentials.
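
To make "application credentials" concrete, here is a minimal sketch of how such
a team might consume them, assuming the API lands roughly as described in the
spec above (the endpoint and payload shapes below are illustrative, not
authoritative):

    # Hypothetical sketch: create an application credential for a CI/deployment
    # pipeline, then authenticate with it instead of embedding a user password.
    import requests

    KEYSTONE = "https://keystone.example.com/v3"   # assumed endpoint
    USER_ID = "abc123"                             # the creating user's id
    USER_TOKEN = "gAAAA..."                        # a normal user-scoped token

    resp = requests.post(
        KEYSTONE + "/users/" + USER_ID + "/application_credentials",
        headers={"X-Auth-Token": USER_TOKEN},
        json={"application_credential": {
            "name": "ci-deployer",
            "description": "used by Zuul to run heat stack-update",
        }})
    app_cred = resp.json()["application_credential"]

    # Later, the CI system authenticates with only the id + secret -- no LDAP
    # or federated password sitting in a config file.
    auth_req = {"auth": {"identity": {
        "methods": ["application_credential"],
        "application_credential": {"id": app_cred["id"],
                                   "secret": app_cred["secret"]}}}}
    token_resp = requests.post(KEYSTONE + "/auth/tokens", json=auth_req)
    print(token_resp.headers.get("X-Subject-Token"))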

Some observations about this scenario:
* The whole user/role infrastructure is completely unused - 'Users' are
1:1 with projects. We might as well not have built it.
* Having Keystone backed by LDAP/ActiveDirectory is arguably worse than
useless - it just means there are two different places to set things up
when creating a project and an extra layer of indirection. (I won't say
we might as well 

Re: [openstack-dev] Supporting SSH host certificates

2017-10-09 Thread Fox, Kevin M
I don't think it's unfair to compare against k8s in this case. You have to 
follow the same kinds of steps as an admin provisioning a k8s compute node as 
you do an openstack compute node. The main difference, I think, is that they make 
use of the infrastructure that was put in place by the operator, making it 
available to the user in a more friendly way, while currently we ask the user 
to manually piece together a secure path themselves utilizing back channels 
that the operator secured (consoles).

As far as console scraping goes, as standard a practice as it is, it isn't very well 
adopted. Most folks I've seen just ignore the ssh stuff entirely and live with 
the man-in-the-middle risk. So, while it's a standard, it's an infrequently used 
one, IMO.

There's a temporal issue too. Standing up a new compute node happens rarely. 
Standing up a new vm should be relatively frequent. As an operator, I'd be ok 
taking on the one-time cost burden of setting up the compute nodes if I didn't 
have to worry so much about users doing bad things with ssh.

Thanks,
Kevin

From: Clint Byrum [cl...@fewbar.com]
Sent: Monday, October 09, 2017 1:42 PM
To: openstack-dev
Subject: Re: [openstack-dev] Supporting SSH host certificates

And k8s has the benefit of already having been installed with certs that
had to get there somehow.. through a trust bootstrap.. usually SSH. ;)

Excerpts from Fox, Kevin M's message of 2017-10-09 17:37:17 +:
> Yeah, there is a way to do it today. it really sucks though for most users. 
> Due to the complexity of doing the task though, most users just have gotten 
> into the terrible habit of ignoring the "this host's ssh key changed" and 
> just blindly accepting the change. I kind of hate to say it this way, but 
> because of the way things are done today, OpenStack is training folks to 
> ignore man-in-the-middle attacks. This is not good. We shouldn't just shrug 
> it off and say folks should be more careful. We should try and make the edge 
> less sharp so they are less likely to stab themselves, and later, give 
> OpenStack a bad name because OpenStack was involved.
>

I agree that we could do better.

I think there _is_ a standardized method which is to print the host
public keys to console, and scrape them out on first access.
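
For example, a rough sketch of that scrape with python-novaclient (the marker
strings cloud-init prints can differ between versions, so treat them as an
assumption):

    # Hedged sketch: pull the instance's console log and extract the SSH host
    # keys cloud-init prints on first boot, instead of trusting the key blindly
    # on first connect. Endpoint/credential values are placeholders.
    from keystoneauth1 import loading, session
    from novaclient import client as nova_client

    loader = loading.get_plugin_loader("password")
    auth = loader.load_from_options(
        auth_url="https://keystone.example.com/v3",
        username="demo", password="secret", project_name="demo",
        user_domain_id="default", project_domain_id="default")
    nova = nova_client.Client("2", session=session.Session(auth=auth))

    console = nova.servers.get("my-server-uuid").get_console_output(length=2000)

    in_block = False
    for line in console.splitlines():
        if "BEGIN SSH HOST KEY KEYS" in line:    # marker printed by cloud-init
            in_block = True
        elif "END SSH HOST KEY KEYS" in line:
            in_block = False
        elif in_block:
            print(line)   # e.g. "ssh-ed25519 AAAA..." -> append to known_hosts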

> (Yeah, I get it is not exactly OpenStack's fault that they use it in an 
> unsafe manner. But still, if OpenStack can do something about it, it would be 
> better for everyone involved)
>

We could do better though. We could have an API for that.

> This is one thing I think k8s is doing really well. kubectl exec uses 
> the chain of trust built up from the user all the way to the pod. There isn't 
> anything manual the user has to do to secure the path. OpenStack really could 
> benefit from something similar for client to vm.
>

This is an unfair comparison. k8s is running in the user space, and as
such rides on the bootstrap trust of whatever was used to install it.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Supporting SSH host certificates

2017-10-09 Thread Fox, Kevin M
Yeah, there is a way to do it today. It really sucks though for most users. Due 
to the complexity of doing the task though, most users just have gotten into 
the terrible habit of ignoring the "this host's ssh key changed" warning and 
just blindly accepting the change. I kind of hate to say it this way, but because 
of the way things are done today, OpenStack is training folks to ignore 
man-in-the-middle attacks. This is not good. We shouldn't just shrug it off and say folks 
should be more careful. We should try and make the edge less sharp so they are 
less likely to stab themselves, and later, give OpenStack a bad name because 
OpenStack was involved.

(Yeah, I get it is not exactly OpenStack's fault that they use it in an unsafe 
manner. But still, if OpenStack can do something about it, it would be better 
for everyone involved)

This is one thing I think k8s is doing really well. kubectl exec uses 
the chain of trust built up from the user all the way to the pod. There isn't 
anything manual the user has to do to secure the path. OpenStack really could 
benefit from something similar for client to vm.

Thanks,
Kevin

From: Clint Byrum [cl...@fewbar.com]
Sent: Friday, October 06, 2017 3:24 PM
To: openstack-dev
Subject: Re: [openstack-dev] Supporting SSH host certificates

Excerpts from Giuseppe de Candia's message of 2017-10-06 13:49:43 -0500:
> Hi Clint,
>
> Isn't user-data by definition available via the Metadata API, which isn't
> considered secure:
> https://wiki.openstack.org/wiki/OSSN/OSSN-0074
>

Correct! The thinking is to account for the MITM attack vector, not
host or instance security as a whole. One would hope the box comes up
in a mostly drone-like state until it can be hardened with a new secret
host key.

> Or is there a way to specify that certain user-data should only be
> available via config-drive (and not metadata api)?
>
> Otherwise, the only difference I see compared to using Meta-data is that
> the process you describe is driven by the user vs. automated.
>
> Regarding the extra plumbing, I'm not trying to avoid it. I'm thinking to
> eventually tie this all into Keystone. For example, a project should have
> Host CA and User CA keys. Let's assume OpenStack manages these for now,
> later we can consider OpenStack simply proxying signature requests and
> vouching that a public key does actually belong to a specific instance (and
> host-name) or Keystone user. So what I think should happen is when a
> Project is enabled for SSHaaS support, any VM instance automatically gets
> host certificate, authorized principal files based on Keystone roles for
> the project, and users can call an API (or Dashboard form) to get a public
> key signed (and assigned appropriate SSH principals).
>

Fascinating, but it's hard for me to get excited about this when I can
just handle MITM security myself.

Note that the other existing techniques are simpler too. Most instances
will print the public host key to the console. The API offers console
access, so it can be scraped for the host key.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [policy] AWS IAM session

2017-10-04 Thread Fox, Kevin M
Yeah. Very interesting. Thanks for sharing.

Kevin

From: Adam Heczko [ahec...@mirantis.com]
Sent: Wednesday, October 04, 2017 2:18 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [policy] AWS IAM session

Hi Devdatta, excellent post on IAM models.
Thank you!

On Wed, Oct 4, 2017 at 10:59 PM, Devdatta Kulkarni wrote:
+1

I spent some time recently studying IAM models of AWS and GCP.
Based on this I had created following post comparing and summarizing the two 
models at high-level:

http://devcentric.io/2017/07/13/comparing-iam-models-of-aws-and-gcp/

Thought of sharing it here as it may help with big-picture comparison of the 
two models.

Best regards,
Devdatta


On Wed, Oct 4, 2017 at 11:12 AM, Kristi Nikolla wrote:
+1

--
  Kristi Nikolla
  Software Engineer @ massopen.cloud
  kri...@nikolla.me

On Wed, Oct 4, 2017, at 10:08 AM, Zane Bitter wrote:
> On 03/10/17 16:08, Lance Bragstad wrote:
> > Hey all,
> >
> > It was mentioned in today's keystone meeting [0] that it would be useful
> > to go through AWS IAM (or even GKE) as a group. With all the recent
> > policy discussions and work, it seems useful to get our eyes on another
> > system. The idea would be to spend time using a video conference/screen
> > share to go through and play with policy together. The end result should
> > keep us focused on the implementations we're working on today, but also
> > provide clarity for the long-term vision of OpenStack's RBAC system.
> >
> > Are you interested in attending? If so, please respond to the thread.
> > Once we have some interest, we can gauge when to hold the meeting, which
> > tools we can use, and setting up a test IAM account.
>
> +1, I'd like to attend this.
>
> Also I highly recommend
> http://start.jcolemorrison.com/aws-iam-policies-in-a-nutshell/ over the
> actual AWS docs as a compact reference.
>
> - ZB
>
> > Thanks,
> >
> > Lance
> >
> > [0]
> > http://eavesdrop.openstack.org/meetings/keystone/2017/keystone.2017-10-03-18.00.log.html#l-119
> >
> >
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: 
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev




--
Adam Heczko
Security Engineer @ Mirantis Inc.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] containerized undercloud in Queens

2017-10-04 Thread Fox, Kevin M
FYI, a container with net=host runs exactly as if it were running outside of a 
container with respect to iptables/networking, so that should not be an issue. 
If it can be done on the host, it should be able to happen in a container.
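
For instance, a quick way to see this with the docker Python SDK (image and
command below are just placeholders):

    # Minimal sketch: compare the interfaces a container sees with and without
    # net=host. With network_mode="host" the container shares the host's
    # network namespace, so interfaces/iptables look exactly like the host's
    # (manipulating iptables would additionally need NET_ADMIN or --privileged).
    import docker

    client = docker.from_env()

    isolated = client.containers.run("busybox", "ip addr", remove=True)
    on_host = client.containers.run("busybox", "ip addr",
                                    network_mode="host", remove=True)

    print(isolated.decode())   # typically just lo + eth0
    print(on_host.decode())    # the host's full interface list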

Thanks,
Kevin

From: Dan Prince [dpri...@redhat.com]
Sent: Wednesday, October 04, 2017 9:50 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] containerized undercloud in Queens



On Wed, Oct 4, 2017 at 9:10 AM, Dmitry Tantsur wrote:
(top-posting, as it is not a direct response to a specific line)

This is your friendly reminder that we're not quite near containerized 
ironic-inspector. The THT for it has probably never been tested at all, and the 
iptables magic we do may simply not be containers-compatible. Milan would 
appreciate any help with his ironic-inspector rework.


Thanks Dmitry. Exactly the update I was looking for. Look forward to syncing w/ 
Milan on this.

Dan

Dmitry


On 10/04/2017 03:00 PM, Dan Prince wrote:
On Tue, 2017-10-03 at 16:03 -0600, Alex Schultz wrote:
On Tue, Oct 3, 2017 at 2:46 PM, Dan Prince wrote:


On Tue, Oct 3, 2017 at 3:50 PM, Alex Schultz wrote:

On Tue, Oct 3, 2017 at 11:12 AM, Dan Prince wrote:
On Mon, 2017-10-02 at 15:20 -0600, Alex Schultz wrote:
Hey Dan,

Thanks for sending out a note about this. I have a few
questions
inline.

On Mon, Oct 2, 2017 at 6:02 AM, Dan Prince wrote:
One of the things the TripleO containers team is planning
on
tackling
in Queens is fully containerizing the undercloud. At the
PTG we
created
an etherpad [1] that contains a list of features that need
to be
implemented to fully replace instack-undercloud.


I know we talked about this at the PTG and I was skeptical
that this
will land in Queens. With the exception of the Container's
team
wanting this, I'm not sure there is an actual end user who is
looking
for the feature so I want to make sure we're not just doing
more work
because we as developers think it's a good idea.

I've heard from several operators that they were actually
surprised we
implemented containers in the Overcloud first. Validating a new
deployment framework on a single node Undercloud (for
operators) before
overtaking their entire cloud deployment has a lot of merit to
it IMO.
When you share the same deployment architecture across the
overcloud/undercloud it puts us in a better position to decide
where to
expose new features to operators first (when creating the
undercloud or
overcloud for example).

Also, if you read my email again I've explicitly listed the
"Containers" benefit last. While I think moving the undercloud
to
containers is a great benefit all by itself this is more of a
"framework alignment" in TripleO and gets us out of maintaining
huge
amounts of technical debt. Re-using the same framework for the
undercloud and overcloud has a lot of merit. It effectively
streamlines
the development process for service developers, and 3rd parties
wishing
to integrate some of their components on a single node. Why be
forced
to create a multi-node dev environment if you don't have to
(aren't
using HA for example).

Lets be honest. While instack-undercloud helped solve the old
"seed" VM
issue it was outdated the day it landed upstream. The entire
premise of
the tool is that it uses old style "elements" to create the
undercloud
and we moved away from those as the primary means driving the
creation
of the Overcloud years ago at this point. The new
'undercloud_deploy'
installer gets us back to our roots by once again sharing the
same
architecture to create the over and underclouds. A demo from
long ago
expands on this idea a bit: https://www.youtube.com/watch?v=y1qMDLAf26Q&t=5s

In short, we aren't just doing more work because developers
think it is
a good idea. This has potential to be one of the most useful
architectural changes in TripleO that we've made in years.
Could
significantly decrease our CI resources if we use it to
replace the
existing scenarios jobs which take multiple VMs per job. It is a building
block we could use for other features like an HA undercloud.
And yes,
it does also have a huge impact on developer velocity in that
many of
us already prefer to use the tool as a means of streamlining
our
dev/test cycles to minutes instead of hours. Why spend hours
running
quickstart Ansible scripts when in many cases you can just
doit.sh: https://github.com/dprince/undercloud_containers/blob/master/doit.sh


So like I've repeatedly said, I'm not completely against it, as I agree
that what we have is not ideal. I'm not -2, I'm -1 pending additional
information. I'm trying to be realistic and 

Re: [openstack-dev] Supporting SSH host certificates

2017-09-29 Thread Fox, Kevin M
https://review.openstack.org/#/c/93/

From: Giuseppe de Candia [giuseppe.decan...@gmail.com]
Sent: Friday, September 29, 2017 1:05 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Supporting SSH host certificates

Ihar, thanks for pointing that out - I'll definitely take a close look.

Jon, I'm not very familiar with Barbican, but I did assume the full 
implementation would use Barbican to store private keys. However, in terms of 
actually getting a private key (or SSH host cert) into a VM instance, Barbican 
doesn't help. The instance needs permission to access secrets stored in 
Barbican. The main question of my e-mail is: how do you inject a credential in 
an automated but secure way? I'd love to hear ideas - in the meantime I'll 
study Ihar's link.

thanks,
Pino
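
For reference, the CA-side signing step under discussion is mechanically simple;
a hedged sketch shelling out to ssh-keygen (paths, identity and principals are
placeholders):

    # Sketch of what a per-project CA service would do with a host's public key.
    import subprocess

    CA_KEY = "/etc/ssh-ca/host_ca"          # per-project CA private key
    HOST_PUBKEY = "/tmp/instance_key.pub"   # public key received from the instance
    HOSTNAME = "web01.proj.example.com"

    subprocess.run(
        ["ssh-keygen",
         "-s", CA_KEY,      # sign with this CA key
         "-I", HOSTNAME,    # certificate identity
         "-h",              # host (not user) certificate
         "-n", HOSTNAME,    # valid principals
         "-V", "+52w",      # validity period
         HOST_PUBKEY],
        check=True)

    # Produces /tmp/instance_key-cert.pub, which goes back to the instance as
    # the HostCertificate in sshd_config. Clients then need a single known_hosts
    # line of the form:
    #   @cert-authority *.proj.example.com <contents of host_ca.pub>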



On Fri, Sep 29, 2017 at 2:49 PM, Ihar Hrachyshka wrote:
What you describe (at least the use case) seems to resemble
https://review.openstack.org/#/c/456394/ This work never moved
anywhere since the spec was posted though. You may want to revive the
discussion in scope of the spec.

Ihar

On Fri, Sep 29, 2017 at 12:21 PM, Giuseppe de Candia wrote:
> Hi Folks,
>
>
>
> My intent in this e-mail is to solicit advice for how to inject SSH host
> certificates into VM instances, with minimal or no burden on users.
>
>
>
> Background (skip if you're already familiar with SSH certificates): without
> host certificates, when clients ssh to a host for the first time (or after
> the host has been re-installed), they have to hope that there's no man in
> the middle and that the public key being presented actually belongs to the
> host they're trying to reach. The host's public key is stored in the
> client's known_hosts file. SSH host certicates eliminate the possibility of
> Man-in-the-Middle attack: a Certificate Authority public key is distributed
> to clients (and written to their known_hosts file with a special syntax and
> options); the host public key is signed by the CA, generating an SSH
> certificate that contains the hostname and validity period (among other
> things). When negotiating the ssh connection, the host presents its SSH host
> certificate and the client verifies that it was signed by the CA.
>
>
>
> How to support SSH host certificates in OpenStack?
>
>
>
> First, let's consider doing it by hand, instance by instance. The only
> solution I can think of is to VNC to the instance, copy the public key to my
> CA server, sign it, and then write the certificate back into the host (again
> via VNC). I cannot ssh without risking a MITM attack. What about using Nova
> user-data? User-data is exposed via the metadata service. Metadata is
> queried via http (reply transmitted in the clear, susceptible to snooping),
> and any compute node can query for any instance's meta-data/user-data.
>
>
>
> At this point I have to admit I'm ignorant of details of cloud-init. I know
> cloud-init allows specifying SSH private keys (both for users and for SSH
> service). I have not yet studied how such information is securely injected
> into an instance. I assume it should only be made available via ConfigDrive
> rather than metadata-service (again, that service transmits in the clear).
>
>
>
> What about providing SSH host certificates as a service in OpenStack? Let's
> keep out of scope issues around choosing and storing the CA keys, but the CA
> key is per project. What design supports setting up the SSH host certificate
> automatically for every VM instance?
>
>
>
> I have looked at Vendor Data and I don't see a way to use that, mainly
> because 1) it doesn't take parameters, so you can't pass the public key out;
> and 2) it's queried over http, not https.
>
>
>
> Just as a feasibility argument, one solution would be to modify Nova compute
> instance boot code. Nova compute can securely query a CA service asking for
> a triplet (private key, public key, SSH certificate) for the specific
> hostname. It can then inject the triplet using ConfigDrive. I believe this
> securely gets the private key into the instance.
>
>
>
> I cannot figure out how to get the equivalent functionality without
> modifying Nova compute and the boot process. Every solution I can think of
> risks either exposing the private key or vulnerability to a MITM attack
> during the signing process.
>
>
>
> Your help is appreciated.
>
>
>
> --Pino
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: 
> openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-29 Thread Fox, Kevin M
It's easier to convince the developer's employer to keep paying the developer 
when their users (operators) want to use their stuff. It's a longer-term 
strategic investment, but a critical one. I think this has been one of the 
things holding OpenStack back of late. The developers continuously push off 
hard issues to operators that may have other, better solutions. I don't feel 
this is out of malice, but more out of a lack of understanding of what operators 
do. The operators are starting to push back and are looking at alternatives 
now. We need to break this trend before it accelerates and more developers can 
no longer afford to work on OpenStack. I'd be happy as an operator to work with 
developers to identify pain points so they can be resolved in more 
operator-friendly ways.

Thanks,
Kevin

From: Ben Nemec [openst...@nemebean.com]
Sent: Friday, September 29, 2017 6:43 AM
To: OpenStack Development Mailing List (not for usage questions); Rochelle 
Grober
Subject: Re: [openstack-dev] [ptg] Simplification in OpenStack

On 09/26/2017 09:13 PM, Rochelle Grober wrote:
> Clint Byrum wrote:
>> Excerpts from Jonathan Proulx's message of 2017-09-26 16:01:26 -0400:
>>> On Tue, Sep 26, 2017 at 12:16:30PM -0700, Clint Byrum wrote:
>>>
>>> :OpenStack is big. Big enough that a user will likely be fine with
>>> learning :a new set of tools to manage it.
>>>
>>> New users in the startup sense of new, probably.
>>>
>>> People with entrenched environments, I doubt it.
>>>
>>
>> Sorry no, I mean everyone who doesn't have an OpenStack already.
>>
>> It's nice and all, if you're a Puppet shop, to get to use the puppet modules.
>> But it doesn't bring you any closer to the developers as a group. Maybe a few
>> use Puppet, but most don't. And that means you are going to feel like
>> OpenStack gets thrown over the wall at you once every
>> 6 months.
>>
>>> But OpenStack is big. Big enough I think all the major config systems
>>> are fairly well represented, so whether I'm right or wrong this
>>> doesn't seem like an issue to me :)
>>>
>>
>> They are. We've worked through it. But that doesn't mean potential users
>> are getting our best solution or feeling well integrated into the community.
>>
>>> Having common targets (constellations, reference architectures,
>>> whatever) so all the config systems build the same things (or a subset
>>> or superset of the same things) seems like it would have benefits all
>>> around.
>>>
>>
>> It will. It's a good first step. But I'd like to see a world where 
>> developers are
>> all well versed in how operators actually use OpenStack.
>
> Hear, hear!  +1000  Take a developer to work during peak operations.

Or anytime really.  One of the best experiences I had was going on-site
to some of our early TripleO users and helping them through the install
process.  It was eye-opening to see someone who wasn't already immersed
in the project try to use it.  In a relatively short time they pointed
out a number of easy opportunities for simplification (why is this two
steps instead of one?  Umm, no good reason actually.).

I've pushed for us to do more of that sort of thing, but unfortunately
it's a hard sell to take an already overworked developer away from their
day job for a week to focus on one specific user. :-/

>
> For Walmart, that would be Black Firday/Cyber Monday.
> For schools, usually a few days into the new session.
> For others, each has a time when things break more.  Having a developer 
> experience what operators do to predict/avoid/recover/work around the normal 
> state of operations would help each to understand the macro work flows.  
> Those are important, too.  Full stack includes Ops.
>
> < Snark off />
>
> --Rocky
>
>>
>> __
>> 
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-
>> requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack-operators] Case studies on Openstack HA architecture

2017-08-28 Thread Fox, Kevin M
Kolla has various containerization tools: one based on Ansible, another based 
on Kubernetes.

From: Imtiaz Chowdhury [imtiaz.chowdh...@workday.com]
Sent: Monday, August 28, 2017 5:24 PM
To: Curtis
Cc: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] Case studies on Openstack HA architecture

Thanks Curtis, Robert, David and Mohammed for your responses.

As a follow-up question, do you use any deployment automation tools for setting 
up the HA control plane? I can see the value of deploying each service in a 
separate virtual environment or container, but automating such a deployment 
requires developing some new tools. OpenStack-Ansible is one potential 
deployment tool that I am aware of, but it has limited support for CentOS.

Imtiaz

On 8/28/17, 2:23 PM, "Curtis"  wrote:

On Fri, Aug 25, 2017 at 6:11 PM, Imtiaz Chowdhury wrote:
> Hi Openstack operators,
>
>
>
> Most Openstack HA deployment use 3 node database cluster, 3 node rabbitMQ
> cluster and 3 Controllers. I am wondering whether there are any studies 
done
> that show the pros and cons of co-locating database and messaging service
> with the Openstack control services.  In other words, I am very interested
> in learning about advantages and disadvantages, in terms of ease of
> deployment, upgrade and overall API performance, of having 3 all-in-one
> Openstack controller over a more distributed deployment model.

I'm not aware of any actual case studies, but this is the (current)
default model for tripleo and its downstream product, so there would
be a lot of deployments like this out there in the wild. In the
default deployment everything but compute is on these 3 nodes running
on the physical OS.

Do you mean 3 physical servers with everything running on the physical OS?

My opinion is that 3 physical nodes to run all the control plane
services is quite common, but in custom deployments I either run vms
and containers on those or just containers. I'd use at least lxc to
segregate services into their own containers.

I would also suggest that using those same physical servers as
north/south "network nodes" (which you probably don't have as I
believe workday is a big opencontrail user) or hosts for stateful
metric systems (ie. mongodb) can cause issues performance wise, but
co-located mysql/galera and rabbit on the same nodes as the rest of
the openstack control plane hasn't been a problem for me yet, but with
containers I could split them out fairly easily if needed.

Thanks,
Curtis.

>
>
>
> References to any work done in this area will be highly appreciated.
>
>
>
> Thanks,
> Imtiaz
>
>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>



--
Blog: serverascode.com


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [openstack-dev] [kolla-kubernetes] Proposing Rich Wellum to core team

2017-08-14 Thread Fox, Kevin M
+1

From: Surya Prakash Singh [surya.si...@nectechnologies.in]
Sent: Monday, August 14, 2017 2:25 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [kolla-kubernetes] Proposing Rich Wellum to core team

+1

from my side too :)
Nice work @Rich.
I know I am not a core in kolla-k8s, but I tested the quite cool tool he developed 
for AIO OpenStack deployment on Kubernetes with kolla-k8s.

---
Thanks
Surya Prakash (spsurya)

-Original Message-
From: Michał Jastrzębski [mailto:inc...@gmail.com]
Sent: Friday, August 11, 2017 9:32 PM
To: OpenStack Development Mailing List (not for usage questions) 

Subject: [openstack-dev] [kolla-kubernetes] Proposing Rich Wellum to core team

Hello,

It's my pleasure to start another core team vote. This time for our colleague 
rwellum. I propose that he joins kolla-kubernetes team.

This is my +1 vote. Every kolla-kubernetes core has a vote and it can be 
veto'ed.

Voting will last 2 weeks and will end at 25th of August.

Cheers,
Michal

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][kubernetes] Webhook PoC for Keystone based Authentication and Authorization for Kubernetes

2017-08-08 Thread Fox, Kevin M
Down that path lies tears. :/

From: joehuang [joehu...@huawei.com]
Sent: Tuesday, August 08, 2017 10:21 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: kubernetes-sig-openst...@googlegroups.com
Subject: Re: [openstack-dev] [keystone][kubernetes] Webhook PoC for Keystone 
based Authentication and Authorization for Kubernetes

Besides the webhook, how about a custom module (calling the Keystone API directly 
from the custom module) for authorization? 
( https://kubernetes.io/docs/admin/authorization/#custom-modules )

Webhook:
  Pros: HTTP call, loose coupling, more flexible configuration.
  Cons: degraded performance, one more hop.
Custom module:
  Pros: direct function call, better performance, one less process to maintain.
  Cons: tight coupling, has to be built in.
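
For context, the webhook side is a small service; a rough sketch of what a
Keystone-backed authentication webhook looks like (Flask and the exact field
handling here are illustrative, not the actual k8s-keystone-auth code):

    # Kubernetes POSTs a TokenReview; we validate the token against Keystone's
    # GET /v3/auth/tokens and answer with user/groups for authorization to use.
    import os
    import requests
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    KEYSTONE = os.environ.get("KEYSTONE_URL", "https://keystone.example.com/v3")
    SERVICE_TOKEN = os.environ["SERVICE_TOKEN"]  # token allowed to validate others

    @app.route("/authenticate", methods=["POST"])
    def authenticate():
        token = request.get_json()["spec"]["token"]
        resp = requests.get(KEYSTONE + "/auth/tokens",
                            headers={"X-Auth-Token": SERVICE_TOKEN,
                                     "X-Subject-Token": token})
        status = {"authenticated": False}
        if resp.status_code == 200:
            body = resp.json()["token"]
            status = {
                "authenticated": True,
                "user": {
                    "username": body["user"]["name"],
                    "uid": body["user"]["id"],
                    # project + roles exposed as groups (project-scoped tokens)
                    "groups": [body["project"]["name"]] +
                              [r["name"] for r in body["roles"]],
                },
            }
        return jsonify({"apiVersion": "authentication.k8s.io/v1beta1",
                        "kind": "TokenReview",
                        "status": status})

    if __name__ == "__main__":
        app.run(port=8443)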

Best Regards
Chaoyi Huang (joehuang)


From: Morgan Fainberg [morgan.fainb...@gmail.com]
Sent: 09 August 2017 12:26
To: OpenStack Development Mailing List (not for usage questions)
Cc: kubernetes-sig-openst...@googlegroups.com
Subject: Re: [openstack-dev] [keystone][kubernetes] Webhook PoC for Keystone 
based Authentication and Authorization for Kubernetes

I shall take a look at the webhooks and see if I can help on this front.

--Morgan

On Tue, Aug 8, 2017 at 6:34 PM, joehuang  wrote:
> Dims,
>
> Integration of keystone and kubernetes is very cool and in high demand. Thank 
> you very much.
>
> Best Regards
> Chaoyi Huang (joehuang)
>
> 
> From: Davanum Srinivas [dava...@gmail.com]
> Sent: 01 August 2017 18:03
> To: kubernetes-sig-openst...@googlegroups.com; OpenStack Development Mailing 
> List (not for usage questions)
> Subject: [openstack-dev] [keystone][kubernetes] Webhook PoC for Keystone 
> based Authentication and Authorization for Kubernetes
>
> Team,
>
> Having waded through the last 4 attempts as seen in kubernetes PR(s)
> and Issues and talked to a few people on SIG-OpenStack slack channel,
> the consensus was that we should use the Webhook mechanism to
> integrate Keystone and Kubernetes.
>
> Here's the experiment : https://github.com/dims/k8s-keystone-auth
>
> Anyone interested in working on / helping with this? Do we want to
> create a repo somewhere official?
>
> Thanks,
> Dims
>
> --
> Davanum Srinivas :: https://twitter.com/dims
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.config] Pluggable drivers and protect plaintext secrets

2017-08-04 Thread Fox, Kevin M
Yeah, but you still run into stuff like DB connection and driver information being 
mixed up with the secret used for contacting that service. Those should be separate 
fields, I think, so they can be split/merged with that mechanism.

Thanks,
Kevin

From: Doug Hellmann [d...@doughellmann.com]
Sent: Friday, August 04, 2017 1:49 PM
To: openstack-dev
Subject: Re: [openstack-dev] [oslo][oslo.config] Pluggable drivers and  protect 
plaintext secrets

Excerpts from Fox, Kevin M's message of 2017-08-04 20:21:19 +:
> I would really like to see secrets separated from config. Always have... They 
> are two separate things.
>
> If nothing else, a separate config file so it can be permissioned differently.
>
> This could be combined with k8s secrets/configmaps better too.
> Or make it much easier to version config in git and have secrets somewhere 
> else.

Sure. It's already possible today to use multiple configuration
files with oslo.config, using either the --config-dir option or by
passing multiple --config-file options.

Doug

>
> Thanks,
> Kevin
>
> 
> From: Raildo Mascena de Sousa Filho [rmasc...@redhat.com]
> Sent: Friday, August 04, 2017 12:34 PM
> To: openstack-dev@lists.openstack.org
> Subject: [openstack-dev] [oslo][oslo.config] Pluggable drivers and protect 
> plaintext secrets
>
> Hi all,
>
> We had a couple of discussions with the Oslo team related to implement 
> Pluggable drivers for oslo.config[0] and use those feature to implement 
> support to protect plaintext secret on configuration files[1].
>
> In another hand, due the containerized support on OpenStack services, we have 
> a community effort to implement a k8s ConfigMap support[2][3], which might 
> make us step back and consider how secret management will work, since the 
> config data will need to go into the configmap *before* the container is 
> launched.
>
> So, I would like to see what the community think. Should we continue working 
> on that pluggable drivers and protect plain text secrets support for 
> oslo.config? Makes sense having a PTG session[4] on Oslo to discuss that 
> feature?
>
> Thanks for the feedback in advance.
>
> Cheers,
>
> [0] https://review.openstack.org/#/c/454897/
> [1] https://review.openstack.org/#/c/474304/
> [2] 
> https://github.com/flaper87/keystone-k8s-ansible/blob/6524b768d75a28adf44c74aca77ccf13dd66b1a9/provision-keystone-apb/tasks/main.yaml#L71-L108
> [3] 
> https://kubernetes.io/docs/tasks/configure-pod-container/configmap/
> [4] https://etherpad.openstack.org/p/oslo-ptg-queens

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.config] Pluggable drivers and protect plaintext secrets

2017-08-04 Thread Fox, Kevin M
+1. Please keep me in the loop for when the PTG session is.

Thanks,
Kevin

From: Doug Hellmann [d...@doughellmann.com]
Sent: Friday, August 04, 2017 12:46 PM
To: openstack-dev
Subject: Re: [openstack-dev] [oslo][oslo.config] Pluggable drivers and  protect 
plaintext secrets

Excerpts from Raildo Mascena de Sousa Filho's message of 2017-08-04 19:34:25 
+:
> Hi all,
>
> We had a couple of discussions with the Oslo team related to implement
> Pluggable drivers for oslo.config[0] and use those feature to implement
> support to protect plaintext secret on configuration files[1].
>
> In another hand, due the containerized support on OpenStack services, we
> have a community effort to implement a k8s ConfigMap support[2][3], which
> might make us step back and consider how secret management will work, since
> the config data will need to go into the configmap *before* the container
> is launched.
>
> So, I would like to see what the community think. Should we continue
> working on that pluggable drivers and protect plain text secrets support
> for oslo.config? Makes sense having a PTG session[4] on Oslo to discuss
> that feature?

A PTG session does make sense.

My main concern is that the driver approach described is a fairly
significant change to the library. I was more confident that it made
sense when it was going to be used for multiple purposes. There may be a
less invasive way to handle secret storage. Or, we might be able to
design a system-level approach for handling those that doesn't require
changing the library at all. So let's not frame the discussion as
"should we add plugins to oslo.config" but "how should we handle secret
values in configuration files".

Doug

>
> Thanks for the feedback in advance.
>
> Cheers,
>
> [0] https://review.openstack.org/#/c/454897/
> [1] https://review.openstack.org/#/c/474304/
> [2] https://github.com/flaper87/keystone-k8s-ansible/blob/6524b768d75a28adf44c74aca77ccf13dd66b1a9/provision-keystone-apb/tasks/main.yaml#L71-L108
> [3] https://kubernetes.io/docs/tasks/configure-pod-container/configmap/
> [4] https://etherpad.openstack.org/p/oslo-ptg-queens

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][oslo.config] Pluggable drivers and protect plaintext secrets

2017-08-04 Thread Fox, Kevin M
I would really like to see secrets separated from config. Always have... They 
are two separate things.

If nothing else, a separate config file so it can be permissioned differently.

This could be combined with k8s secrets/configmaps better too.
Or make it much easier to version config in git and have secrets somewhere else.

Thanks,
Kevin


From: Raildo Mascena de Sousa Filho [rmasc...@redhat.com]
Sent: Friday, August 04, 2017 12:34 PM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [oslo][oslo.config] Pluggable drivers and protect 
plaintext secrets

Hi all,

We had a couple of discussions with the Oslo team related to implementing 
pluggable drivers for oslo.config[0] and using that feature to implement support 
for protecting plaintext secrets in configuration files[1].

On the other hand, due to the containerized support for OpenStack services, we have 
a community effort to implement k8s ConfigMap support[2][3], which might make 
us step back and consider how secret management will work, since the config 
data will need to go into the ConfigMap *before* the container is launched.

So, I would like to see what the community thinks. Should we continue working on 
the pluggable drivers and plaintext secret protection support for oslo.config? 
Does it make sense to have a PTG session[4] on Oslo to discuss that feature?

Thanks for the feedback in advance.

Cheers,

[0] https://review.openstack.org/#/c/454897/
[1] https://review.openstack.org/#/c/474304/
[2] 
https://github.com/flaper87/keystone-k8s-ansible/blob/6524b768d75a28adf44c74aca77ccf13dd66b1a9/provision-keystone-apb/tasks/main.yaml#L71-L108
[3] 
https://kubernetes.io/docs/tasks/configure-pod-container/configmap/
[4] https://etherpad.openstack.org/p/oslo-ptg-queens
--

Raildo mascena

Software Engineer, Identity Managment

Red Hat



[https://www.redhat.com/files/brand/email/sig-redhat.png]
TRIED. TESTED. TRUSTED.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] Logging in containerized services

2017-07-19 Thread Fox, Kevin M
FYI, in kolla-kubernetes, I've been playing with fluent-bit as a log shipper. 
It works very similarly to fluentd but is much lighter weight. I used this: 
https://github.com/kubernetes/charts/tree/master/stable/fluent-bit

I fought with getting log rolling working properly with log files and it's kind 
of a pain; a lot of things can go wrong.

I ended up getting the following to work pretty well:
1. configure docker to roll its own log files based on size.
2. switch containers to use stderr/stdout instead of log files (sketch below).
3. use fluent-bit to follow docker logs, add k8s pod info and ship to a central 
server.
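
A minimal sketch of what step 2 looks like from a (Python) service's side,
assuming docker's json-file driver is configured to rotate (e.g. daemon.json
with "log-opts": {"max-size": "50m", "max-file": "5"}):

    # Emit one JSON record per line on stdout; docker handles rotation and
    # fluent-bit tails /var/lib/docker/containers/*/*.log and adds pod metadata.
    import json
    import logging
    import sys

    class JsonLineFormatter(logging.Formatter):
        def format(self, record):
            return json.dumps({
                "timestamp": self.formatTime(record),
                "level": record.levelname,
                "logger": record.name,
                "message": record.getMessage(),
            })

    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(JsonLineFormatter())
    logging.basicConfig(level=logging.INFO, handlers=[handler])

    logging.getLogger("nova.api").info("listening on 0.0.0.0:8774")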

Thanks,
Kevin

From: Bogdan Dobrelya [bdobr...@redhat.com]
Sent: Wednesday, July 19, 2017 1:26 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [tripleo] Logging in containerized services

On 18.07.2017 21:27, Lars Kellogg-Stedman wrote:
> Our current model for logging in a containerized deployment has pretty
> much everything logging to files in a directory that has been
> bind-mounted from the host.  This has some advantages: primarily, it
> makes it easy for an operator on the local system to find logs,
> particularly if they have had some previous exposure to
> non-containerized deployments.
>
> There is strong demand for a centralized logging solution.  We've got
> one potential solution right now in the form of the fluentd service
> introduced in Newton, but this requires explicit registration of log
> files for every service.  I don't think it's an ideal solution, and I
> would like to explore some alternatives.
>
> Logging via syslog
> ==
>
> For the purposes of the following, I'm going to assume that we're
> deploying on an EL-variant (RHEL/CentOS/etc), which means (a) journald
> owns /dev/log and (b) we're running rsyslog on the host and using the
> omjournal plugin to read messages from journald.
>
> If we bind mount /dev/log into containers and configure openstack
> services to log via syslog rather than via files, we get the following
> for free:
>
> - We get message-based rather than line-based logging.  This means that
> multiline tracebacks are handled correctly.
>
> - A single point of collection for logs.  If your host has been
> configured to ship logs to a centralized collector, logs from all of
> your services will be sent there without any additional configuration.
>
> - We get per-service message rate limiting from journald.
>
> - Log messages are annotated by journald with a variety of useful
> metadata, including the container id and a high resolution timestamp.
>
> - We can configure the syslog service on the host to continue to write
> files into legacy locations, so an operator looking to run grep against
> local log files will still have that ability.
>
> - Ryslog itself can send structured messages directly to an Elastic
> instance, which means that in a many deployments we would not require
> fluentd and its dependencies.
>
> - This plays well in environments where some services are running in
> containers and others are running on the host, because everything simply
> logs to /dev/log.

Plus it solves log rotation out of the box (though it still has to be
addressed [0] for Pike).
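
For illustration, here is roughly what "log via a bind-mounted /dev/log" looks
like from a Python service's side (handler, facility and tag choices are just
examples):

    import logging
    from logging.handlers import SysLogHandler

    handler = SysLogHandler(address="/dev/log",            # bind-mounted from host
                            facility=SysLogHandler.LOG_LOCAL0)
    handler.setFormatter(logging.Formatter("nova-api: %(levelname)s %(message)s"))

    log = logging.getLogger("nova.api")
    log.setLevel(logging.INFO)
    log.addHandler(handler)

    try:
        raise RuntimeError("boom")
    except RuntimeError:
        # The whole record (traceback included) is handed to syslog as one
        # message, so it is not split into separate line-based records.
        log.exception("unhandled error while processing request")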

>
> Logging via stdin/stdout
> ==
>
> A common pattern in the container world is to log everything to
> stdout/stderr.  This has some of the advantages of the above:
>
> - We can configure the container orchestration service to send logs to
> the journal or to another collector.
>
> - We get a different set of annotations on log messages.
>
> - This solution may play better with frameworks like Kubernetes that
> tend to isolate containers from the host a little more than using Docker
> or similar tools straight out of the box.
>
> But there are some disadvantages:
>
> - Some services only know how to log via syslog (e.g., swift and haproxy)
>
> - We're back to line-based vs. message-based logging.
>
> - It ends up being more difficult to expose logs at legacy locations.
>
> - The container orchestration layer may not implement the same message
> rate limiting we get with fluentd.
>
> Based on the above, I would like to suggest exploring a syslog-based
> logging model moving forward. What do people think about this idea? I've
> started putting together a spec
> at https://review.openstack.org/#/c/484922/ and I would welcome your input.

My vote goes for this option, but TBD for Queens. It won't make it for
Pike, as it looks too late for that amount of drastic change, like
switching all OpenStack services to syslog, deploying additional
required components, and so on.

[0] https://bugs.launchpad.net/tripleo/+bug/1700912
[1] https://review.openstack.org/#/c/462900/

>
> Cheers,
>
> --
> Lars Kellogg-Stedman
>
>
>

Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-17 Thread Fox, Kevin M
I re-read this and maybe you mean that some containers will live only outside of 
k8s and some will live in k8s, not that you want to support not having k8s at all 
with the same code base? That would be a much easier thing, and I agree ansible 
would be very good at that.

Thanks,
Kevin

From: Fox, Kevin M
Sent: Monday, July 17, 2017 4:45 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack 
services on Kubernetes

I think if you try to go down the Kubernetes & !Kubernetes path, you'll end up 
re-implementing pretty much all of Kubernetes, or you will use Kubernetes just 
like !Kubernetes and gain very little benefit from it.

Thanks,
Kevin

From: Flavio Percoco [fla...@redhat.com]
Sent: Monday, July 17, 2017 8:12 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack 
services on Kubernetes

On 17/07/17 09:47 -0400, James Slagle wrote:
>On Mon, Jul 17, 2017 at 8:05 AM, Flavio Percoco <fla...@redhat.com> wrote:
>> Thanks for all the feedback so far. This is one of the things I appreciate
>> the
>> most about this community, Open conversations, honest feedback and will to
>> collaborate.
>>
>> I'm top-posting to announce that we'll have a joint meeting with the Kolla
>> team
>> on Wednesday at 16:00 UTC. I know it's not an ideal time for many (it's not
>> for
>> me) but I do want to have a live discussion with the rest of the Kolla team.
>>
>> Some questions about the meeting:
>>
>> * How much time can we allocate?
>> * Can we prepare an agenda rather than just discussing "TripleO is thinking
>> of
>>  using Ansible and not kolla-kubernetes"? (I'm happy to come up with such
>>  agenda)
>
>It may help to prepare some high level requirements around what we
>need out of a solution. For the ansible discussion I started this
>etherpad:
>
>https://etherpad.openstack.org/p/tripleo-ptg-queens-ansible
>
>How we use Ansible and what we want to use it for, is related to this
>discussion around Helm. Although, it's not the exact same discussion,
>so if you wanted to start a new etherpad more specific to
>tripleo/kubernetes that may be good as well.
>
>One thing I think is important in this discussion is that we should be
>thinking about deploying containers on both Kubernetes and
>!Kubernetes. That is one of the reasons I like the ansible approach,
>in that I think it could address both cases with a common interface
>and API. I don't think we should necessarily choose a solution that
>requires to deploy on Kubernetes. Because then we are stuck with that
>choice. It'd be really nice to just "docker run" sometimes for
>dev/test. I don't know if Helm has that abstraction or not, I'm just
>trying to capture the requirement.

Yes!

Thanks for pointing this out as this is one of the reasons why I was proposing
ansible as our common interface w/o any extra layer.

I'll probably start a new etherpad for this as I would prefer not to distract
the rest of the TripleO + ansible discussion. At the end, if ansible ends up
being the tool we pick, I'll make sure to update your etherpad.

Flavio

>If you consider the parallel with Heat in this regard, we are
>currently "stuck" deploying on OpenStack (undercloud with Heat). We've
>had to work an a lot of complimentary features to add the flexibility
>to TripleO that are a result of having to use OpenStack (OVB,
>split-stack).
>
>That's exactly why we are starting a discussion around using Ansible,
>and is one of the fundamental changes that operators have been
>requesting in TripleO.
>
>--
>-- James Slagle
>--
>
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
@flaper87
Flavio Percoco

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-17 Thread Fox, Kevin M
I think that's a good question without an easy answer. I think TripleO's own 
struggle with orchestration has shown that it's maybe one of the hardest pieces. 
There are a lot of orchestration tools out there. Each has its 
strengths/weaknesses. I personally can't really pick what the best one is for 
this sort of thing. I've been trying to stay neutral, and let the low level 
kolla-kubernetes components be easily sharable between all the projects that 
have already chosen an orchestration strategy. I think the real answer is 
probably that the best orchestration tool for the job depends entirely on the 
deployment tool. So, TripleO's answer might be different than, say, something 
Ubuntu does.

Kolla-kubernetes has implemented reference orchestration a few different ways
now. We deploy the gates using pure shell. It's not the prettiest way, but it
works reliably now. (I would not recommend users do this.)

We have a document for manual orchestration (slow and very manual, but you
get to see all the pieces, which can be instructive).

We have Helm-based orchestration that bundles several microservice charts into
service charts and deploys similarly to openstack-helm. We built these to test
the waters of this approach and they do work, but I have doubts they could be
made robust enough to handle things like live rolling upgrades of OpenStack.
They may be robust enough to do upgrades that require downtime. I think it also
may be hard to debug if an upgrade fails halfway through. I admit I could
totally be wrong though.
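
To make that layering concrete, here is a hedged sketch of how a service chart
can bundle microservice charts as dependencies, in Helm v2 requirements.yaml
style. The chart names, versions and repository URL are illustrative, not the
actual kolla-kubernetes contents:

# Illustrative only: a "nova" service chart pulling in microservice charts.
dependencies:
  - name: nova-api-deployment
    version: 0.7.0
    repository: http://localhost:8879/charts
  - name: nova-scheduler-statefulset
    version: 0.7.0
    repository: http://localhost:8879/charts
  - name: nova-conductor-statefulset
    version: 0.7.0
    repository: http://localhost:8879/charts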

There have also been a couple of Ansible-based orchestrators proposed. They
seem to work well, and I think they would be much easier to extend to do a live
rolling OpenStack upgrade. I'd very much like to see an Ansible one finished so
we can kick the tires with it. I do think that having folks in Kolla-Kubernetes
and folks in TripleO independently implement k8s deployment with Ansible shows
there is a lot of potential in that form of orchestration and that there's even
more room for collaboration between the two projects.

Thanks,
Kevin

From: Bogdan Dobrelya [bdobr...@redhat.com]
Sent: Monday, July 17, 2017 1:10 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack 
services on Kubernetes

On 14.07.2017 22:55, Fox, Kevin M wrote:
> Part of the confusion I think is in the different ways helm can be used.
>
> Helm can be used to orchestrate the deployment of a whole service (ex, nova). 
> "launch these 3 k8s objects, template out this config file, run this job to 
> init the db, or this job to upgrade the db, etc", all as a single unit.
>
> It can also be used purely for its templating ability.
>
> So, "render this single k8s object using these values".
>
> This is one of the main differences between openstack-helm and 
> kolla-kubernetes.
>
> Openstack-helm has charts only for orchestrating the deployment of whole 
> openstack services.
>
> Kolla-kubernetes has taken a different track though. While it does use helm 
> for its golang templater, it has taken a microservices approach to be 
> shareable with other tools. So, each openstack process (nova-api, 
> neutron-server, neutron-openvswitch-agent), etc, has its own chart and can be 
> independently configured/placed as needed by an external orchestration 
> system. Kolla-Kubernetes microservice charts are to Kubernetes what 
> Kolla-Containers are to Docker. Reusable building blocks of known tested 
> functionality and assemblable anyway the orchestration system/user feels is 
> in their best interest.

A great summary!
As TripleO's Pike docker-based container architecture already relies heavily on
Kolla-Containers bits, namely the run-time kolla config/bootstrap and the
build-time image overrides, it seems reasonable to continue down that path by
relying on Kolla-Kubernetes microservice Helm charts for the Kubernetes-based
architecture, doesn't it?

The remaining question, though, is: if Kolla-kubernetes doesn't consume
openstack-helm's opinionated "orchestration of the deployment of whole
openstack services", which tools should then fill the advanced data
parameterization gaps, such as "happens before/after" relationships and data
dependencies/ordering?

>
> This is why I think kolla-kubernetes would be a good fit for TripleO, as you 
> can replace a single component at a time, however you want, using the config 
> files you already have and upgrade the system a piece at a time from non 
> container to containered. It doesn't have to happen all at once, even within 
> a single service, or within a single TripleO release. The orchestration of it 
> is totally up to you, and can be tailored very precisely to deal with the 
> particulars of the upgrade strategy needed by TripleO's existing deployments.

Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-17 Thread Fox, Kevin M
I think if you try to go down the Kubernetes & !Kubernetes path, you'll end up 
re-implementing pretty much all of Kubernetes, or you will use Kubernetes just 
like !Kubernetes and gain very little benefit from it.

Thanks,
Kevin

From: Flavio Percoco [fla...@redhat.com]
Sent: Monday, July 17, 2017 8:12 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack 
services on Kubernetes

On 17/07/17 09:47 -0400, James Slagle wrote:
>On Mon, Jul 17, 2017 at 8:05 AM, Flavio Percoco  wrote:
>> Thanks for all the feedback so far. This is one of the things I appreciate
>> the
>> most about this community, Open conversations, honest feedback and will to
>> collaborate.
>>
>> I'm top-posting to announce that we'll have a joint meeting with the Kolla
>> team
>> on Wednesday at 16:00 UTC. I know it's not an ideal time for many (it's not
>> for
>> me) but I do want to have a live discussion with the rest of the Kolla team.
>>
>> Some questions about the meeting:
>>
>> * How much time can we allocate?
>> * Can we prepare an agenda rather than just discussing "TripleO is thinking
>> of
>>  using Ansible and not kolla-kubernetes"? (I'm happy to come up with such
>>  agenda)
>
>It may help to prepare some high level requirements around what we
>need out of a solution. For the ansible discussion I started this
>etherpad:
>
>https://etherpad.openstack.org/p/tripleo-ptg-queens-ansible
>
>How we use Ansible and what we want to use it for, is related to this
>discussion around Helm. Although, it's not the exact same discussion,
>so if you wanted to start a new etherpad more specific to
>tripleo/kubernetes that may be good as well.
>
>One thing I think is important in this discussion is that we should be
>thinking about deploying containers on both Kubernetes and
>!Kubernetes. That is one of the reasons I like the ansible approach,
>in that I think it could address both cases with a common interface
>and API. I don't think we should necessarily choose a solution that
>requires to deploy on Kubernetes. Because then we are stuck with that
>choice. It'd be really nice to just "docker run" sometimes for
>dev/test. I don't know if Helm has that abstraction or not, I'm just
>trying to capture the requirement.

Yes!

Thanks for pointing this out as this is one of the reasons why I was proposing
ansible as our common interface w/o any extra layer.

I'll probably start a new etherpad for this as I would prefer not to distract
from the rest of the TripleO + ansible discussion. In the end, if ansible ends
up being the tool we pick, I'll make sure to update your etherpad.

Flavio

>If you consider the parallel with Heat in this regard, we are
>currently "stuck" deploying on OpenStack (undercloud with Heat). We've
>had to work an a lot of complimentary features to add the flexibility
>to TripleO that are a result of having to use OpenStack (OVB,
>split-stack).
>
>That's exactly why we are starting a discussion around using Ansible,
>and is one of the fundamental changes that operators have been
>requesting in TripleO.
>
>--
>-- James Slagle
>--
>

--
@flaper87
Flavio Percoco



Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-17 Thread Fox, Kevin M
We do support some upstream charts, but we started mariadb/rabbit before some
of the upstream charts were written, so we duplicate a little bit of
functionality at the moment. You can mix and match, though. If an upstream
chart doesn't work with kolla-kubernetes, I consider that a bug we should fix.
Likewise, you should be able to run non-containerized stuff mixed in too; if
that doesn't work, it's likewise a bug. You should be able to run
kolla-kubernetes with a baremetal db.

Some known working stuff: the prometheus/grafana upstream charts start
collecting data from the containers as soon as they are launched. I have also
tested a bit with the upstream fluent-bit chart and have a patch set in the
works to make it work much better.

Thanks,
Kevin

From: Emilien Macchi [emil...@redhat.com]
Sent: Monday, July 17, 2017 10:13 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack 
services on Kubernetes

On Mon, Jul 17, 2017 at 5:32 AM, Flavio Percoco  wrote:
> On 14/07/17 08:08 -0700, Emilien Macchi wrote:
>>
>> On Fri, Jul 14, 2017 at 2:17 AM, Flavio Percoco  wrote:
>>>
>>>
>>> Greetings,
>>>
>>> As some of you know, I've been working on the second phase of TripleO's
>>> containerization effort. This phase if about migrating the docker based
>>> deployment onto Kubernetes.
>>>
>>> These phase requires work on several areas: Kubernetes deployment,
>>> OpenStack
>>> deployment on Kubernetes, configuration management, etc. While I've been
>>> diving
>>> into all of these areas, this email is about the second point, OpenStack
>>> deployment on Kubernetes.
>>>
>>> There are several tools we could use for this task. kolla-kubernetes,
>>> openstack-helm, ansible roles, among others. I've looked into these tools
>>> and
>>> I've come to the conclusion that TripleO would be better of by having
>>> ansible
>>> roles that would allow for deploying OpenStack services on Kubernetes.
>>>
>>> The existing solutions in the OpenStack community require using Helm.
>>> While
>>> I
>>> like Helm and both, kolla-kubernetes and openstack-helm OpenStack
>>> projects,
>>> I
>>> believe using any of them would add an extra layer of complexity to
>>> TripleO,
>>> which is something the team has been fighting for years years -
>>> especially
>>> now
>>> that the snowball is being chopped off.
>>>
>>> Adopting any of the existing projects in the OpenStack communty would
>>> require
>>> TripleO to also write the logic to manage those projects. For example, in
>>> the
>>> case of openstack-helm, the TripleO team would have to write either
>>> ansible
>>> roles or heat templates to manage - install, remove, upgrade - the charts
>>> (I'm
>>> happy to discuss this point further but I'm keepping it at a high-level
>>> on
>>> purpose for the sake of not writing a 10k-words-long email).
>>>
>>> James Slagle sent an email[0], a couple of days ago, to form TripleO
>>> plans
>>> around ansible. One take-away from this thread is that TripleO is
>>> adopting
>>> ansible more and more, which is great and it fits perfectly with the
>>> conclusion
>>> I reached.
>>>
>>> Now, what this work means is that we would have to write an ansible role
>>> for
>>> each service that will deploy the service on a Kubernetes cluster.
>>> Ideally
>>> these
>>> roles will also generate the configuration files (removing the need of
>>> puppet
>>> entirely) and they would manage the lifecycle. The roles would be
>>> isolated
>>> and
>>> this will reduce the need of TripleO Heat templates. Doing this would
>>> give
>>> TripleO full control on the deployment process too.
>>>
>>> In addition, we could also write Ansible Playbook Bundles to contain
>>> these
>>> roles
>>> and run them using the existing docker-cmd implementation that is coming
>>> out
>>> in
>>> Pike (you can find a PoC/example of this in this repo[1]).
>>>
>>> Now, I do realize the amount of work this implies and that this is my
>>> opinion/conclusion. I'm sending this email out to kick-off the discussion
>>> and
>>> gather thoughts and opinions from the rest of the community.
>>>
>>> Finally, what I really like about writing pure ansible roles is that
>>> ansible
>>> is
>>> a known, powerfull, tool that has been adopted by many operators already.
>>> It'll
>>> provide the flexibility needed and, if structured correctly, it'll allow
>>> for
>>> operators (and other teams) to just use the parts they need/want without
>>> depending on the full-stack. I like the idea of being able to separate
>>> concerns
>>> in the deployment workflow and the idea of making it simple for users of
>>> TripleO
>>> to do the same at runtime. Unfortunately, going down this road means that
>>> my
>>> hope of creating a field where we could collaborate even more with other
>>> deployment tools will be a bit limited but I'm confident the result would
>>> also
>>> be useful 

Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-14 Thread Fox, Kevin M
Part of the confusion I think is in the different ways helm can be used.

Helm can be used to orchestrate the deployment of a whole service (ex, nova). 
"launch these 3 k8s objects, template out this config file, run this job to 
init the db, or this job to upgrade the db, etc", all as a single unit.

It can also be used purely for its templating ability.

So, "render this single k8s object using these values".

This is one of the main differences between openstack-helm and kolla-kubernetes.

Openstack-helm has charts only for orchestrating the deployment of whole 
openstack services.

Kolla-kubernetes has taken a different track, though. While it does use helm
for its golang templater, it has taken a microservices approach so that it is
shareable with other tools. So each openstack process (nova-api,
neutron-server, neutron-openvswitch-agent, etc.) has its own chart and can be
independently configured/placed as needed by an external orchestration system.
Kolla-Kubernetes microservice charts are to Kubernetes what Kolla-Containers
are to Docker: reusable building blocks of known, tested functionality,
assemblable any way the orchestration system/user feels is in their best
interest.
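
As a rough illustration of the "one chart per process" idea, an external
orchestrator installs and configures each piece on its own, something like
the following (the chart name and value keys below are approximations, not
the real kolla-kubernetes schema):

# Hypothetical values override for a single microservice chart, installed by
# itself, e.g.: helm install kolla/neutron-openvswitch-agent-daemonset \
#                 --values ovs-agent-values.yaml
global:
  kolla:
    all:
      image_tag: "5.0.1"          # illustrative key/value
    neutron_openvswitch_agent:
      all:
        tunnel_interface: eth1    # illustrative key/value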

This is why I think kolla-kubernetes would be a good fit for TripleO, as you 
can replace a single component at a time, however you want, using the config 
files you already have and upgrade the system a piece at a time from non 
container to containered. It doesn't have to happen all at once, even within a 
single service, or within a single TripleO release. The orchestration of it is 
totally up to you, and can be tailored very precisely to deal with the 
particulars of the upgrade strategy needed by TripleO's existing deployments.

Does that help to alleviate some of the confusion?

Thanks,
Kevin

From: James Slagle [james.sla...@gmail.com]
Sent: Friday, July 14, 2017 10:26 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack 
services on Kubernetes

On Fri, Jul 14, 2017 at 12:16 PM, Fox, Kevin M <kevin@pnnl.gov> wrote:
> https://xkcd.com/927/

That's cute, but we aren't really trying to have competing standards.
It's not really about competition between tools.

> I don't think adopting helm as a dependency adds more complexity then writing 
> more new k8s object deployment tooling?

That depends, and will likely end up containing a fair amount of
subjectivity. What we're trying to do is explore choices around
tooling.

>
> There are efforts to make it easy to deploy kolla-kubernetes microservice 
> charts using ansible for orchestration in kolla-kubernetes. See:
> https://review.openstack.org/#/c/473588/
> What kolla-kubernetes brings to the table is a tested/shared base k8s object 
> layer. Orchestration is done by ansible via TripleO, and the solutions 
> already found/debugged to how to deploy OpenStack in containers on Kubernetes 
> can be reused/shared.

That's good, and we'd like to reuse existing code and patterns. I
admit to not being super familiar with kolla-kubernetes. Are there
reusable components without having to also use Helm?

> See for example:
> https://github.com/tripleo-apb/ansible-role-k8s-keystone/blob/331f405bd3f7ad346d99e964538b5b27447a0ebf/provision-keystone-apb/tasks/main.yaml

Pretty sure that was just a POC/example.

>
> I don't see much by way of dealing with fernet token rotation. That was a 
> tricky bit of code to get to work, but kolla-kubernetes has a solution to it. 
> You can get it by: helm install kolla/keystone-fernet-rotate-job.
>
> We designed this layer to be shareable so we all can contribute to the 
> commons rather then having every project reimplement their own and have to 
> chase bugs across all the implementations. The deployment projects will be 
> stronger together if we can share as much as possible.
>
> Please reconsider. I'd be happy to talk with you more if you want.

Just to frame the conversation with a bit more context, I'm sure there
are many individual features/bugs/special handling that TripleO and
Kolla both do today that the other does not.

TripleO had about a 95% solution for deploying OpenStack when
kolla-ansible did not exist and was started from scratch. But, kolla
made a choice based around tooling, which I contend is perfectly valid
given that we are creating deployment tools. Part of the individual
value in each deployment project is the underlying tooling itself.

I think what TripleO is trying to do here is not immediately jump to a
solution that uses Helm and explore what alternatives exist. Even if
the project chooses not to use Helm I still see room for collaboration
on code beneath the Helm/whatever layer.

--
-- James Slagle
--


Re: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack services on Kubernetes

2017-07-14 Thread Fox, Kevin M
https://xkcd.com/927/

I don't think adopting helm as a dependency adds more complexity than writing
more new k8s object deployment tooling, does it?

There are efforts to make it easy to deploy kolla-kubernetes microservice 
charts using ansible for orchestration in kolla-kubernetes. See:
https://review.openstack.org/#/c/473588/
What kolla-kubernetes brings to the table is a tested/shared base k8s object
layer. Orchestration is done by ansible via TripleO, and the solutions already
found/debugged for how to deploy OpenStack in containers on Kubernetes can be
reused/shared.
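
A hedged sketch of what that ansible layer can look like when driving a few
microservice charts (the chart names, namespace and values file are made up
here, not taken from that review):

# Illustrative tasks: ansible orchestrating kolla-kubernetes microservice
# charts one at a time, in the order the deployer wants.
- name: Install the mariadb microservice charts
  command: >
    helm install kolla/{{ item }}
    --namespace kolla
    --values /etc/kolla/k8s-values.yaml
  with_items:
    - mariadb-pv
    - mariadb-pvc
    - mariadb-statefulset
    - mariadb-service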

See for example:
https://github.com/tripleo-apb/ansible-role-k8s-keystone/blob/331f405bd3f7ad346d99e964538b5b27447a0ebf/provision-keystone-apb/tasks/main.yaml

I don't see much by way of dealing with fernet token rotation. That was a 
tricky bit of code to get to work, but kolla-kubernetes has a solution to it. 
You can get it by: helm install kolla/keystone-fernet-rotate-job.
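
For readers who have not seen it, the general shape of such a rotation job is
sketched below. This is a hand-written illustration, not the actual
kolla/keystone-fernet-rotate-job chart; the image, schedule and claim names
are made up, and older clusters would use batch/v1beta1 instead of batch/v1:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: keystone-fernet-rotate
spec:
  schedule: "0 */12 * * *"              # example: rotate twice a day
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: fernet-rotate
              image: kolla/centos-binary-keystone-fernet:latest  # illustrative
              command: ["keystone-manage", "fernet_rotate",
                        "--keystone-user", "keystone",
                        "--keystone-group", "keystone"]
              volumeMounts:
                - name: fernet-keys
                  mountPath: /etc/keystone/fernet-keys
          volumes:
            - name: fernet-keys
              persistentVolumeClaim:
                claimName: keystone-fernet-keys  # shared with the keystone pods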

We designed this layer to be shareable so we all can contribute to the commons
rather than having every project reimplement its own and have to chase bugs
across all the implementations. The deployment projects will be stronger
together if we can share as much as possible.

Please reconsider. I'd be happy to talk with you more if you want.

Thanks,
Kevin

From: Flavio Percoco [fla...@redhat.com]
Sent: Friday, July 14, 2017 2:17 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [TripleO] Let's use Ansible to deploy OpenStack 
services on Kubernetes

Greetings,

As some of you know, I've been working on the second phase of TripleO's
containerization effort. This phase is about migrating the docker-based
deployment onto Kubernetes.

This phase requires work in several areas: Kubernetes deployment, OpenStack
deployment on Kubernetes, configuration management, etc. While I've been diving
into all of these areas, this email is about the second point, OpenStack
deployment on Kubernetes.

There are several tools we could use for this task: kolla-kubernetes,
openstack-helm, and ansible roles, among others. I've looked into these tools
and I've come to the conclusion that TripleO would be better off having ansible
roles that would allow for deploying OpenStack services on Kubernetes.

The existing solutions in the OpenStack community require using Helm. While I
like Helm and both the kolla-kubernetes and openstack-helm OpenStack projects,
I believe using either of them would add an extra layer of complexity to
TripleO, which is something the team has been fighting against for years,
especially now that the snowball is being chopped off.

Adopting any of the existing projects in the OpenStack community would require
TripleO to also write the logic to manage those projects. For example, in the
case of openstack-helm, the TripleO team would have to write either ansible
roles or heat templates to manage (install, remove, upgrade) the charts. (I'm
happy to discuss this point further, but I'm keeping it at a high level on
purpose for the sake of not writing a 10k-word-long email.)

James Slagle sent an email[0], a couple of days ago, to form TripleO plans
around ansible. One take-away from this thread is that TripleO is adopting
ansible more and more, which is great and it fits perfectly with the conclusion
I reached.

Now, what this work means is that we would have to write an ansible role for
each service that deploys that service on a Kubernetes cluster. Ideally these
roles will also generate the configuration files (removing the need for puppet
entirely) and manage the lifecycle. The roles would be isolated, and this will
reduce the need for TripleO Heat templates. Doing this would give TripleO full
control over the deployment process too.
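
As a rough sketch of what one of those per-service roles' tasks could look
like (the file names, variables and use of kubectl here are assumptions for
illustration, not the actual tripleo-apb roles):

# roles/keystone-k8s/tasks/main.yaml (illustrative)
- name: Render keystone.conf from role variables (replaces the puppet step)
  template:
    src: keystone.conf.j2
    dest: "{{ workdir }}/keystone.conf"

- name: Wrap the rendered config into a ConfigMap manifest
  template:
    src: keystone-configmap.yaml.j2
    dest: "{{ workdir }}/keystone-configmap.yaml"

- name: Apply the ConfigMap and the Deployment to the cluster
  command: kubectl apply -f "{{ workdir }}/{{ item }}"
  with_items:
    - keystone-configmap.yaml
    - keystone-deployment.yaml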

In addition, we could also write Ansible Playbook Bundles to contain these roles
and run them using the existing docker-cmd implementation that is coming out in
Pike (you can find a PoC/example of this in this repo[1]).

Now, I do realize the amount of work this implies and that this is my
opinion/conclusion. I'm sending this email out to kick off the discussion and
gather thoughts and opinions from the rest of the community.

Finally, what I really like about writing pure ansible roles is that ansible is
a known, powerful tool that has been adopted by many operators already. It'll
provide the flexibility needed and, if structured correctly, it'll allow for
operators (and other teams) to just use the parts they need/want without
depending on the full-stack. I like the idea of being able to separate concerns
in the deployment workflow and the idea of making it simple for users of TripleO
to do the same at runtime. Unfortunately, going down this road means that my
hope of creating a field where we could collaborate even more with other
deployment tools will be a bit limited but I'm confident the result would also
be useful for others and that we all will benefit from it... My 

Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-07-14 Thread Fox, Kevin M
Yeah, understood. I was just responding to the question of why you would ever
want to do X. There are reasons. Being out of scope is an OK answer, though.

Thanks,
Kevin

From: Amrith Kumar [amrith.ku...@gmail.com]
Sent: Thursday, July 13, 2017 9:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

Kevin,

In the interests of 'keeping it simple', I'm going to try to prioritize the
use cases and pick implementation strategies which target the higher-priority
ones without needlessly excluding the other (lower-priority) ones.

Thanks,

-amrith

--
Amrith Kumar

P.S. Verizon is hiring OpenStack engineers nationwide. If you are interested,
please contact me or visit https://t.co/gGoUzYvqbE


On Wed, Jul 12, 2017 at 5:46 PM, Fox, Kevin M 
<kevin@pnnl.gov<mailto:kevin@pnnl.gov>> wrote:
There is a use case where some sites have folks buy whole bricks of compute 
nodes that get added to the overarching cloud, but using AZ's or 
HostAggregates/Flavors to dedicate the hardware to the users.

You might want to land the db vm on the hardware for that project, and one
would expect the normal quota to be dinged for it rather than a special trove
quota. Otherwise they may have more quota than the hosts can actually handle.

Thanks,
Kevin

From: Doug Hellmann [d...@doughellmann.com<mailto:d...@doughellmann.com>]
Sent: Wednesday, July 12, 2017 6:57 AM
To: openstack-dev
Subject: Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

Excerpts from Amrith Kumar's message of 2017-07-12 06:14:28 -0500:
> All:
>
> First, let me thank all of you who responded and provided feedback
> on what I wrote. I've summarized what I heard below and am posting
> it as one consolidated response rather than responding to each
> of your messages and making this thread even deeper.
>
> As I say at the end of this email, I will be setting up a session at
> the Denver PTG to specifically continue this conversation and hope
> you will all be able to attend. As soon as time slots for PTG are
> announced, I will try and pick this slot and request that you please
> attend.
>
> 
>
> Thierry: naming issue; call it Hoard if it does not have a migration
> path.
>
> 
>
> Kevin: use a container approach with k8s as the orchestration
> mechanism, addresses multiple issues including performance. Trove to
> provide containers for multiple components which cooperate to provide
> a single instance of a database or cluster. Don't put all components
> (agent, monitoring, database) in a single VM, decoupling makes
> migraiton and upgrades easier and allows trove to reuse database
> vendor supplied containers. Performance of databases in VM's poor
> compared to databases on bare-metal.
>
> 
>
> Doug Hellmann:
>
> > Does "service VM" need to be a first-class thing?  Akanda creates
> > them, using a service user. The VMs are tied to a "router" which is
> > the billable resource that the user understands and interacts with
> > through the API.
>
> Amrith: Doug, yes because we're looking not just for service VM's but all
> resources provisioned by a service. So, to Matt's comment about a
> blackbox DBaaS, the VM's, storage, snapshots, ... they should all be
> owned by the service, charged to a users quota but not visible to the
> user directly.

I still don't understand. If you have entities that represent the
DBaaS "host" or "database" or "database backup" or whatever, then
you put a quota on those entities and you bill for them. If the
database actually runs in a VM or the backup is a snapshot, those
are implementation details. You don't want to have to rewrite your
quota management or billing integration if those details change.

Doug

>
> 
>
> Jay:
>
> > Frankly, I believe all of these types of services should be built
> > as applications that run on OpenStack (or other)
> > infrastructure. In other words, they should not be part of the
> > infrastructure itself.
> >
> > There's really no need for a user of a DBaaS to have access to the
> > host or hosts the DB is running on. If the user really wanted
> > that, they would just spin up a VM/baremetal server and install
> > the thing themselves.
>
> and subsequently in follow-up with Zane:
>
> > Think only in terms of what a user of a DBaaS really wants. At the
> > end of the day, all they want is an address in the cloud where they
> > can point their application to write and read data from.
> > ...
> > At the end of the day, I think Trove is best implemented as a hosted
> > application that exposes

Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-07-12 Thread Fox, Kevin M
There is a use case where some sites have folks buy whole bricks of compute 
nodes that get added to the overarching cloud, but using AZ's or 
HostAggregates/Flavors to dedicate the hardware to the users.

You might want to land the db vm on the hardware for that project, and one
would expect the normal quota to be dinged for it rather than a special trove
quota. Otherwise they may have more quota than the hosts can actually handle.

Thanks,
Kevin

From: Doug Hellmann [d...@doughellmann.com]
Sent: Wednesday, July 12, 2017 6:57 AM
To: openstack-dev
Subject: Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

Excerpts from Amrith Kumar's message of 2017-07-12 06:14:28 -0500:
> All:
>
> First, let me thank all of you who responded and provided feedback
> on what I wrote. I've summarized what I heard below and am posting
> it as one consolidated response rather than responding to each
> of your messages and making this thread even deeper.
>
> As I say at the end of this email, I will be setting up a session at
> the Denver PTG to specifically continue this conversation and hope
> you will all be able to attend. As soon as time slots for PTG are
> announced, I will try and pick this slot and request that you please
> attend.
>
> 
>
> Thierry: naming issue; call it Hoard if it does not have a migration
> path.
>
> 
>
> Kevin: use a container approach with k8s as the orchestration
> mechanism, addresses multiple issues including performance. Trove to
> provide containers for multiple components which cooperate to provide
> a single instance of a database or cluster. Don't put all components
> (agent, monitoring, database) in a single VM, decoupling makes
> migraiton and upgrades easier and allows trove to reuse database
> vendor supplied containers. Performance of databases in VM's poor
> compared to databases on bare-metal.
>
> 
>
> Doug Hellmann:
>
> > Does "service VM" need to be a first-class thing?  Akanda creates
> > them, using a service user. The VMs are tied to a "router" which is
> > the billable resource that the user understands and interacts with
> > through the API.
>
> Amrith: Doug, yes because we're looking not just for service VM's but all
> resources provisioned by a service. So, to Matt's comment about a
> blackbox DBaaS, the VM's, storage, snapshots, ... they should all be
> owned by the service, charged to a users quota but not visible to the
> user directly.

I still don't understand. If you have entities that represent the
DBaaS "host" or "database" or "database backup" or whatever, then
you put a quota on those entities and you bill for them. If the
database actually runs in a VM or the backup is a snapshot, those
are implementation details. You don't want to have to rewrite your
quota management or billing integration if those details change.

Doug

>
> 
>
> Jay:
>
> > Frankly, I believe all of these types of services should be built
> > as applications that run on OpenStack (or other)
> > infrastructure. In other words, they should not be part of the
> > infrastructure itself.
> >
> > There's really no need for a user of a DBaaS to have access to the
> > host or hosts the DB is running on. If the user really wanted
> > that, they would just spin up a VM/baremetal server and install
> > the thing themselves.
>
> and subsequently in follow-up with Zane:
>
> > Think only in terms of what a user of a DBaaS really wants. At the
> > end of the day, all they want is an address in the cloud where they
> > can point their application to write and read data from.
> > ...
> > At the end of the day, I think Trove is best implemented as a hosted
> > application that exposes an API to its users that is entirely
> > separate from the underlying infrastructure APIs like
> > Cinder/Nova/Neutron.
>
> Amrith: Yes, I agree, +1000
>
> 
>
> Clint (in response to Jay's proposal regarding the service making all
> resources multi-tenant) raised a concern about having multi-tenant
> shared resources. The issue is with ensuring separation between
> tenants (don't want to use the word isolation because this is database
> related).
>
> Amrith: yes, definitely a concern and one that we don't have today
> because each DB is a VM of its own. Personally, I'd rather stick with
> that construct, one DB per VM/container/baremetal and leave that be
> the separation boundary.
>
> 
>
> Zane: Discomfort over throwing out working code, grass is greener on
> the other side, is there anything to salvage?
>
> Amrith: Yes, there is certainly a 'grass is greener with a rewrite'
> fallacy. But, there is stuff that can be salvaged. The elements are
> still good, they are separable and can be used with the new
> project. Much of the controller logic however will fall by the
> wayside.
>
> In a similar vein, Clint asks about the elements that Trove provides,
> "how has that worked out".
>
> Amrith: Honestly, not well. Trove only provided reference 

Re: [openstack-dev] [TripleO] Forming our plans around Ansible

2017-07-10 Thread Fox, Kevin M
I think the migration path to something like kolla-kubernetes would be fine,
as you have total control over the orchestration piece (ansible) and the config
generation (ansible), and since it is all containerized and TripleO production
isn't, you should be able to 'upgrade' from non-containerized to containerized
while leaving all the existing services alone as a rollback path. Something
like: read in the old config, tweak it a bit as needed, upload it as
configmaps, then helm install some kolla packages?
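
A hedged sketch of that flow, assuming kubectl and helm are reachable from the
existing controller and using made-up file, namespace and chart names:

- hosts: existing-controller
  tasks:
    - name: Copy the current config aside so the running service is untouched
      copy:
        src: /etc/nova/nova.conf
        dest: /tmp/nova.conf.k8s
        remote_src: true

    - name: Tweak it a bit as needed (example tweak only)
      ini_file:
        path: /tmp/nova.conf.k8s
        section: DEFAULT
        option: log_dir
        value: /var/log/kolla/nova

    - name: Upload the tweaked config as a ConfigMap
      command: >
        kubectl create configmap nova-conf
        --from-file=nova.conf=/tmp/nova.conf.k8s --namespace kolla

    - name: helm install the matching kolla package
      command: helm install kolla/nova-api --namespace kolla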

Thanks,
Kevin

From: Emilien Macchi [emil...@redhat.com]
Sent: Monday, July 10, 2017 12:14 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO] Forming our plans around Ansible

On Mon, Jul 10, 2017 at 6:19 AM, Steven Hardy  wrote:
[...]
> 1. How to perform end-to-end configuration via ansible (outside of
> heat, but probably still using data and possibly playbooks generated
> by heat)

I guess we're talking about removing Puppet from TripleO and use more
Ansible to manage configuration files.

This is somewhat related to what Flavio (and team) are currently investigating:
https://github.com/flaper87/tripleo-apb-roles/tree/master/keystone-apb

Also see this thread for more context:
http://lists.openstack.org/pipermail/openstack-dev/2017-June/118417.html

We could imagine these apb used by Split Stack 2 by applying the
software configuration (Ansible) to deploy OpenStack on already
deployed baremetal nodes.
One of the challenges here is how do we get the data from Heat to
generate Ansible vars.
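
One hedged way to bridge that gap today is to pull stack outputs through the
CLI and register them as vars; the host, stack and output names below are
hypothetical:

- hosts: undercloud
  gather_facts: false
  tasks:
    - name: Fetch a Heat stack output as JSON (hypothetical output name)
      command: openstack stack output show overcloud AnsibleHostVars -f json
      register: stack_output
      changed_when: false

    - name: Expose the output value as Ansible variables
      set_fact:
        service_config_vars: "{{ (stack_output.stdout | from_json).output_value }}"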

[...]

>> I think if we can form some broad agreement before the PTG, we have a
>> chance at making some meaningful progress during Queens.
>
> Agreed, although we probably do need to make some more progress on
> some aspects of this for container minor updates that we'll need for
> Pike.

++ Thanks for bringing this James.

Some other thoughts:
* I also agree that TripleO Quickstart is a separate topic. I was
also confused about why OOOQ was templating bash scripts, but it has
become clear we needed a way to run exactly the commands in our
documentation without abstraction (please tell me if I'm wrong),
therefore we had to do these templates. We could have been a bit more
granular (run commands in tasks instead of shell scripts), but I might
have missed the reason why we didn't do it that way.

* Kayobe and Kolla are great tools, though TripleO is looking for a
path to migrate to Ansible in a backward compatible way. Throwing a
third grenade here - I think these tools are too opinionated to allow
us to simply use them. I think we should work toward re-using the
maximum of bits when it makes sense, but folks need to keep in mind we
need to support our existing production deployments, manage upgrades
etc. We're already using some bits from Kolla and our team is already
willing to collaborate with other deployments tools when it makes
sense.

* I agree with some comments in this thread when I read "TripleO would
be a tool to deploy OpenStack Infrastructure as split stacks", like
we're doing in our multinode jobs but even further. I'm interested by
the work done by Flavio and see how we could use Split Stack 2 to
deploy Kubernetes with Ansible (eventually without Mistral calling
Heat calling Mistral calling Ansible).

* It might sound like we want to add more complexity to TripleO, but I
confirm James's goal, which is a common goal in the team, is to reduce
the number of tools used by TripleO. In other words, we hope we can
e.g. remove Puppet for managing configuration files (which could be
done by Ansible), remove some workflows usually done by Heat that could
be done by Ansible as well, etc. The idea of forming plans to use
Ansible is excellent, and we need to converge our efforts so we can
address some of our operators' feedback.

Thanks,
--
Emilien Macchi



Re: [openstack-dev] [all][tc] How to deal with confusion around "hosted projects"

2017-06-29 Thread Fox, Kevin M
Part of the confusion is around what is allowed to use the term openstack and
the various ways it's used.

We have software such as github.com/openstack/openstack-helm,

which is in the openstack namespace and has openstack in its title, but is not
under tc governance:
http://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml

But it has also stated it's an 'openstack project'.

(not trying to pick on openstack-helm here. just the most recent example I can 
think of that shows off the various ways something can/can't be "openstack" 
today)

So what's really unclear to end users is that when they talk about a piece of
"openstack" they may be talking about a great many things:
1. is it managed under the 4 opens
2. is it in github.com/openstack.
3. is it under openstack governance.
4. is it an 'openstack project' (what does this mean anymore. I thought that 
was #3, but maybe not?)
5. is "openstack" part of its title

Is a project part of openstack if it meets one of those? All of them? Or some
subset? If we can't answer that, I'm not sure users will ever understand it.

This is entirely separate from the issues of software maturity and the level
of integration with other openstack software, too. :/

Thanks,
Kevin


From: Tim Bell [tim.b...@cern.ch]
Sent: Thursday, June 29, 2017 12:25 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all][tc] How to deal with confusion around 
"hosted projects"

> On 29 Jun 2017, at 17:35, Chris Friesen  wrote:
>
> On 06/29/2017 09:23 AM, Monty Taylor wrote:
>
>> We are already WELL past where we can solve the problem you are describing.
>> Pandora's box has been opened - we have defined ourselves as an Open 
>> community.
>> Our only requirement to be official is that you behave as one of us. There is
>> nothing stopping those machine learning projects from becoming official. If 
>> they
>> did become official but were still bad software - what would we have solved?
>>
>> We have a long-time official project that currently has staffing problems. If
>> someone Googles for OpenStack DBaaS and finds Trove and then looks to see 
>> that
>> the contribution rate has fallen off recently they could get the impression 
>> that
>> OpenStack is a bunch of dead crap.
>>
>> Inclusion as an Official Project in OpenStack is not an indication that 
>> anyone
>> thinks the project is good quality. That's a decision we actively made. This 
>> is
>> the result.
>
> I wonder if it would be useful to have a separate orthogonal status as to 
> "level of stability/usefulness/maturity/quality" to help newcomers weed out 
> projects that are under TC governance but are not ready for prime time.
>

There is certainly a concern in the operator community as to how viable/useful
a project is (and how to determine this). Adopting too early makes for a very
difficult discussion with cloud users who rely on the function.

Can an ‘official’ project be deprecated? The economics say yes. The consumer 
confidence impact would be substantial.

However, home-grown solutions where there is common interest imply technical
debt in the long term.

Tim

> Chris
>



Re: [openstack-dev] [TripleO][keystone] Pt. 2 of Passing along some field feedback

2017-06-28 Thread Fox, Kevin M
I think everyone would benefit from a read-only role for keystone out of the
box. Can we get this into keystone rather than into the various distros?
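
As a hedged illustration of what an out-of-the-box default could look like
(oslo.policy YAML syntax; the role name and the handful of targets shown are
examples only, not a complete read-only policy):

# Example policy.yaml fragment: grant list/get operations to a "reader" role.
"readonly": "role:reader"
"identity:list_projects": "rule:admin_required or rule:readonly"
"identity:get_project": "rule:admin_required or rule:readonly"
"identity:list_users": "rule:admin_required or rule:readonly"
"identity:get_user": "rule:admin_required or rule:readonly"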

Thanks,
Kevin

From: Ben Nemec [openst...@nemebean.com]
Sent: Wednesday, June 28, 2017 12:06 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [TripleO] Pt. 2 of Passing along some field feedback

A few weeks later than I had planned, but here's the other half of the
field feedback I mentioned in my previous email:

* They very emphatically want in-place upgrades to work when moving from
non-containerized to containerized.  I think this is already the plan,
but I told them I'd make sure development was aware of the desire.

* There was also great interest in contributing back some of the custom
templates that they've had to write to get advanced features working in
the field.  Here again we recommended that they start with an RFE so
things could be triaged appropriately.  I'm hoping we can find some
developer time to help polish and shepherd these things through the
review process.

* Policy configuration was discussed, and I pointed them at some recent
work we have done around that:
https://docs.openstack.org/developer/tripleo-docs/advanced_deployment/api_policies.html
  I'm not sure it fully addressed their issues, but I suggested they
take a closer look and provide feedback on any ways it doesn't meet
their needs.

The specific use case they were looking at right now was adding a
read-only role.  They did provide me with a repo containing their
initial work, but unfortunately it's private to Red Hat so I can't share
it here.

* They wanted to be able to maintain separate role files instead of one
monolithic roles_data.yaml.  Apparently they have a pre-deploy script
now that essentially concatenates some individual files to get this
functionality.  I think this has already been addressed by
https://review.openstack.org/#/c/445687

* They've also been looking at ways to reorganize the templates in a
more intuitive fashion.  At first glance the changes seemed reasonable,
but they were still just defining the layout.  I don't know that they've
actually tried to use the reorganized templates yet and given the number
of relative paths in tht I suspect it may be a bigger headache than they
expect, but I thought it was interesting.  There may at least be
elements of this work that we can use to make the templates easier to
understand for deployers.

Thanks.

-Ben



Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-22 Thread Fox, Kevin M
No, I'm not necessarily advocating a monolithic approach.

I'm saying that they have decided to start with functionality and accept
what's needed to get the task done. There aren't really such strong walls
between the various pieces of functionality (rbac/secrets/kubelet/etc.). They
don't spawn off a whole new project just to add functionality; they do so only
when needed. They also don't balk at one feature depending on another.

Rbac is important, so they implemented it. SSL cert management was important,
so they added that. Adding a feature that restricts secret downloads to only
the physical nodes that need them could then reuse the rbac system and the ssl
cert management.

Their sigs are oriented more toward features/functionality (or categories
thereof), not so much toward specific components: we need to do X, and X may
involve changes to components A and B.

OpenStack now tends to start with A and B, and we try to work backwards
towards implementing X, which is hard due to the strong walls and unclear
ownership of the feature. And the general solution has been to try to make C
but not commit to C being in the core, so users can't depend on it, which
hasn't proven to be a very successful pattern.

You're right, they are breaking up their code base as needed, like nova did.
I'm coming around to that being a pretty good approach to some things. Starting
things is simpler, and if something ends up not needing its own whole project,
then it doesn't get one; if it needs one, then it gets one. It's not, by
default, "start a whole new project with db user, db schema, api, scheduler,
etc." And the project might not end up with daemons split up in exactly the way
you would expect if you prematurely broke off a project without knowing exactly
how it might integrate with everything else.

Maybe the porcelain api that's been discussed for a while is part of the
solution. Initial stuff can be prototyped/started there, then broken off into
separate projects as needed and moved around without the user needing to know
where it ends up.

You're right that OpenStack's scope is much greater, and I think that the
commons are even more important in that case. If it doesn't have a solid base,
every project has to re-implement its own base. That takes a huge amount of
manpower all around. It's not sustainable.

I guess we've gotten pretty far away from discussing Trove at this point.

Thanks,
Kevin

From: Jay Pipes [jaypi...@gmail.com]
Sent: Thursday, June 22, 2017 10:05 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

On 06/22/2017 11:59 AM, Fox, Kevin M wrote:
> My $0.02.
>
> That view of dependencies is why Kubernetes development is outpacing 
> OpenStacks and some users are leaving IMO. Not trying to be mean here but 
> trying to shine some light on this issue.
>
> Kubernetes at its core has essentially something kind of equivalent to 
> keystone (k8s rbac), nova (container mgmt), cinder (pv/pvc/storageclasses), 
> heat with convergence (deployments/daemonsets/etc), barbican (secrets), 
> designate (kube-dns), and octavia (kube-proxy,svc,ingress) in one unit. Ops 
> dont have to work hard to get all of it, users can assume its all there, and 
> devs don't have many silo's to cross to implement features that touch 
> multiple pieces.

I think it's kind of hysterical that you're advocating a monolithic
approach when the thing you're advocating (k8s) is all about enabling
non-monolithic microservices architectures.

Look, the fact of the matter is that OpenStack's mission is larger than
that of Kubernetes. And to say that "Ops don't have to work hard" to get
and maintain a Kubernetes deployment (which, frankly, tends to be dozens
of Kubernetes deployments, one for each tenant/project/namespace) is
completely glossing over the fact that by abstracting away the
infrastructure (k8s' "cloud provider" concept), Kubernetes developers
simply get to ignore some of the hardest and trickiest parts of operations.

So, let's try to compare apples to apples, shall we?

It sounds like the end goal that you're advocating -- more than anything
else -- is an easy-to-install package of OpenStack services that
provides a Kubernetes-like experience for application developers.

I 100% agree with that goal. 100%.

But pulling Neutron, Cinder, Keystone, Designate, Barbican, and Octavia
back into Nova is not the way to do that. You're trying to solve a
packaging and installation problem with a code structure solution.

In fact, if you look at the Kubernetes development community, you see
the *opposite* direction being taken: they have broken out and are
actively breaking out large pieces of the Kubernetes repository/codebase
into separate repositories and addons/plugins. And this is being done to
*accelerate* development of Kubernetes in very much the same way that
splitting services out of Nova was done to accelerat

Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-22 Thread Fox, Kevin M
My $0.02.

That view of dependencies is why Kubernetes development is outpacing
OpenStack's and some users are leaving, IMO. Not trying to be mean here, but
trying to shine some light on this issue.

Kubernetes at its core has essentially something kind of equivalent to
keystone (k8s rbac), nova (container mgmt), cinder (pv/pvc/storageclasses),
heat with convergence (deployments/daemonsets/etc.), barbican (secrets),
designate (kube-dns), and octavia (kube-proxy/svc/ingress) in one unit. Ops
don't have to work hard to get all of it, users can assume it's all there, and
devs don't have many silos to cross to implement features that touch multiple
pieces.

Having this core functionality combined has allowed them to land features that
are really important to users but that have proven difficult for OpenStack to
land because of the silos. OpenStack's general pattern has been: stand up a new
service for a new feature; then no one wants to depend on it, so it's ignored
and each silo reimplements a lesser version of it itself.

The OpenStack commons then continues to suffer.

We need to stop this destructive cycle.

OpenStack needs to figure out how to increase its commons. Both internally and 
externally. etcd as a common service was a step in the right direction.

I think k8s needs to be another common service all the others can rely on.
That could greatly simplify the rest of the OpenStack projects, as a lot of
that functionality would no longer have to be implemented in each project.

We also need a way to break down the silo walls and allow more cross project 
collaboration for features. I fear the new push for letting projects run 
standalone will make this worse, not better, further fracturing OpenStack.

Thanks,
Kevin

From: Thierry Carrez [thie...@openstack.org]
Sent: Thursday, June 22, 2017 12:58 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

Fox, Kevin M wrote:
> [...]
> If you build a Tessmaster clone just to do mariadb, then you share nothing 
> with the other communities and have to reinvent the wheel, yet again. 
> Operators load increases because the tool doesn't function like other tools.
>
> If you rely on a container orchestration engine that's already cross cloud 
> that can be easily deployed by user or cloud operator, and fill in the gaps 
> with what Trove wants to support, easy management of db's, you get to reuse a 
> lot of the commons and the users slight increase in investment in dealing 
> with the bit of extra plumbing in there allows other things to also be easily 
> added to their cluster. Its very rare that a user would need to deploy/manage 
> only a database. The net load on the operator decreases, not increases.

I think the user-side tool could totally deploy on Kubernetes clusters
-- if that were the only possible target it would make it a Kubernetes
tool more than an open infrastructure tool, but that's definitely a
possibility. I'm not sure work is needed there though; there are already
tools (or charts) doing that?

For a server-side approach where you want to provide a DB-provisioning
API, I fear that making the functionality depend on K8s would mean
TroveV2/Hoard would not only depend on Heat and Nova, but also on
something that would deploy a Kubernetes cluster (Magnum?), which would
likely hurt its adoption (and reusability in simpler setups). Since
databases would just work perfectly well in VMs, it feels like a
gratuitous dependency addition ?

We generally need to be very careful about creating dependencies between
OpenStack projects. On one side there are base services (like Keystone)
that we said it was alright to depend on, but depending on anything else
is likely to reduce adoption. Magnum adoption suffers from its
dependency on Heat. If Heat starts depending on Zaqar, we make the
problem worse. I understand it's a hard trade-off: you want to reuse
functionality rather than reinvent it in every project... we just need
to recognize the cost of doing that.

--
Thierry Carrez (ttx)



Re: [openstack-dev] Required Ceph rbd image features

2017-06-21 Thread Fox, Kevin M
Has anyone else seen a problem with kernel rbd when ceph isn't fully up at the
time a kernel rbd mount is attempted?

The mount blocks as it should, but if ceph takes too long to start, it
eventually enters a D state forever, even though ceph comes up happy. It's like
it times out and stops trying. Only a forced reboot will solve it. :/

Is this a known issue?

Thanks,
Kevin


From: Jason Dillaman [jdill...@redhat.com]
Sent: Wednesday, June 21, 2017 12:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] Required Ceph rbd image features

On Wed, Jun 21, 2017 at 12:32 PM, Jon Bernard  wrote:
> I suspect you'd want to enable layering at minimum.

I'd agree that layering is probably the most you'd want to enable for
krbd-use cases as of today. The v4.9 kernel added support for
exclusive-lock, but that probably doesn't provide much additional
benefit at this point. The striping v2 feature is still not supported
by krbd for non-basic stripe count/unit settings.



Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-21 Thread Fox, Kevin M
There already are user-side tools for deploying plumbing onto your own cloud,
stuff like Tessmaster itself.

I think the win is being able to extend that k8s cluster with the ability to
declaratively request database clusters and manage them.

It's all about the commons.

If you build a Tessmaster clone just to do mariadb, then you share nothing
with the other communities and have to reinvent the wheel, yet again. The
operators' load increases because the tool doesn't function like other tools.

If you rely on a container orchestration engine that's already cross-cloud,
that can be easily deployed by a user or a cloud operator, and fill in the gaps
with what Trove wants to support (easy management of dbs), you get to reuse a
lot of the commons, and the user's slight increase in investment in dealing
with the bit of extra plumbing allows other things to also be easily added to
their cluster. It's very rare that a user would need to deploy/manage only a
database. The net load on the operator decreases, not increases.

Look at helm apps for some examples. They handle complex web applications that
have web tiers, database tiers, etc., but they currently suffer from a lack of
good support for clustered databases. In the end, the majority of users care
about "helm install my_scalable_app" kinds of things rather than installing all
the pieces by hand. It's a pain.

OpenStack itself has this issue. It has lots of api tiers and db tiers. If
Trove were a k8s operator, OpenStack on k8s could use it to deploy the rest of
OpenStack. Even more sharing.
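
To make the "Trove as a k8s operator" idea concrete, here is a hedged sketch
of the kind of declarative resource such an operator could reconcile; the API
group, kind and fields are entirely hypothetical:

apiVersion: trove.openstack.org/v1alpha1   # hypothetical API group
kind: MariaDBCluster                       # hypothetical kind
metadata:
  name: keystone-db
spec:
  replicas: 3
  version: "10.1"
  storage:
    size: 50Gi
    storageClassName: ceph-rbd
  backup:
    schedule: "0 2 * * *"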

Thanks,
Kevin

From: Thierry Carrez [thie...@openstack.org]
Sent: Wednesday, June 21, 2017 1:52 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

Zane Bitter wrote:
> [...]
> Until then it seems to me that the tradeoff is between decoupling it
> from the particular cloud it's running on so that users can optionally
> deploy it standalone (essentially Vish's proposed solution for the *aaS
> services from many moons ago) vs. decoupling it from OpenStack in
> general so that the operator has more flexibility in how to deploy.
>
> I'd love to be able to cover both - from a user using it standalone to
> spin up and manage a DB in containers on a shared PaaS, through to a
> user accessing it as a service to provide a DB running on a dedicated VM
> or bare metal server, and everything in between. I don't know if such a
> thing is feasible. I suspect we're going to have to talk a lot about VMs
> and network plumbing and volume storage :)

As another data point, we are seeing this very same tradeoff with Magnum
vs. Tessmaster (with "I want to get a Kubernetes cluster" rather than "I
want to get a database").

Tessmaster is the user-side tool from eBay for deploying Kubernetes on
different underlying cloud infrastructures: it takes a bunch of cloud
credentials, then deploys, grows and shrinks a Kubernetes cluster for you.

Magnum is the infrastructure-side tool from OpenStack giving you
COE-as-a-service, through a provisioning API.

Jay is advocating for Trove to be more like Tessmaster, and less like
Magnum. I think I agree with Zane that those are two different approaches:

From a public cloud provider perspective serving lots of small users, I
think a provisioning API makes sense. The user in that case is in a
"black box" approach, so I think the resulting resources should not
really be accessible as VMs by the tenant, even if they end up being
Nova VMs. The provisioning API could propose several options (K8s or
Mesos, MySQL or PostgreSQL).

From a private cloud / hybrid cloud / large cloud user perspective, the
user-side deployment tool, letting you deploy the software on various
types of infrastructure, probably makes more sense. It's probably more
work to run it, but you gain in flexibility. That user-side tool would
probably not support multiple options, but be application-specific.

So yes, ideally we would cover both. Because they target different
users, and both are right...

--
Thierry Carrez (ttx)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

2017-06-19 Thread Fox, Kevin M
Thanks for starting this difficult discussion.

I think I agree with all the lessons learned except the Nova one. While you 
can treat containers and VMs the same, after years of using both, I really 
don't think it's a good idea to treat them equally. Containers can't work 
properly if used as a VM. (Really, really.)

I agree wholeheartedly with your statement that it's mostly an orchestration 
problem and that we should reuse existing stuff now that there are options.

I would propose the following that I think meets your goals and could widen 
your contributor base substantially:

Look at the Kubernetes (k8s) concept of Operator -> 
https://coreos.com/blog/introducing-operators.html

They allow application-specific logic to be added to Kubernetes while reusing 
the rest of k8s to do what it's good at: container orchestration. etcd is just 
a clustered database, and if the operator concept works for it, it should also 
work for other databases such as Galera.

The place where the containers/VM approaches are incompatible is exactly the 
thing that I think will make Trove's life easier. You can think of a member of 
a database cluster as a few different components, such as:
 * main database process
 * metrics gatherer (such as https://github.com/prometheus/mysqld_exporter)
 * trove_guest_agent

With the current approach, all of these are mixed into the same VM image, 
making it very difficult to update the trove_guest_agent without touching the 
main database process (needed when you upgrade the Trove controllers). With 
the k8s sidecar concept, each would be a separate container loaded into the 
same pod.
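
As a rough sketch of that pod shape (the container names, image names and tags
below are made up for illustration), using the Kubernetes Python client:

    # One pod, three containers: the database, a metrics exporter sidecar and
    # the trove guest agent sidecar.
    from kubernetes import client

    db_member = client.V1Pod(
        metadata=client.V1ObjectMeta(name='trove-db-0',
                                     labels={'app': 'trove-db'}),
        spec=client.V1PodSpec(containers=[
            client.V1Container(name='mariadb', image='mariadb:10.1'),
            client.V1Container(name='metrics',
                               image='prom/mysqld-exporter:latest'),
            client.V1Container(name='guest-agent',
                               image='example/trove-guest-agent:latest'),
        ]))
    # e.g. client.CoreV1Api().create_namespaced_pod('trove', db_member)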

So rather than needing to maintain a Trove image for every possible 
combination of DB version, Trove version, etc., you can reuse upstream 
database containers along with Trove-provided guest agents.

There's a secure channel between kube-apiserver and kubelet, so you can reuse 
it for secure communications; no need to add anything new. The Trove engine 
would just do the equivalent of: kubectl exec x-db -c guest_agent some command.
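
As a very rough illustration only (the pod, namespace, container and command
names below are made up), the same channel can be driven from Python via the
Kubernetes client:

    # Run a command in the guest-agent sidecar over the
    # kube-apiserver/kubelet channel.
    from kubernetes import client, config
    from kubernetes.stream import stream

    config.load_kube_config()      # or load_incluster_config() inside a pod
    api = client.CoreV1Api()
    output = stream(api.connect_get_namespaced_pod_exec,
                    name='x-db-0',                # hypothetical pod name
                    namespace='trove',            # hypothetical namespace
                    container='guest-agent',      # hypothetical sidecar name
                    command=['trove-guest-agent', '--version'],
                    stderr=True, stdin=False, stdout=True, tty=False)
    print(output)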

There is k8s federation, so if the operator was started at the federation 
level, it can cross multiple OpenStack regions.

Another big feature that hasn't been mentioned yet, and that I think is 
critical: in our performance tests, databases in VMs have never performed 
particularly well. Using k8s as a base, bare metal nodes could be pulled in 
easily, with dedicated disks or SSDs that the pods land on, very close to the 
database. This should give native performance.

So, my suggestion would be to strongly consider basing Trove v2 on Kubernetes. 
It can provide a huge bang for the buck, simplifying the Trove architecture 
substantially while gaining the new features you list as being important. The 
Trove v2 OpenStack API could be exposed as a very thin wrapper over k8s Third 
Party Resources (TPR), which would make Trove entirely stateless; k8s 
maintains all state for everything in etcd.

Please consider this architecture.

Thanks,
Kevin


From: Amrith Kumar [amrith.ku...@gmail.com]
Sent: Sunday, June 18, 2017 4:35 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [trove][all][tc] A proposal to rearchitect Trove

Trove has evolved rapidly over the past several years, since integration in 
IceHouse when it only supported single instances of a few databases. Today it 
supports a dozen databases including clusters and replication.

The user survey [1] indicates that while there is strong interest in the 
project, there are few large production deployments that are known of (by the 
development team).

Recent changes in the OpenStack community at large (company realignments, 
acquisitions, layoffs) and the Trove community in particular, coupled with a 
mounting burden of technical debt have prompted me to make this proposal to 
re-architect Trove.

This email summarizes several of the issues that face the project, both 
structurally and architecturally. This email does not claim to include a 
detailed specification for what the new Trove would look like, merely the 
recommendation that the community should come together and develop one so that 
the project can be sustainable and useful to those who wish to use it in the 
future.

TL;DR

Trove, with support for a dozen or so databases today, finds itself in a bind 
because there are few developers, and a code-base with a significant amount of 
technical debt.

Some architectural choices which the team made over the years have consequences 
which make the project less than ideal for deployers.

Given that there are no major production deployments of Trove at present, this 
provides us an opportunity to reset the project, learn from our v1 and come up 
with a strong v2.

An important aspect of making this proposal work is that we seek to eliminate 
the effort (planning, and coding) involved in migrating existing Trove v1 
deployments to the proposed Trove v2. Effectively, with work beginning on Trove 
v2 as 

Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-12 Thread Fox, Kevin M
"Otherwise, -onetime will need to launch new containers each config change." 
You say that like its a bad thing

That sounds like a good feature to me. atomic containers. You always know the 
state of the system. As an Operator, I want to know which containers have the 
new config, which have the old, and which are stuck transitioning so I can fix 
brokenness. If its all hidden inside the containers, its much harder to Operate.

Thanks,
Kevin

From: Paul Belanger [pabelan...@redhat.com]
Sent: Friday, June 09, 2017 10:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] 
[helm] Configuration management with etcd / confd

On Fri, Jun 09, 2017 at 04:52:25PM +, Flavio Percoco wrote:
> On Fri, Jun 9, 2017 at 11:30 AM Britt Houser (bhouser) 
> wrote:
>
> > How does confd run inside the container?  Does this mean we’d need some
> > kind of systemd in every container which would spawn both confd and the
> > real service?  That seems like a very large architectural change.  But
> > maybe I’m misunderstanding it.
> >
> >
> Copying part of my reply to Doug's email:
>
> 1. Run confd + openstack service inside the container. My concern in this
> case
> would be that we'd have to run 2 services inside the container and structure
> things in a way we can monitor both services and make sure they are both
> running. Nothing impossible but one more thing to do.
>
> 2. Run confd `-onetime` and then run the openstack service.
>
>
> In either case, we could run confd as part of the entrypoint and have it run
> in
> background for the case #1 or just run it sequentially for case #2.
>
Both approaches are valid; it all depends on your use case. I suspect in the
case of OpenStack, you'll be running 2 daemons in your containers. Otherwise,
-onetime will need to launch new containers on each config change.

>
> > Thx,
> > britt
> >
> > On 6/9/17, 9:04 AM, "Doug Hellmann"  wrote:
> >
> > Excerpts from Flavio Percoco's message of 2017-06-08 22:28:05 +:
> >
> > > Unless I'm missing something, to use confd with an OpenStack
> > deployment on
> > > k8s, we'll have to do something like this:
> > >
> > > * Deploy confd in every node where we may want to run a pod
> > > (basically
> > > every node)
> >
> > Oh, no, no. That's not how it works at all.
> >
> > confd runs *inside* the containers. Its input files and command line
> > arguments tell it how to watch for the settings to be used just for
> > that
> > one container instance. It does all of its work (reading templates,
> > watching settings, HUPing services, etc.) from inside the container.
> >
> > The only inputs confd needs from outside of the container are the
> > connection information to get to etcd. Everything else can be put
> > in the system package for the application.
> >
> > Doug
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> > __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-12 Thread Fox, Kevin M
+1 for putting confd in a sidecar with shared namespaces. Much more k8s-native.

Still generally -1 on the approach of using confd instead of ConfigMaps. You 
lose all the atomicity that k8s provides with Deployments. It breaks 
upgrade/downgrade behavior.

Would it be possible to have confd run in k8s, generate the configmaps, and 
push them to k8s? That might be even more k8s native.

Thanks,
Kevin

From: Bogdan Dobrelya [bdobr...@redhat.com]
Sent: Monday, June 12, 2017 1:07 AM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] 
[helm] Configuration management with etcd / confd

On 09.06.2017 18:51, Flavio Percoco wrote:
>
>
> On Fri, Jun 9, 2017 at 8:07 AM Doug Hellmann  > wrote:
>
> Excerpts from Flavio Percoco's message of 2017-06-08 22:28:05 +:
>
> > Unless I'm missing something, to use confd with an OpenStack
> deployment on
> > k8s, we'll have to do something like this:
> >
> > * Deploy confd in every node where we may want to run a pod (basically
> > every node)
>
> Oh, no, no. That's not how it works at all.
>
> confd runs *inside* the containers. Its input files and command line
> arguments tell it how to watch for the settings to be used just for that
> one container instance. It does all of its work (reading templates,
> watching settings, HUPing services, etc.) from inside the container.
>
> The only inputs confd needs from outside of the container are the
> connection information to get to etcd. Everything else can be put
> in the system package for the application.
>
>
> A-ha, ok! I figured this was another option. In this case I guess we
> would have 2 options:
>
> 1. Run confd + openstack service inside the container. My concern in
> this case
> would be that we'd have to run 2 services inside the container and structure
> things in a way we can monitor both services and make sure they are both
> running. Nothing impossible but one more thing to do.
>
> 2. Run confd `-onetime` and then run the openstack service.
>

A sidecar confd container running in a shared pod, sharing a PID namespace
with the managed service, would look much more container-ish. confd could then
still HUP the service or signal it to be restarted without baking itself into
the container image. We have to deal with the Pod abstraction anyway, as we
want to be prepared for future integration with k8s.
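
A tiny sketch of the signalling half of that idea (the process name below is
an assumption, and it presumes the pod really does share a PID namespace):

    # From the confd sidecar: find the managed service by name in /proc and
    # send it SIGHUP so it reloads its just-rendered config.
    import os
    import signal

    def hup_service(process_name='nova-api'):    # hypothetical service name
        for pid in filter(str.isdigit, os.listdir('/proc')):
            try:
                with open('/proc/%s/comm' % pid) as comm:
                    if comm.read().strip() == process_name:
                        os.kill(int(pid), signal.SIGHUP)
            except (IOError, OSError):
                pass    # the process went away between listing and reading

    hup_service()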

>
> Either would work but #2 means we won't have config files monitored and the
> container would have to be restarted to update the config files.
>
> Thanks, Doug.
> Flavio
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>


--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Fox, Kevin M
Flavio: I think you're right. k8s ConfigMaps and confd are doing very similar 
things. The one thing confd seems to add is dynamic templating on the host 
side. That can still be accomplished in k8s with a sidecar watching for config 
changes, with the templating engine in it and an emptyDir, or statically with 
an init container and an emptyDir (kolla-kubernetes does the latter).

But for k8s, I actually prefer a fully atomic container config model, where 
you do a rolling upgrade any time you want to make a ConfigMap change. k8s 
gives you the plumbing to do that, and you can more easily roll 
forward/backward, which gives you versioning too.
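
For what it's worth, a rough sketch of that pattern (the Deployment name,
namespace and annotation key below are made up): hash the rendered config,
stamp it into the pod template, and let the Deployment roll:

    # Trigger a rolling update whenever the config content changes.
    import hashlib
    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    with open('keystone.conf') as f:              # hypothetical config file
        checksum = hashlib.sha256(f.read().encode()).hexdigest()

    patch = {'spec': {'template': {'metadata': {
        'annotations': {'example.org/config-hash': checksum}}}}}
    apps.patch_namespaced_deployment('keystone-api', 'openstack', patch)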

So, I think you're right. etcd/confd is more suited to the non-k8s deployments.

Thanks,
Kevin

From: Flavio Percoco [fla...@redhat.com]
Sent: Thursday, June 08, 2017 3:28 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] 
[helm] Configuration management with etcd / confd



On Thu, Jun 8, 2017, 19:14 Doug Hellmann 
> wrote:
Excerpts from Flavio Percoco's message of 2017-06-08 18:27:51 +0200:
> On 08/06/17 18:23 +0200, Flavio Percoco wrote:
> >On 07/06/17 12:04 +0200, Bogdan Dobrelya wrote:
> >>On 06.06.2017 18:08, Emilien Macchi wrote:
> >>>Another benefit is that confd will generate a configuration file when
> >>>the application will start. So if etcd is down *after* the app
> >>>startup, it shouldn't break the service restart if we don't ask confd
> >>>to re-generate the config. It's good for operators who were concerned
> >>>about the fact the infrastructure would rely on etcd. In that case, we
> >>>would only need etcd at the initial deployment (and during lifecycle
> >>>actions like upgrades, etc).
> >>>
> >>>The downside is that in the case of containers, they would still have
> >>>a configuration file within the container, and the whole goal of this
> >>>feature was to externalize configuration data and stop having
> >>>configuration files.
> >>
> >>It doesn't look a strict requirement. Those configs may (and should) be
> >>bind-mounted into containers, as hostpath volumes. Or, am I missing
> >>something what *does* make embedded configs a strict requirement?..
> >
> >mmh, one thing I liked about this effort was possibility of stop 
> >bind-mounting
> >config files into the containers. I'd rather find a way to not need any
> >bindmount and have the services get their configs themselves.
>
> Probably sent too early!
>
> If we're not talking about OpenStack containers running in a COE, I guess this
> is fine. For k8s based deployments, I think I'd prefer having installers
> creating configmaps directly and use that. The reason is that depending on 
> files
> that are in the host is not ideal for these scenarios. I hate this idea 
> because
> it makes deployments inconsistent and I don't want that.
>
> Flavio
>

I'm not sure I understand how a configmap is any different from what is
proposed with confd in terms of deployment-specific data being added to
a container before it launches. Can you elaborate on that?


Unless I'm missing something, to use confd with an OpenStack deployment on k8s, 
we'll have to do something like this:

* Deploy confd in every node where we may want to run a pod (basically every 
node)
* Configure it to download all configs from etcd locally (we won't be able to 
download just some of them because we don't know what services may run in 
specific nodes. Except, perhaps, in the case of compute nodes and some other 
similar nodes)
* Enable hostpath volumes (iirc it's disabled by default) so that we can mount 
these files in the pod
* Run the pods and mount the files assuming the files are there.

All of the above is needed because  confd syncs files locally from etcd. Having 
a centralized place to manage these configs allows for controlling the 
deployment better. For example, if a configmap doesn't exist, then stop 
everything.

Not trying to be negative but rather explain why I think confd may not work 
well for the k8s based deployments. I think it's a good fit for the rest of the 
deployments.

Am I missing something? Am I overcomplicating things?

Flavio
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Fox, Kevin M
Because tools to manipulate JSON and/or YAML are very common.

Tools to manipulate a pseudo-INI file format that isn't standards-compliant 
are not. :/

Thanks,
Kevin

From: Doug Hellmann [d...@doughellmann.com]
Sent: Thursday, June 08, 2017 1:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] 
[helm] Configuration management with etcd / confd

> On Jun 8, 2017, at 4:29 PM, Fox, Kevin M <kevin@pnnl.gov> wrote:
>
> That is possible. But, a yaml/json driver might still be good, regardless of 
> the mechanism used to transfer the file.
>
> So the driver abstraction still might be useful.

Why would it be useful to have oslo.config read files in more than one format?


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-08 Thread Fox, Kevin M
See the footer at the bottom of this email.

From: jimi olugboyega [jimiolugboy...@gmail.com]
Sent: Thursday, June 08, 2017 1:19 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [all] etcd3 as base service - update

Hello all,

I am wondering how I can unsubscribe from this mailing list.

Regards,
Olujimi Olugboyega.

On Wed, Jun 7, 2017 at 3:47 AM, Davanum Srinivas 
> wrote:
Team,

Here's the update to the base services resolution from the TC:
https://governance.openstack.org/tc/reference/base-services.html

First request is to Distros, Packagers, Deployers, anyone who
installs/configures OpenStack:
Please make sure you have latest etcd 3.x available in your
environment for Services to use, Fedora already does, we need help in
making sure all distros and architectures are covered.

Any project that wants to use the etcd v3 API via gRPC, please use:
https://pypi.python.org/pypi/etcd3 (works only for non-eventlet services)

Those that depend on eventlet, please use the etcd3 v3alpha HTTP API using:
https://pypi.python.org/pypi/etcd3gw

If you use tooz, there are 2 driver choices for you:
https://github.com/openstack/tooz/blob/master/setup.cfg#L29
https://github.com/openstack/tooz/blob/master/setup.cfg#L30

If you use oslo.cache, there is a driver for you:
https://github.com/openstack/oslo.cache/blob/master/setup.cfg#L33

Devstack installs etcd3 by default and points cinder to it:
http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/etcd3
http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/cinder#n356

Review in progress for keystone to use etcd3 for caching:
https://review.openstack.org/#/c/469621/

Doug is working on proposal(s) for oslo.config to store some
configuration in etcd3:
https://review.openstack.org/#/c/454897/

So, feel free to turn on / test with etcd3 and report issues.
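
For a quick feel of the client libraries above, a rough sketch (the endpoint,
keys and member name below are made up):

    # Direct gRPC client (non-eventlet services).
    import etcd3
    from tooz import coordination

    etcd = etcd3.client(host='127.0.0.1', port=2379)
    etcd.put('/config/keystone/debug', 'true')
    value, metadata = etcd.get('/config/keystone/debug')

    # The same etcd can back tooz for coordination/locking.
    coord = coordination.get_coordinator('etcd3://127.0.0.1:2379', b'node-1')
    coord.start()
    with coord.get_lock(b'my-lock'):
        pass    # critical section
    coord.stop()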

Thanks,
Dims

--
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Fox, Kevin M
There are two issues conflated here maybe?

The first is a mechanism to use oslo.config to dump out example settings that 
could be loaded into a reference ConfigMap or etcd or something. I think there 
is a PS up for that.

The other is a way to get the data back into oslo.config.

etcd is one way. Using a ConfigMap to ship a file into a container, to be read 
by oslo.config with a JSON/YAML/INI file driver, is another.
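
As a purely illustrative sketch of the first mechanism (the options and the
ConfigMap name below are just examples, not the actual patch set):

    # Register a couple of options with oslo.config and dump them as YAML
    # that an installer could load into a ConfigMap (or push into etcd).
    import yaml
    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    conf.register_opts([
        cfg.BoolOpt('debug', default=False),
        cfg.StrOpt('rpc_backend', default='rabbit'),
    ])
    conf([])    # parse an empty command line so defaults are resolved

    configmap = {
        'apiVersion': 'v1',
        'kind': 'ConfigMap',
        'metadata': {'name': 'keystone-config'},
        'data': {
            'debug': str(conf.debug),
            'rpc_backend': conf.rpc_backend,
        },
    }
    print(yaml.safe_dump(configmap, default_flow_style=False))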

Thanks,
Kevin

From: Emilien Macchi [emil...@redhat.com]
Sent: Thursday, June 08, 2017 1:20 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] 
[helm] Configuration management with etcd / confd

On Thu, Jun 8, 2017 at 8:49 PM, Doug Hellmann  wrote:
> Excerpts from Steven Dake (stdake)'s message of 2017-06-08 17:51:48 +:
>> Doug,
>>
>> In short, a configmap takes a bunch of config files, bundles them in a 
>> kubernetes object called a configmap, and then ships them to etcd.  When a 
>> pod is launched, the pod mounts the configmaps using tmpfs and the raw 
>> config files are available for use by the openstack services.
>
> That sounds like what confd does. Something puts data into one of
> several possible databases. confd takes it out and writes it to
> file(s) when the container starts. The app in the container reads
> the file(s).
>
> It sounds like configmaps would work well, too, it just doesn't
> sound like a fundamentally different solution.

Sorry for my lack of knowledge in ConfigMap but I'm trying to see how
we could bring pieces together.
Doug and I are currently investigating how oslo.config can be useful
to generate the parameters loaded by the application at startup,
without having to manage config with Puppet or Ansible.

If I understand correctly (and if not, please correct me, and maybe
propose something), we could use oslo.config to generate a portion of
ConfigMap (that can be imported in another ConfigMap iiuc) where we
would have parameters for one app.

Example with Keystone:

  apiVersion: v1
  kind: ConfigMap
  metadata:
name: keystone-config
namespace: DEFAULT
  data:
debug: true
rpc_backend: rabbit
... (parameters generated by oslo.config, and data fed by installers)

So iiuc we would give this file to k8s when deploying pods. Parameter
values would be automatically pushed into etcd, and used when
generating the configuration. Am I correct? (I need to understand if
we need to manually manage etcd key/values).

In that case, what would deployment tools (like Kolla, TripleO, etc.) expect
OpenStack to provide? Tooling in oslo.config to generate ConfigMaps, etc.?

Thanks for your help,

> Doug
>
>>
>> Operating on configmaps is much simpler and safer than using a different 
>> backing database for the configuration data.
>>
>> Hope the information helps.
>>
>> Ping me in #openstack-kolla if you have more questions.
>>
>> Regards
>> -steve
>>
>> -Original Message-
>> From: Doug Hellmann 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>> 
>> Date: Thursday, June 8, 2017 at 10:12 AM
>> To: openstack-dev 
>> Subject: Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] 
>>[helm] Configuration management with etcd / confd
>>
>> Excerpts from Flavio Percoco's message of 2017-06-08 18:27:51 +0200:
>> > On 08/06/17 18:23 +0200, Flavio Percoco wrote:
>> > >On 07/06/17 12:04 +0200, Bogdan Dobrelya wrote:
>> > >>On 06.06.2017 18:08, Emilien Macchi wrote:
>> > >>>Another benefit is that confd will generate a configuration file 
>> when
>> > >>>the application will start. So if etcd is down *after* the app
>> > >>>startup, it shouldn't break the service restart if we don't ask 
>> confd
>> > >>>to re-generate the config. It's good for operators who were 
>> concerned
>> > >>>about the fact the infrastructure would rely on etcd. In that case, 
>> we
>> > >>>would only need etcd at the initial deployment (and during lifecycle
>> > >>>actions like upgrades, etc).
>> > >>>
>> > >>>The downside is that in the case of containers, they would still 
>> have
>> > >>>a configuration file within the container, and the whole goal of 
>> this
>> > >>>feature was to externalize configuration data and stop having
>> > >>>configuration files.
>> > >>
>> > >>It doesn't look a strict requirement. Those configs may (and should) 
>> be
>> > >>bind-mounted into containers, as hostpath volumes. Or, am I missing
>> > >>something what *does* make embedded configs a strict requirement?..
>> > >
>> > >mmh, one thing I liked about this effort was possibility of stop 
>> bind-mounting
>> > >config files into the containers. I'd rather find a way to not need 
>> any
>> > >bindmount and 

Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Fox, Kevin M
That is possible. But, a yaml/json driver might still be good, regardless of 
the mechanism used to transfer the file.

So the driver abstraction still might be useful.

Thanks,
Kevin

From: Doug Hellmann [d...@doughellmann.com]
Sent: Thursday, June 08, 2017 1:19 PM
To: openstack-dev
Subject: Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla]
[helm] Configuration management with etcd / confd

Excerpts from Fox, Kevin M's message of 2017-06-08 20:08:25 +:
> Yeah, I think k8s configmaps might be a good config mechanism for k8s based 
> openstack deployment.
>
> One feature that might help which is related to the etcd plugin would be a 
> yaml/json plugin. It would allow more native looking configmaps.

We have at least 2 mechanisms for getting config files into containers
without such significant changes to oslo.config.  At this point I'm
not sure it's necessary to do the driver work at all.

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] etcd3 as base service - update

2017-06-08 Thread Fox, Kevin M
Hmm... a very interesting question.

I would think control plane only.

Thanks,
Kevin

From: Drew Fisher [drew.fis...@oracle.com]
Sent: Thursday, June 08, 2017 1:07 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [all] etcd3 as base service - update

On 6/7/17 4:47 AM, Davanum Srinivas wrote:
> Team,
>
> Here's the update to the base services resolution from the TC:
> https://governance.openstack.org/tc/reference/base-services.html
>
> First request is to Distros, Packagers, Deployers, anyone who
> installs/configures OpenStack:
> Please make sure you have latest etcd 3.x available in your
> environment for Services to use, Fedora already does, we need help in
> making sure all distros and architectures are covered.

As a Solaris OpenStack dev, I have a questions about this change.

If Solaris were to *only* run the nova-compute service, and leave the
rest of the OpenStack services to Linux, is etcd 3.x required on the
compute node for Pike+ ?

Thanks!

-Drew



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla] [helm] Configuration management with etcd / confd

2017-06-08 Thread Fox, Kevin M
Yeah, I think k8s ConfigMaps might be a good config mechanism for k8s-based 
OpenStack deployments.

One feature that might help, related to the etcd plugin, would be a YAML/JSON 
plugin. It would allow more native-looking ConfigMaps.

Thanks,
Kevin

From: Doug Hellmann [d...@doughellmann.com]
Sent: Thursday, June 08, 2017 11:49 AM
To: openstack-dev
Subject: Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla]
[helm] Configuration management with etcd / confd

Excerpts from Steven Dake (stdake)'s message of 2017-06-08 17:51:48 +:
> Doug,
>
> In short, a configmap takes a bunch of config files, bundles them in a 
> kubernetes object called a configmap, and then ships them to etcd.  When a 
> pod is launched, the pod mounts the configmaps using tmpfs and the raw config 
> files are available for use by the openstack services.

That sounds like what confd does. Something puts data into one of
several possible databases. confd takes it out and writes it to
file(s) when the container starts. The app in the container reads
the file(s).

It sounds like configmaps would work well, too, it just doesn't
sound like a fundamentally different solution.

Doug

>
> Operating on configmaps is much simpler and safer than using a different 
> backing database for the configuration data.
>
> Hope the information helps.
>
> Ping me in #openstack-kolla if you have more questions.
>
> Regards
> -steve
>
> -Original Message-
> From: Doug Hellmann 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> 
> Date: Thursday, June 8, 2017 at 10:12 AM
> To: openstack-dev 
> Subject: Re: [openstack-dev] [deployment] [oslo] [ansible] [tripleo] [kolla]  
>   [helm] Configuration management with etcd / confd
>
> Excerpts from Flavio Percoco's message of 2017-06-08 18:27:51 +0200:
> > On 08/06/17 18:23 +0200, Flavio Percoco wrote:
> > >On 07/06/17 12:04 +0200, Bogdan Dobrelya wrote:
> > >>On 06.06.2017 18:08, Emilien Macchi wrote:
> > >>>Another benefit is that confd will generate a configuration file when
> > >>>the application will start. So if etcd is down *after* the app
> > >>>startup, it shouldn't break the service restart if we don't ask confd
> > >>>to re-generate the config. It's good for operators who were concerned
> > >>>about the fact the infrastructure would rely on etcd. In that case, 
> we
> > >>>would only need etcd at the initial deployment (and during lifecycle
> > >>>actions like upgrades, etc).
> > >>>
> > >>>The downside is that in the case of containers, they would still have
> > >>>a configuration file within the container, and the whole goal of this
> > >>>feature was to externalize configuration data and stop having
> > >>>configuration files.
> > >>
> > >>It doesn't look a strict requirement. Those configs may (and should) 
> be
> > >>bind-mounted into containers, as hostpath volumes. Or, am I missing
> > >>something what *does* make embedded configs a strict requirement?..
> > >
> > >mmh, one thing I liked about this effort was possibility of stop 
> bind-mounting
> > >config files into the containers. I'd rather find a way to not need any
> > >bindmount and have the services get their configs themselves.
> >
> > Probably sent too early!
> >
> > If we're not talking about OpenStack containers running in a COE, I 
> guess this
> > is fine. For k8s based deployments, I think I'd prefer having installers
> > creating configmaps directly and use that. The reason is that depending 
> on files
> > that are in the host is not ideal for these scenarios. I hate this idea 
> because
> > it makes deployments inconsistent and I don't want that.
> >
> > Flavio
> >
>
> I'm not sure I understand how a configmap is any different from what is
> proposed with confd in terms of deployment-specific data being added to
> a container before it launches. Can you elaborate on that?
>
> Doug
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
